id: 250608709 | source: pes2o/s2orc | version: v3-fos-license
Endometrial glycogen metabolism during early pregnancy in mice
Abstract Glucose is critical during early pregnancy. The uterus can store glucose as glycogen, but uterine glycogen metabolism is poorly understood. This study analyzed glycogen storage and the localization of glycogen metabolizing enzymes from proestrus until implantation in the murine uterus. Quantification of diastase-labile periodic acid–Schiff (PAS) staining showed that glycogen in the glandular epithelium decreased 71.4% at 1.5 days postcoitum (DPC) and 62.13% at DPC 3.5 compared to proestrus. In the luminal epithelium, glycogen was highest at proestrus, decreasing 46.2% at DPC 1.5 and 63.2% at DPC 3.5. Immunostaining showed that before implantation, glycogen metabolizing enzymes were primarily localized to the glandular and luminal epithelium. Stromal glycogen was low from proestrus to DPC 3.5. However, at DPC 5.5 implantation sites, stromal glycogen levels increased sevenfold. Similarly, artificial decidualization resulted in a fivefold increase in glycogen levels. In both models, decidualization increased expression of glycogen synthase, as determined by immunohistochemistry and western blot. In conclusion, glycogen levels decreased in the uterine epithelium before implantation, indicating that glycogen could be used to support preimplantation embryos. Decidualization resulted in a dramatic increase in stromal glycogen levels, suggesting an important, but as yet undefined, role in pregnancy.
| INTRODUCTION
Pregnancy loss is quite common in humans, with most losses occurring very early in pregnancy (Annual Capri Workshop Group, 2020; Zinaman et al., 1996). Before implantation, embryos are dependent on nutrients secreted into the uterine lumen. Of these nutrients, glucose is one of the most important. Glucose uptake by embryos is low from fertilization until the eight-cell stage, and before compaction too much glucose is toxic. Around the morula stage, glucose uptake starts to increase, and it is dramatically higher by the blastocyst stage (Dan-Goor et al., 1998; Leese & Barton, 1984). In human embryos produced via in vitro fertilization or intracytoplasmic sperm injection, glucose consumption was higher in the embryos that resulted in a live birth (Gardner et al., 2011). Matching the increased glucose needs of the blastocyst, glucose concentrations are higher in uterine fluid than in oviductal fluid (Gardner et al., 1996; Hugentobler et al., 2010).
At the implantation site (IS), the stromal fibroblasts undergo a morphological and physiological transformation into decidual cells. Decidualization results in increased glucose flux through the pentose phosphate pathway, and blocking this pathway impairs the decidual response in mice and in human endometrial stromal cells (Tsai et al., 2013). After decidualization, glucose uptake increases due to a shift to Warburg metabolism (Zuo et al., 2015).
Hence, the glucose needs of both the embryo and uterus change in a spatiotemporal manner during early pregnancy.
The uterus lacks the enzymes to make glucose de novo; therefore, all glucose used by the endometrium or secreted into the uterine lumen must come from the maternal circulation (Yánez et al., 2003; Zimmer & Magnuson, 1990). The facilitative glucose transporters (GLUTs, gene family Slc2a) and sodium-glucose-linked transporter 1 (gene symbol Slc5a1) are both expressed in the endometrium (Zhang et al., 2021). Thus, the uterus may take up glucose from the maternal circulation as needed; however, the endometrium can also transiently store glucose as the macromolecule glycogen.
After glucose enters a cell, it is phosphorylated by hexokinase (HK) to produce glucose-6-phosphate, which can be metabolized by many different pathways. To be stored as glycogen, the glucose residue is isomerized to glucose-1-phosphate and then coupled to UTP, yielding UDP-glucose. From there, glycogen synthase (GYS) transfers the glucose to a pre-existing glycogen molecule. Glucose-1-phosphate is liberated from glycogen by glycogen phosphorylase (PYG) and isomerized back to glucose-6-phosphate, which is trapped in the cell. To be secreted, the glucose moiety must first be dephosphorylated by glucose-6-phosphatase (G6PC).
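To keep the two directions of this pathway straight, the sketch below lists the steps as simple (substrate, enzyme, product) tuples in Python. The intermediate enzymes (phosphoglucomutase and UDP-glucose pyrophosphorylase) are standard biochemistry rather than enzymes named in this paper, and the code is purely illustrative.

```python
# Minimal sketch of the glycogen synthesis/catabolism steps described above,
# written as (substrate, enzyme, product) tuples. An illustration only, not a
# metabolic model; intermediate enzyme names are our additions.
SYNTHESIS = [
    ("glucose",             "HK (hexokinase)",               "glucose-6-phosphate"),
    ("glucose-6-phosphate", "phosphoglucomutase",            "glucose-1-phosphate"),
    ("glucose-1-phosphate", "UDP-glucose pyrophosphorylase", "UDP-glucose"),
    ("UDP-glucose",         "GYS (glycogen synthase)",       "glycogen"),
]
CATABOLISM = [
    ("glycogen",            "PYG (glycogen phosphorylase)",  "glucose-1-phosphate"),
    ("glucose-1-phosphate", "phosphoglucomutase",            "glucose-6-phosphate"),
    ("glucose-6-phosphate", "G6PC (glucose-6-phosphatase)",  "glucose"),  # enables secretion
]

def trace(pathway, start):
    """Follow a pathway from a starting metabolite, printing each step."""
    metabolite = start
    for substrate, enzyme, product in pathway:
        assert substrate == metabolite, f"pathway break at {metabolite}"
        print(f"{substrate} --[{enzyme}]--> {product}")
        metabolite = product

trace(SYNTHESIS, "glucose")    # glucose -> ... -> glycogen
trace(CATABOLISM, "glycogen")  # glycogen -> ... -> secretable glucose
```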
In humans, endometrial glycogen concentrations peak during the luteal phase and are correlated with fertility (Maeyama et al., 1977).
In rats, uterine glycogen concentrations are high on Day 1 of pregnancy, decrease over the preimplantation period, and begin to increase again after implantation (Greenstreet & Fotherby, 1973). However, it is unclear which tissues store the glycogen or where the glycogen metabolizing enzymes are expressed.
Mice are important biomedical research models; yet uterine glycogen metabolism has never been characterized in this species.
Our objectives were to 1) characterize glycogen stores in the murine uterus from proestrus through implantation in the glandular epithelium, luminal epithelium, and stroma; 2) localize key glycogen metabolizing enzymes during the same period; and 3) determine if decidualization is sufficient to drive glycogen accumulation in the endometrial stroma independently of pregnancy.
| Endometrial glycogen levels during early pregnancy
Uteri were collected from mice at proestrus and at days postcoitum (DPC) 1.5, DPC 3.5, and DPC 5.5 and stained with periodic acid-Schiff (PAS), with or without diastase (PASD) pretreatment, to localize glycogen. PAS and PASD staining indicated the presence of glycogen in the epithelium at proestrus and in the decidua after implantation (Figure 1a). Quantification of the diastase-labile staining showed that in the glandular epithelium, glycogen content was highest at proestrus, decreased 71.4% at DPC 1.5 (p < 0.01), and decreased 62.13% at DPC 3.5 (p < 0.01). By DPC 5.5, the glycogen content in the glandular epithelium at the interimplantation site (IIS) had increased and was similar to proestrus (Figure 1b). Similar results were found in the luminal epithelium, where glycogen content was highest at proestrus, 46.2% lower at DPC 1.5 (p = 0.061), and 63.2% lower at DPC 3.5 (p < 0.05). At the DPC 5.5 IIS, the glycogen content of the luminal epithelium was 32% lower than at proestrus, but this difference was not significant (p = 0.37; Figure 1c).
In contrast, the stroma stored little glycogen during the preimplantation period. Glycogen content was low and did not change significantly from proestrus through DPC 3.5. At DPC 5.5, glycogen content was still low in the stroma at the IIS; however, the glycogen level increased sevenfold at the IS compared to the stroma of proestrus or the IIS (p < 0.0001; Figure 1d).
| Glycogen metabolizing enzymes during early pregnancy
The levels of glycogen metabolizing enzymes (HK1, GYS, phospho-GYS [pGYS], PYG, and G6PC) were examined by immunohistochemistry and western blot. Immunohistochemistry demonstrated that the glycogen synthesizing enzymes HK1 and GYS were highly expressed in the uterine epithelium. HK1 was localized to the glandular and luminal epithelium and was undetectable in the stroma; epithelial immunostaining was consistent from proestrus to the DPC 5.5 IIS (Figure 3, top). GYS was present in the luminal and glandular epithelium. GYS immunostaining was higher on DPC 1.5 and 3.5 compared to proestrus or the DPC 5.5 IIS, and some stromal GYS immunostaining was observed on DPC 3.5 (Figure 3, bottom).

Similar to the glycogen synthesizing enzymes, the glycogen catabolizing enzymes (PYG and G6PC) were found primarily in the glandular and luminal epithelium, and PYG immunostaining was higher after mating. At the DPC 5.5 IS, HK1 expression was undetectable in the decidualized stroma by immunohistochemistry, similar to the stroma at the IIS. In contrast, there was a dramatic increase in immunostaining for GYS in the decidualized stroma (Figure 5).

FIGURE 1 Glycogen levels in the murine endometrium during the first 6 days of pregnancy. (a) Representative images from the murine uterus collected at proestrus (PROE), days postcoitum (DPC) 1.5, DPC 3.5, and DPC 5.5. Sections were stained with periodic acid-Schiff (PAS, top); other slides were pretreated with diastase (PASD) to digest glycogen before PAS staining (bottom). (b-d) Glycogen content of the glandular epithelium (GE; b), luminal epithelium (LE; c), and stroma (S; d) as calculated with ImageJ. Glycogen content was determined by measuring the area occupied by each tissue and the PAS-positive area; the percent PAS-positive area in PASD slides was subtracted from that in PAS slides to account for nonspecific PAS staining. *p < 0.05; **p < 0.01; ****p < 0.0001 relative to PROE. n = 6. Scale bar = 50 µm. IIS, interimplantation site; IS, implantation site.

Western blots were used to further examine the glycogen metabolizing enzymes at the DPC 5.5 IIS and IS. HK1 levels tended to be lower at the IS than the IIS (p = 0.097; Figure 6a). There was no significant difference in pGYS levels between the DPC 5.5 IIS and IS (Figure 6b). In agreement with the immunohistochemistry data, GYS levels were 2.4-fold higher at the IS compared to the IIS (p < 0.05; Figure 6c). PYG levels were the same at the IISs and ISs (Figure 6d).
| Endometrial glycogen metabolism after artificial decidualization
Next, we induced decidualization artificially to determine if decidualization, by itself, increased glycogen storage. Mice were ovariectomized, primed with ovarian steroids, and the left uterine horn was stimulated to initiate the decidual reaction; the right uterine horn was left unstimulated and served as an internal control. The stimulated uterine horn appeared larger and weighed significantly more than the unstimulated horn, confirming successful decidualization (Figure 7a). Quantification of PAS and PASD staining showed that glycogen content was five times higher in the stimulated horn than in the unstimulated horn (Figure 7b; p < 0.05). Similar to the data from the DPC 5.5 IIS and IS, HK1 immunostaining was undetectable in the stroma of both the unstimulated and stimulated horns. GYS immunostaining was absent in the stroma of the unstimulated horn and was markedly increased in the stimulated horn (Figure 8, top). In addition, immunostaining for both PYG and G6PC appeared to increase slightly in the stroma of the stimulated horn compared to the unstimulated horn (Figure 8, bottom). Western blots revealed that HK1 tended to be lower in the stimulated horn relative to the unstimulated horn (p = 0.064). pGYS showed no significant difference between the unstimulated and stimulated horns (Figure 9b). The level of GYS was fivefold higher in the stimulated horn (Figure 9c).

FIGURE 3 Localization of glycogen synthesizing enzymes in the endometrium from proestrus (PROE) until days postcoitum (DPC) 5.5. Immunohistochemistry for hexokinase 1 (HK1) and glycogen synthase (GYS) in uteri collected at PROE, DPC 1.5, DPC 3.5, and DPC 5.5 IIS. n = 4. Scale bar = 50 µm. IIS, interimplantation site; Neg, negative control.
| DISCUSSION
The early embryo prefers pyruvate and lactate as energy substrates but has switched to glucose by the blastocyst stage (Gardner & Leese, 1990; Leese & Barton, 1984). Too much glucose during cleavage development is toxic to the embryo (Cagnone et al., 2012; Pantaleon et al., 2010). As a result, preimplantation embryos require optimal glucose concentrations to survive. Given the near-ubiquitous expression of GLUTs in the endometrium and their facilitated diffusion mechanism of action, GLUTs themselves are unlikely to adequately regulate glucose secretion into the uterine lumen.
In other species, endometrial glycogen content peaks during estrus and then declines during the luteal phase or pregnancy (Dean et al., 2014; Demers et al., 1972; Sandoval et al., 2021). This has led to the theory that glycogen acts as an energy reservoir for preimplantation embryos (Dean, 2019). In support of that, we show that the glycogen mobilized during the preimplantation period comes from the uterine epithelium, the cells that secrete histotroph. We also showed that the epithelium expresses G6PC, which is necessary for the secretion of glucose liberated from glycogen. G6PC has also been localized to the uterine epithelium of cyclic cows (Sandoval et al., 2021). Global knockout of G6PC leads to a 50% decrease in litter size in mice, suggesting that G6PC is important for pregnancy (Jun et al., 2012), though systemic effects of the knockout cannot be ruled out.

FIGURE 5 Immunostaining for glycogen metabolizing enzymes at the implantation site (IS) and the interimplantation site (IIS) on DPC 5.5. Immunohistochemistry for hexokinase 1 (HK1), glycogen synthase (GYS), glycogen phosphorylase (PYG), and glucose-6-phosphatase (G6PC) in uteri at DPC 5.5. n = 4. Scale bar = 50 µm. DPC, days postcoitum; Neg, negative control.
Before implantation, all four enzymes detected by immunohistochemistry (HK1, GYS, PYG, and G6PC) were localized primarily in the glandular and luminal epithelium, consistent with glycogen content changing significantly in the epithelium rather than the stroma. The expression of GYS, PYG, and G6PC in the uterine epithelium appeared to increase during the preimplantation period (DPC 1.5 and 3.5). These results agree with a study in mink that found uterine expression of Gys, Pyg, and G6pc messenger RNA increased after progesterone treatment with estradiol priming (Bowman & Rose, 2016). Western blots detected significant differences in GYS between the IS and IIS but no differences in HK1, PYG, or G6PC expression. However, western blots cannot differentiate between enzymes in the uterine epithelium, stroma, and myometrium. The trend toward lower HK1 levels in the decidua is probably due to high expression in the uterine epithelium, which contributes a smaller fraction of the endometrium after decidualization. The concurrent expression of glycogen synthesizing and catabolizing enzymes suggests that synthesis and catabolism occur simultaneously within the epithelium. This may facilitate continued transport of glucose from maternal blood to the uterine lumen even as glycogen levels decrease.
We also observed a substantial increase in glycogen in decidualized stromal cells at the IS and after artificial decidualization. In agreement, electron microscopy studies detected glycogen deposits in the decidual cells of hamsters (Blankenship et al., 1990). In humans, endometrial glycogen concentration peaks in the luteal phase, presumably due to glycogen in the decidua, though glycogen has also been observed in the epithelium during the luteal phase (Gordon, 1975; Jones et al., 2015). Endometrial glycogen deficiency during the luteal phase is correlated with infertility in humans (Maeyama et al., 1977). Collectively, these results suggest that glycogen synthesis may be an inherent feature of decidualization and might be critical for maintaining a successful pregnancy.
FIGURE 8 Localization of glycogen metabolizing enzymes in the decidualized and undecidualized uterine horns in an artificial decidualization model. Immunohistochemistry for hexokinase 1 (HK1), glycogen synthase (GYS), glycogen phosphorylase (PYG), and glucose-6-phosphatase (G6PC) in hormonally primed mice. One horn was stimulated to decidualize; the unstimulated horn served as a nondecidualized control. n = 4. Scale bar = 50 µm.

The increase in GYS expression at the IS and in the artificially decidualized endometrium agrees with the dramatic increase of glycogen in the same tissues. The purpose of this glycogen reserve is currently unclear. Decidualization is a glucose-intensive process, requiring glucose metabolism via the pentose phosphate pathway (Frolova, O'Neill, & Moley, 2011; Tsai et al., 2013). After decidualization, the decidua switches to Warburg metabolism, metabolizing a large amount of glucose via glycolysis (Zuo et al., 2015).
Yet we consistently found high levels of glycogen after decidualization. It is possible that glycogen in the decidua is used to regulate the supply of glucose to decidual cells themselves, preventing negative effects of hyperglycemia, or to supply glucose to the developing embryo (Favaro et al., 2013;Zuo et al., 2015). More work is needed to elucidate the role of glycogen in the decidua and to determine if it is required for a successful pregnancy.
In conclusion, we show that the glycogen content of the glandular and luminal epithelium decreased during early pregnancy.
| PAS staining
FIGURE 9 Levels of glycogen metabolizing enzymes in uterine horns stimulated to decidualize or left unstimulated. (a-d) Western blots for hexokinase 1 (HK1; a), phospho-glycogen synthase (pGYS; b), glycogen synthase (GYS; c), and glycogen phosphorylase (PYG; d) in uterine horns stimulated to decidualize with corn oil or left unstimulated after hormonal priming. **p < 0.01 relative to the unstimulated horn. n = 5.

Tissues were sectioned at 5 μm. Two slides were used for PAS and PASD staining separately. Slides were deparaffinized in xylene (Avantor; 8668-16) for 10 min and rehydrated through graded concentrations of ethanol. Slides for PAS staining were incubated in PBS, while slides for PASD staining were immersed in PBS with 0.5% diastase (Sigma-Aldrich; 09962) at 37°C for 60 min. Slides were then incubated in 0.5% periodic acid solution (Fisher Scientific; AC453171000) for 5 min at room temperature and washed three times with distilled water. Sections were immersed in Schiff's reagent (Sigma-Aldrich; 3952016) for 15 min at room temperature, followed by a 5 min wash in lukewarm running tap water. Slides were then counterstained with hematoxylin (Fisher Scientific) and dipped in ammonium hydroxide buffer (Thermo Fisher; A669S) for 20 s. Slides were dehydrated in an ethanol series, incubated in xylene overnight, and mounted with Permount mounting media (Fisher Scientific; SP15100). Images were captured using a Zeiss Axioskop with an Axiocam 305 color camera.
Images were analyzed with ImageJ (https://imagej.nih.gov/ij/) as previously described (Sandoval et al., 2021). Using hue, saturation, and brightness, a threshold was set to define PAS-positive pixels. The percent PAS-positive area in the PASD slide was then subtracted from that in the matched PAS slide to estimate diastase-labile (glycogen-specific) staining.
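As a rough illustration of this quantification, the following sketch (our illustration, not the authors' ImageJ macro) computes percent positive area from already-thresholded masks; the function and variable names are hypothetical, and real analyses would start from the HSB thresholding step described above.

```python
import numpy as np

def percent_positive(mask_positive: np.ndarray, mask_tissue: np.ndarray) -> float:
    """Percent of the tissue area whose pixels passed the PAS threshold."""
    tissue_px = mask_tissue.sum()
    return 100.0 * (mask_positive & mask_tissue).sum() / tissue_px if tissue_px else 0.0

def glycogen_content(pas_pos, pas_tissue, pasd_pos, pasd_tissue):
    """Diastase-labile PAS signal: percent positive area in the PAS slide
    minus percent positive area in the diastase-treated (PASD) slide."""
    return percent_positive(pas_pos, pas_tissue) - percent_positive(pasd_pos, pasd_tissue)

# Toy 2x2 masks: 3 of 4 tissue pixels PAS-positive (75%), 1 of 4 remain
# positive after diastase (25%) -> 50% diastase-labile (glycogen) staining.
pas_pos  = np.array([[1, 1], [1, 0]], dtype=bool)
pasd_pos = np.array([[1, 0], [0, 0]], dtype=bool)
tissue   = np.ones((2, 2), dtype=bool)
print(glycogen_content(pas_pos, tissue, pasd_pos, tissue))  # 50.0
```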
| Immunohistochemistry
Tissues were sectioned at 5 μm, mounted on slides, deparaffinized, and rehydrated. Slides were boiled in sodium citrate buffer (Fisher Scientific; S271-3) and then cooled to room temperature. The slides were then incubated in 3% hydrogen peroxide (Fisher Scientific; H325-500) for 15 min. Nonspecific binding was blocked with a solution containing 10% goat serum (Vector Laboratories; S-1000-20) and 5% bovine serum albumin (BSA; Fisher Scientific; BP9706100) in Tris-buffered saline (TBS) for 1 h at room temperature. After the serum block, previously validated primary antibodies (Table 1) were diluted in the blocking solution, added to tissue sections, and incubated at 4°C overnight (Sandoval et al., 2021). All incubations were performed in a hydrated chamber. The next day, slides were washed three times in TBS with Tween (TBS-T) and incubated with secondary antibody (Vector Laboratories; BA-5000-1.5) diluted in the blocking solution for 30 min at room temperature. Then, slides were washed three times and incubated with avidin-biotin complex reagent (Vector Laboratories; SP-2001) for 30 min at room temperature. After three washes in TBS-T, 3,3′-diaminobenzidine (Vector Laboratories; SK-4100) was applied.
Slides were counterstained with hematoxylin for 2 min. The tissues were then dehydrated, mounted, and imaged with a Zeiss Axioskop with an Axiocam 305 color camera. Negative controls were treated as described above, except that the primary antibody was replaced with an isotype control (anti-green fluorescent protein) antibody.
| Western blots
When comparing ISs and IISs, the uterus was removed and the ISs and IISs were separated. Each IS was then cut longitudinally, and the embryo was carefully separated from the uterus under a dissecting scope. In all cases, the uterine segments contained both myometrium and endometrium. Tissues were then snap-frozen until processing.
Uterine tissues were homogenized in radioimmunoprecipitation assay buffer supplemented with phosphatase and protease inhibitors.

TABLE 1 Primary antibodies and summary of conditions used for western blot (WB) and immunohistochemistry (IHC). Abbreviations: BSA, bovine serum albumin; GFP, green fluorescent protein; TBS, Tris-buffered saline; TBS-T, TBS with Tween.
Then, proteins were transferred onto polyvinylidene difluoride membranes and blocked for 1 h with either 5% BSA in TBS-T or 5% nonfat dry milk in TBS-T, depending on the primary antibody. The membranes were then incubated in primary antibody (Table 1).
| Statistical analysis
Statistical calculations were performed using GraphPad Prism version 8.3.1. Data collected during early pregnancy were analyzed by one-way analysis of variance followed by Dunnett's post hoc test. Western blots of the DPC 5.5 IIS and DPC 5.5 IS were analyzed by a paired t-test.
For the artificial decidualization experiments, glycogen content, uterine weight, and western blots were analyzed using a paired t-test. Results are presented as mean ± SEM, and differences were considered statistically significant when p < 0.05.
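As a rough illustration of how these tests map onto code, the sketch below runs a one-way ANOVA and a paired t-test with SciPy on invented numbers; the arrays, group sizes, and effect sizes are made up for the example and are not the study's data. (Dunnett's post hoc test is available as scipy.stats.dunnett in SciPy >= 1.11, or via statsmodels.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical glycogen measurements (percent positive area) per time point.
proestrus = rng.normal(20, 4, 6)
dpc15     = rng.normal(6, 2, 6)
dpc35     = rng.normal(8, 2, 6)

# One-way ANOVA across time points; Dunnett's test vs. proestrus would follow.
f_stat, p_anova = stats.f_oneway(proestrus, dpc15, dpc35)

# Paired t-test for within-animal comparisons (e.g., IS vs. IIS, or
# stimulated vs. unstimulated horn), pairing measurements from the same mouse.
is_levels  = rng.normal(35, 5, 5)
iis_levels = is_levels - rng.normal(20, 3, 5)
t_stat, p_paired = stats.ttest_rel(is_levels, iis_levels)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.3g}")
```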
added: 2022-07-18T06:20:38.804Z | created: 2022-07-17T00:00:00.000 | metadata:
{
"year": 2022,
"sha1": "5098f8ce5be846f9ac42d15332ff1fa822a74a18",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mrd.23634",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "ad1f49a16aaa1c113b72a90faa8e498dee5c226e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 191282732 | source: pes2o/s2orc | version: v3-fos-license
Wilderness, the West and the Myth of the Frontier in Sean Penn’s Into the Wild
This article investigates the representation of wilderness in Sean Penn's Into the Wild, specifically with regard to the myth of the American frontier. Using the myth of the frontier as a structure through which to read the film, we discover that the film proves William Cronon's thesis that the idea of wilderness as an anti-human place is merely a human construct.
Introduction
Sean Penn's Into the Wild was released in 2007 to widespread critical acclaim, with some describing it as "spellbinding" (Ebert) while others claimed that "it deserves to be one of the most talked about films of the season" (LaSalle). Based on the novel of Jon Krakauer, which in turn is based on a real life story, the film depicts the quest of college graduate Chris McCandless to escape from what he sees as the deceit-ridden fabric of modern consumerist capitalist society into the freedom and the vast open spaces of the wild.1 As the title of the film indicates, one of its major preoccupations is the representation of wilderness and, beyond this, the meaning of wilderness in modern culture and society. William Cronon, in "The Trouble with Wilderness" (1995), makes what was seen as, at the time, "a very controversial attack" on the overly romanticised view of Nature held in the West and particularly in America (Clark 234).2 Cronon, Professor of History, Geography and Environmental Studies and a specialist in American environmental history and the history of the American West, claims that "for many Americans, wilderness stands as the last remaining place where civilisation, that all too human disease, has not infected the Earth" (69). Cronon goes on, however, to highlight that this view of wilderness as an escape from human civilisation is highly ironic, given the fact that "far from being the one place on earth that stands apart from humanity, it is quite profoundly a human creation" (69). In summary, Cronon claims that, in modern society, wilderness has come to embody a space that is antihuman but, in fact, wilderness is such a human construct that true wilderness, in the sense of being completely other to the human, does not actually exist. Therefore, the contemporary Western relationship with wilderness cannot be perceived to be as "natural" as we might assume.
In this essay, I intend to argue that Chris McCandless, the protagonist of Into the Wild, views wilderness in exactly the way that is condemned by Cronon. He thinks of it as a completely nonhuman place where he will be able to leave the falsity of human civilisation behind and at last discover "truth" (although he is always very vague as to what this truth might be). Chris' highly romanticised and idealised view of nature becomes apparent very quickly in the film. At the basis of Chris' romantic perceptions of nature, we find the same two ideas that Cronon also claims are at the core of the wilderness' sudden transformation at the end of the nineteenth century from a place of barren wasteland to the desirable destination that it is today: the myth of the frontier and the idea of the sublime.
Although we can use both these ideas as structures through which to read the film, this essay will focus on the myth of the frontier because Into the Wild can be considered a type of Western, a genre to which the representation of the frontier is crucial. By using the myth of the frontier as a structure through which to read the film, I intend to prove that, ironically, Chris' romanticised view of nature has been formed by the very society which he himself rejects, and furthermore that he brings some of this civilisation into the wilderness with him, illustrating Cronon's thesis that wilderness as an anti-human place cannot exist.
The Significance of the Frontier in American History
In venturing into the wild, Chris is following in the footsteps of his ancestors who set out, hundreds of years ago, as pioneers to discover the unknown lands of the Wild West beyond the frontier. Frederick J.
Turner, in his seminal essay "The Significance of the Frontier in American History" (1893), was the first to posit the theory that, rather than the relationship to colonial European powers, it was the American frontier that had the most significant formative influence not only on American history, but also on American character and culture. He argued that "the existence of an area of free land, its continuous recession, and the advance of American settlement westward explain American development" (3). Turner ends by claiming that, even by 1893, the geographical place of the frontier "has gone and with its going has closed the first period of American history" (38). This end, however, is also a beginning: Turner lays the foundations for the myth of the frontier which will come to have a pervasive influence on American culture and society, as evidenced in literature and film by the rise of the Western genre.3 Indeed, many of what will come to be the key aspects of the frontier myth, dealt with in the Western genre, are already evident in Turner's essay. He defines the frontier as "the meeting point between savagery and civilisation" (2). Thus, as we witness, for example, in many Western films, it is a landscape of violence, "a place to express conquest and domination" (Schneekloth 210). Yet, paradoxically, seen through the eyes of the agrarian ideal, the frontier, as Turner emphasises throughout his essay, is also a place of "perennial rebirth" providing rejuvenation of both man and society (2), as well as a "new field of opportunity" for the rebirth of civilisation and society that will never come again (28). Believing to such an extent in the frontier's capacity for renewal, Turner even goes so far as to describe the frontier as a "magic fountain of youth" (qtd. in Nash Smith 5).4

The Frontier Myth in Into the Wild

This is precisely the aspect of the American frontier that is mythologised in Into the Wild. It is the potential for rebirth and renewal provided by the frontier, and what lies beyond, that really captures Chris' imagination. This is evoked in the film by Chris' rebirth as Alexander Supertramp, and we are constantly reminded of it by the naming of the "chapters" the film is divided into: these progress from "My Own Birth" through to the "Getting of Wisdom." Indeed, Penn deliberately associates this sense of rebirth and renewal with the myth of the frontier itself. In one of the opening scenes, when Chris is dropped off by the truck driver at the end of the Stampede Trail, the edge of the Alaskan wilderness, the camera is angled to give us a very high and wide bird's eye view of the desolate and seemingly virgin wilderness into which Chris is about to set off, so much so that at first we do not even notice the approach of the van in one small corner of the screen. It is completely overpowered by the vast and immense landscape surrounding it. Even when the dialogue begins, it is still the view of the landscape which dominates our attention; we do not even see the people who are talking. We are then presented with an overhead view of a tiny figure making fresh footprints in the otherwise untouched snow, signifying that this land into which he is about to venture is unknown and undiscovered. The edge of the Stampede Trail, the climax of Chris' journey of self-discovery, is thus represented as a frontier beyond which the lands remain to be discovered. This idea is also emphasised throughout the sequence of the opening credits, as we are presented with a montage of Chris journeying into the wilderness. The camera pans across the enormous, snow covered mountains, pausing to give us a shot of Chris' face for the first time as he struggles through the snow, with no other sign of humanity in sight except the knitted hat that Chris places on a stick, in a gesture similar to mountaineers planting a flag at a summit. The bright orange hat stands in great visual contrast to the snowy plains surrounding it and emphasises the lack of any other sign of civilisation by its stark and anomalous appearance.
With the appearance of Chris' words "I now walk into the wild" superimposed on the screen, the camera begins zooming out to show how vast, unforgiving and uninhabited this landscape appears. Chris is thus cast as a pioneer, about to embark on an adventure never before attempted. This cinematic technique is repeated at the start of the chapter noticeably entitled "My Own Birth." Just after we have been shown Chris throwing off the ties and constraints of modern civilisation by cutting up his identification cards and giving away his money in order to allow for his rebirth as Alexander Supertramp, we are again presented with a high overhead shot, this time of him in his car driving into the desert, representing what he sees as his escape from society. Chris then simultaneously vocalises this escape and explicitly invokes the myth of the frontier by saying:

It should not be denied that being footloose has always exhilarated us. It is associated in our minds with escape from history and oppression and law and irksome obligations. The absolute freedom of the road has always led west. (Into the Wild)

Thus, with this statement, Chris not only inescapably references the frontier myth but makes it clear that his mythologisation of it specifically relates to the West as a place of liberation and self-discovery. This, however, also presents us with a problem. Writing almost a century after Turner, Richard Slotkin highlights the significance of the role of violence in the frontier myth to a far greater extent and brings the two opposing aspects of the myth together by claiming that the regeneration Turner speaks of was in fact achieved "through violence" (12, original emphasis). Hence we realise that by only viewing the frontier and the wilderness that lies beyond it as a place of escape from society, and failing to acknowledge the violence that Slotkin claims must go hand in hand with this, Chris holds a romanticised and unrealistic point of view about what he expects to discover beyond the frontier.
We see this unrealistic perspective, for example, in his ecstatic statement to his friend Wayne: "I'm going to Alaska, I'm gonna be all the way out there on my own -no fucking watch, no map, no axe, no nothing, just be out there in it, big mountains, river, sky, in the wild -in the wild!" (Into the Wild). Noticeably, throughout this conversation, Chris and Wayne are never shown in the same shot. The camera mostly focuses on Chris' face, alight with passion and excitement, but occasionally switches to Wayne, whose expression is one of scepticism, mirroring the fact that one of the biggest criticisms an audience can make of Chris is that he was severely underprepared for his journey and can therefore be seen as having a suicide wish. Ironically, had he brought a map with him, it might have saved his life. However, Chris clearly believes that if he were to bring such tools of civilisation with him, he would not be able to experience wilderness as an anti-human place.
Indeed, in the very next sentence Chris goes on to say that when he is in the wilderness he will be "just living, just there in that moment, in that special place in time" (Into the Wild). What is significant here is Chris' mention of "that special place in time", because this is how wilderness, thanks to the frontier myth, is often falsely viewed. It is seen as being somewhere where all the past of human existence is miraculously erased, and it thus becomes, to use the words of Cronon, "a flight from history" (79). William Talbot, in his 1969 essay "American Visions of Wilderness", claims that in nineteenth-century America it came to be believed that "the past of the wilderness stretched back to creation itself, untouched by human civilization. This virgin nature was as close as man could ever hope to get to the primal state of the world" (152). This advocacy of primitivism, the belief that the ills of our modern society can only be escaped by returning to a more simplistic way of living, is an opinion endorsed by Chris. However, as Cronon is quick to point out, this is not, and never has been, true of the lands beyond the frontier, which were inhabited by the Native Americans before they were forced out in order to maintain the myth of the virgin wilderness (79). In fact, this irony only highlights how constructed the idea of wilderness has become. Furthermore, as we have already noted, in the opening scenes of the film the frontier for Chris is shown to be the extreme North of America and not, in fact, the West. This is precisely because, as Turner highlights, the frontier of the West no longer exists: now it is only in Alaska that virgin wilderness can be found, and this is why Alaska holds such an allure for Chris. The properties of the mythic West have been displaced to the North. Therefore, Chris is shown not only to be ignoring the violent aspect of the frontier myth but also to be buying into this myth of virgin wilderness, which is irrevocably intertwined with the renewal qualities of the myth of the frontier.
Into the Wild has been recognised as including many elements of a more typical Western film, one of these key elements being a setting at the frontier and a representation of the frontier myth.
Cronlund Anderson, in Cowboy Imperialism, argues that, in fact, without a representation of the frontier myth a Western film is not a Western (16). As such, the Western film itself, as a genre, is structured by the same series of oppositions which the frontier myth encompasses: the individual versus the community, nature versus civilisation and the West versus the East.5 In Into the Wild we can see these binaries present not only in Chris' own beliefs but also in the way the film is structured.
As Chris journeys North, Penn makes frequent use of flashbacks in order to contrast his past with his present. For example, as we see Chris renaming himself Alexander Supertramp and subsequently walking off towards the horizon, symbolising the beginning of his adventure, this image is pushed to the left hand side of the screen in order to allow for the simultaneous viewing of the lives of his parents, whom we see worrying about the whereabouts of their son. This splitting of the screen is a visual representation of the fact that Chris' past is being contrasted with his present, his new found individualism with the community he has left behind, and civilisation with the wilderness he is now living in. The voiceovers by Chris' sister Carine also have this function: she acts as a voice from the past, but a past which Chris has consciously severed himself from and is now placing himself in opposition to. Thus, through its very structure, the film aligns itself with the Western film tradition, specifically in regard to the dichotomies raised by the myth of the frontier.
We could see Chris himself, however, as a kind of Western anti-hero. The typical hero of a Western film is the cowboy. Chris shares some traits with the stereotypical cowboy, represented by a picture of Clint Eastwood on Chris' wardrobe door, such as courage and skill. But he is also very different, mainly because, although cowboys often acted alone like Chris, their actions were usually seen to be for the good of a community, as they sought to bring civilisation into the wilderness.6 Chris, by contrast, not only acts completely for himself but deliberately seeks the primitive state of living that the wilderness can provide precisely because it is in complete opposition to civilisation, thus leading us to view him as an anti-hero.
The question, then, is raised as to whether Chris manages to avoid tainting the wilderness he enters with traces of civilisation. We must remember that the very idea of the frontier itself is a human creation; as Schneekloth tells us, "it was invented, not discovered" (210). Therefore, we could see Chris' conception of wilderness as a piece of civilisation he brings with him into the Alaskan wilderness.
The myth of the frontier as a place of rejuvenation is central to his conception of wilderness, an idea which comes from the very society he seeks to leave behind. This view of wilderness as a piece of civilisation comes to be visually and concretely represented in the symbol of the "magic bus," the abandoned camper van where he lives while in Alaska. Chris calls the bus "magic" precisely because it contains enough of the vestiges of civilisation to fulfil his basic needs and allow him to live out his fantasy of complete self-sufficiency in the Alaskan wilderness, but not so much that he feels he is no longer a pioneer.
Another concrete piece of civilisation that Chris brings with him into the wilderness is his collection of books. Jonah Raskin criticises the film for "not being able to free itself from the written word" (4), but this is precisely because Chris himself is never freed from the written word. Throughout the film, we constantly see Chris reading the works of Thoreau, London, Tolstoy and Pasternak, from all of whom he learns important ideals which he applies to his own life, whether he is sitting alone in the magic bus or perched upon a rock overlooking the sea.7 We are shown early on in the film Chris' capacity to take what he reads and apply it to his own life. When he and Carine are arriving at Chris' graduation dinner, Chris reads out some poetry to Carine, who asks him "Who wrote that?", to which Chris replies "Well, could have been either one of us, couldn't it?" (Into the Wild). Thus, we see that, for Chris, there does not necessarily have to be a clear distinction between fiction and real life.
Although books and reading abound in the film, the works of one writer in particular are important: those of Henry David Thoreau. It is from Thoreau that Chris borrows the idea that then forms his central philosophy on life: "rather than love or money or fame or fairness, give me truth" (Into the Wild). Thoreau's Walden, or Life in the Woods is the only work of environmental literary non-fiction to be considered part of the Western literary canon (Clark 27). Throughout Walden, but particularly in the section entitled "Solitude", we hear Thoreau express many sentiments that, as evidenced by references to the book throughout the film, Chris seems to have absorbed in his reading of it. For example, Thoreau speaks of the delight of having "a little world all to myself" and constantly expounds upon the innocence and benevolence of nature (98). When speaking of the one moment in his experience when he felt himself to be a little lonely, he goes on to say:

In the midst of a gentle rain while these thoughts prevailed, I was suddenly sensible of such sweet and beneficent society in Nature, in the very pattering of the drops, and in every sound and sight around my house, an infinite and unaccountable friendliness all at once like an atmosphere sustaining me, as made the fancied advantages of human neighborhood insignificant, and I have never thought of them since. (99)

Chris too believes that the society he will find in nature will be far superior and less corrupt than the human society he has chosen to leave behind. But most importantly of all, in the society of nature he will find truth where there is none to be found in civilisation.
When Chris first arrives at the magic bus, he tells us that now comes, "after two years, the final and greatest adventure -the climactic battle to kill the false being within victoriously concludes the spiritual revolution" (Into the Wild). Chris believes that this "false being within" has been created by civilisation and that only in the Alaskan wilderness is it possible to destroy it, because isolated surroundings must first be found where nothing can contaminate the inner spirit. While Chris is speaking, the image of him carving these words into the wood is interspersed with images of him chasing deer in the snow and watching them with tears welling in his eyes. Finally, he thinks that he has returned to "the truth of his existence" and has escaped from everything that kept this truth from him, as his sister tells us once Chris has graduated:

Now he was emancipated from that world of abstraction, false security, parents, materiality, the things that cut Chris off from the truth of his existence. (Into the Wild)
Thus the frontier between the Alaskan wilderness and the rest of the world becomes for Chris a boundary, a kind of dividing line, between truth and falsehood, and reality and abstraction. He creates a dichotomy in his mind where everything good is ascribed to nature and everything else to humanity, and the two are clearly distinct from one another. It is therefore more than a little ironic that Chris believes the "truth of his existence" is to be found in the wilderness, which he thinks has a radical alterity to humanity, when this very idea of what this truth is has been formed by words and books coming from the civilisation he so despises.
Significantly, not only is Chris constantly reading but he also writes a journal and tells his friend Wayne "maybe when I get back, I can write a book about my travels" (Into the Wild).
Furthermore, when Chris speaks of his "spiritual revolution" he is writing the words at the same time, and we witness him creating a narrative for himself as he carves painstakingly into the wood, emphasising the act of creation, while simultaneously reading aloud "no longer to be poisoned by civilisation, he flees and walks alone upon the land to become lost in the wild" (Into the Wild). The way the camera jerkily moves across the words as Chris carves them mirrors the act of reading, jumping from one word to another. This is a highly conscious deployment by Chris of the same types of narratives he has read by Jack London, for example, and suggests that in the narrative of his own story he is consciously representing himself as a heroic cowboy figure. This is emphasised by the fact that Chris chooses to write in the third person: by so doing he is distancing himself from his own story and writing about himself as though he were a character in one of his favourite novels.
In Ecology Without Nature, Timothy Morton says, in reference to Thoreau, that "a white male nature writer in the wilderness may be 'going native' to some extent, but he is also usefully distancing this wilderness, even from himself, even in his own act of narration" (126). We might modify Morton's words slightly by saying that by the very act of narration the wilderness is distanced, because it starts to become something that is being written about and that has happened in the past, rather than something being experienced in the moment. Moreover, it becomes subjected to the laws of storytelling and the control of the narrator. Therefore, just as the frontier became a myth once it began to be written about, encoded with certain signs and symbols, the same thing happens to wilderness. By turning his wilderness experience into a narrative, Chris seems ultimately to be proving Cronon's point that the wilderness can never truly be an anti-human place, because the wilderness is not allowed merely to exist, or even to be enjoyed by humanity, but is subjected to the narratorial authority of those who inhabit it.
In addition, Chris tells Wayne that, if he writes a book, it will specifically be about "getting out of this sick society" (Into the Wild). This is ironic because to write a book for others to read is to take part in the consumerist nature of capitalist society, and by so doing, Chris would be allowing the account of his experiences to forever become trapped within, and perhaps be exploited by, the very society he escaped from. Although not written by Chris himself,8 a book was indeed written about the experiences the real Christopher McCandless had in escaping from this "sick society". On a metatextual level, we are made aware throughout the film that what we are watching is indeed a story that has already been told and is subject to the narratorial whims of the director. Several times, for example, Emile Hirsch, as Chris McCandless, looks straight into the camera with an extra-diegetic gaze that is aware of the audience, reminding us not only that we are watching a man-made film but that it has been constructed especially for our enjoyment and entertainment. These moments are also "intended to invoke the movie's final shot", a picture of the real Chris McCandless sitting outside the magic bus, smiling happily (LaSalle). This closing image reminds the audience just how distanced we are, first by Krakauer's book, then by the film, from the "real" story as it actually happened, and just how much narrative control it has been subjected to in the meantime. Therefore, not only are we shown that the wilderness itself is merely a construct but also that any attempt to relate a story about it inevitably involves enmeshing it even further in various human constructs, of which the myth of the frontier is only one.
Notes
1. Throughout this essay, when I refer to Into the Wild, I am always referring to Penn's film unless otherwise stated. It is also worth noting that when I refer to Chris McCandless, I am always referring to the character portrayed in the film and not the character of Krakauer's novel nor the real life person. This essay focuses on the film as opposed to the book because it is through Penn's film that the story of Chris McCandless has become well known. Furthermore, Penn makes more of a conscious effort than Krakauer to highlight the film's links to the Western genre, and thus the myth of the frontier takes on more importance in the film.
2. Despite Cronon's assertion that his claim was "heretical", it was both later and previously supported by many others. See, for example, J. Callicott, Kate Soper and Carolyn Merchant (although Merchant differs from Cronon in that she believes it is the myth of the Garden of Eden that lies at the basis of our modern perception of wilderness rather than the myth of the frontier).
3. Much has been written about the influence, and presence, of the frontier myth in American literature and culture. For the impact of the frontier myth on literature, see, for example, Bakker and McVeigh. For the influence of the frontier on American culture and politics, see Slotkin.
4. Nash Smith is quoting a speech delivered by Turner in 1896, of which I have been unable to find the original version.
5. For the full, more detailed table of oppositions, see Kitses (12).
6. For more about the civilising of the lands of the frontier brought about by cowboys, see Calder (5).
7. See Raskin for a discussion of the film's intertextuality in relation to the works of Jack London. Issues of intertextuality are fairly prominent in the film and would be interesting to investigate further; however, they fall outwith the scope of this essay.
8. Krakauer does, however, make extensive use of Chris' journals in his book and has done extensive research on the journey that he took, even taking the same journey himself.
added: 2018-12-29T18:23:24.367Z | created: 2013-06-06T00:00:00.000 | metadata:
{
"year": 2013,
"sha1": "abee0fa5ca14cab70f9c23ae02ad7f912b0ece79",
"oa_license": "CCBY",
"oa_url": "http://journals.ed.ac.uk/forum/article/download/521/809",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "abee0fa5ca14cab70f9c23ae02ad7f912b0ece79",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Art"
]
}
id: 237514093 | source: pes2o/s2orc | version: v3-fos-license
Slow update of internal representations impedes synchronization in autism
Autism is a neurodevelopmental disorder characterized by impaired social skills, motor and perceptual atypicalities. These difficulties were explained within the Bayesian framework as either reflecting oversensitivity to prediction errors or – just the opposite – slow updating of such errors. To test these opposing theories, we administer paced finger-tapping, a synchronization task that requires use of recent sensory information for fast error-correction. We use computational modelling to disentangle the contributions of error-correction from that of noise in keeping temporal intervals, and in executing motor responses. To assess the specificity of tapping characteristics to autism, we compare performance to both neurotypical individuals and individuals with dyslexia. Only the autism group shows poor sensorimotor synchronization. Trial-by-trial modelling reveals typical noise levels in interval representations and motor responses. However, rate of error correction is reduced in autism, impeding synchronization ability. These results provide evidence for slow updating of internal representations in autism.
The core difficulty in social interactions of individuals with ASD has traditionally been attributed to a lack of social interest and motivation1, but this view has been recently challenged2. Recent studies revealed that atypical perceptual and motor processing are consistent characteristics of autistic experience3. Individuals with ASD show particular difficulties when sensorimotor integration is required4,5, and their magnitude is correlated with symptom severity6. The manifestation of various sensory and sensorimotor atypicalities suggests that crossmodal accounts may be required to explain this complex phenotype within a unified framework. Accordingly, several recent studies have attempted to explain autism within the cross-modal Bayesian framework. This framework attributes difficulties to an abnormal estimation of the environment's statistics, which leads to impaired integration of past experiences for regulating ongoing behavior7-11. Yet, the nature of this abnormality has been disputed.
A dominant account suggests that individuals with autism overestimate the rate of changes in the statistics of the external environment10,12, leading to an overestimation of the reliability of recent events compared with earlier ones. Consequently, recent events are overly represented in the formation of perceptual estimations and motor plans (the "increased volatility" hypothesis). An opposing account (the "slow updating" hypothesis) proposes that individuals with autism are able to estimate environmental statistics correctly, yet the rate at which internal priors are updated is slower than neurotypical. This account was proposed by Lieder et al.11, who used computational modeling of two-tone frequency discrimination to show that participants' responses are biased by the tones in previous trials. Yet, the relative weight of recent and long-term contributions differs between individuals with autism and neurotypical individuals. Early trials influenced perceptual judgments similarly in both groups, but the influence of recent trials was reduced in the autism group. Therefore, while the statistics of earlier events are integrated well into predictions and actions, this accumulation takes longer, and recent events are underweighted. Both theories have clear predictions for broad contexts, yet in many cases these predictions are opposed. In particular, when fast online updates are needed for adequate task performance, the "increased volatility" hypothesis predicts better performance in autism, whereas the "slow updating" account predicts impaired performance. Synchronization tasks require a fast update of internal representations and motor responses based on external cues and therefore provide an experimental platform for comparing these opposing predictions.
Synchronization ability was reported to be impaired in autism, in both social and nonsocial contexts13-15. Studies with neurotypical populations found that synchronization is functionally related to theory of mind16,17 and to social behavior18. The rationale proposed for these observations is that synchronized actions promote a predictive mechanism trained to anticipate others' actions and intentions19,20.
Paced finger tapping is a synchronization task in which participants are asked to align their taps to the beat of an external metronome. Perfect synchrony means perfect alignment between the participant's taps and the external metronome. Human performance is limited in two respects. First, participants tap with a small negative asynchrony, which is perceived as synchronous (Fig. 1a). The (mean) magnitude of this asynchrony is influenced by many factors, peripheral and central, including the type of movement, the type of feedback, and the characteristics of the metronome sound21-25. Since the relative contribution of the peripheral and central sources is not known, we had no prediction for group differences regarding mean asynchrony. The second limitation on synchrony is variability around this mean. Though tapping variability is also affected by both central (such as intelligence26,27) and peripheral factors, the contribution of peripheral factors, such as motor noise, is considerably smaller28. Importantly, the components underlying variability have been systematically modeled.
When the metronome tempo is constant, models of paced finger tapping assume that keeping the variability small is challenged by two sources of noise: noise in motor responses and noise in the internal representation of the metronome tempo (timekeeping). Both can be corrected online by using the asynchrony error signal (the perceived interval between the metronome beat and the tap). If errors are not corrected quickly and are carried across metronome beats, they accumulate, increasing the variability around the mean asynchrony and leading to poor synchronization28-32. Changing environments introduce another difficulty: identifying when, and to what extent, the metronome tempo changes and quickly correcting for it. This is done by modifying the internal representation of the external tempo, while concurrently correcting for the stationary noise sources mentioned above. The "slow updating" hypothesis predicts that the rate of error correction will be reduced in autism, while motor noise and timekeeping noise will be similar to those of neurotypical individuals. In contrast, the "increased volatility" hypothesis predicts increased (over-)correction, leading to either superior alignment or overshooting the amount of correction required.

FIGURE 1 Isochronous finger tapping: mean asynchrony is similar in the three groups, but variability around this mean is substantially larger in autism (ASD) than in the neurotypical (CON, control) and dyslexia (DYS) groups. (a) A schematic illustration of the temporal structure of paced tapping: metronome stimuli (presented every 500 ms, black squares) and finger-tap responses (blue circles) as a function of time; e_k, the error (asynchrony, typically negative) of tap k; r_k, the inter-tap interval; d_k, the delay from the previous metronome stimulus (beat k-1) to the following finger tap (tap k). Note that r_k = d_k - e_{k-1}. (b, c) Basic tapping parameters: (b) mean asynchrony is negative for all three groups (p < 0.001) and similar across the three populations, though more broadly distributed in the ASD group; (c) the standard deviation is larger in the ASD group than in the two other groups. Each dot represents the performance of one participant (average of two blocks); the y-axis represents the score in ms, and the x-axis and color represent group membership (with a small jitter for readability): blue circles, neurotypical; red triangles, dyslexia; green squares, ASD. The median of each group is denoted by a line of the same color; error bars around the median denote the interquartile range. The Kruskal-Wallis H-statistic and corresponding p values are plotted in the bottom-left corner; p values of between-group comparisons are plotted next to the lines connecting the groups' medians. N = 109 subjects (N_CON = 47, N_DYS = 32, N_ASD = 30). Source data are provided as a Source Data file. Though there are a few outlier results in both mean asynchrony and standard deviation among participants with ASD, these are not the same individuals: scores on the two measures were not correlated in the ASD and dyslexia groups (Spearman correlations: ρ_ASD = -0.2, p = 0.3; ρ_DYS = -0.24, p = 0.18). A significant correlation was found only in the neurotypical group (ρ_CON = -0.37, p = 0.01, uncorrected). Statistical tests are two-sided unless stated otherwise.
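To make the error-correction logic concrete, below is a minimal simulation of a linear phase-correction model of paced tapping, in the spirit of the models cited above rather than the authors' exact implementation; the correction gain alpha and the noise magnitudes are illustrative assumptions.

```python
import numpy as np

def simulate_tapping(n_taps=500, alpha=0.4, timekeeper_sd=10.0,
                     motor_sd=5.0, seed=0):
    """Simulate asynchronies e_k (ms) under a linear phase-correction model:
    e_k = (1 - alpha) * e_{k-1} + T_{k-1} + M_k - M_{k-1},
    where alpha is the error-correction gain, T is zero-mean timekeeper noise,
    and M is motor-delay noise (a sketch, not the paper's fitted model)."""
    rng = np.random.default_rng(seed)
    T = rng.normal(0.0, timekeeper_sd, n_taps)  # timekeeper noise per interval
    M = rng.normal(0.0, motor_sd, n_taps)       # motor delay per tap
    e = np.zeros(n_taps)
    for k in range(1, n_taps):
        # A fraction alpha of the previous error is corrected; the remainder
        # persists and accumulates with the new noise terms.
        e[k] = (1.0 - alpha) * e[k - 1] + T[k - 1] + M[k] - M[k - 1]
    return e

# Lower alpha (slower updating) -> errors persist -> larger asynchrony SD and
# a higher correlation between consecutive errors, the ASD pattern reported here.
for alpha in (0.8, 0.4, 0.1):
    e = simulate_tapping(alpha=alpha)
    lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]
    print(f"alpha={alpha:.1f}  SD={e.std():5.1f} ms  lag-1 r={lag1:.2f}")
```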
To test the specificity of tapping atypicalities to autism, we also recruited a group of participants with dyslexia, matched for age and cognitive reasoning. Dyslexia, a common neurodevelopmental disorder, is characterized by poor reading and spelling 33. Similar to individuals with ASD 34, individuals with dyslexia show high co-occurrence with ADHD 35 and atypical perceptual characteristics 36,37. However, unlike ASD, dyslexia is not diagnosed on the basis of social difficulties.
In this work, we administer two tapping protocols, one using a fixed metronome tempo (Experiment 1), and the other using a tempo-switch protocol (Experiment 2). Together, the two experiments allow us to quantify the dynamics of error correction in both stationary and changing environments. For both experiments, we use computational modeling to quantify the rate at which internal representations are updated, and dissociate its contribution to task performance from that of internal noise sources. Only the autism group shows impaired synchronization, owing to reduced use of recent sensory information for error correction. Noise levels in both interval representation and motor execution are intact. These results support the "slow updating" account of autism.
Results
Experiment 1: isochronous tapping reveals reduced online error correction in ASD, but not in dyslexia. As a main measure of performance, we used asynchrony (the difference between metronome stimulus and participant responses). We measured the mean and standard deviation (SD) of the asynchrony in a paced finger-tapping task with a fixed 2 Hz auditory metronome beat (illustrated in Fig. 1a; test-retest correlations of the main tapping parameters are ~0.8; Supplementary Note 1 and Supplementary Fig. 1).
Mean asynchrony was negative in all three groups and similar across them, though more broadly distributed in the ASD group (Fig. 1b). By contrast, we found significant differences in the variability (denoted by the SD) of the groups around their mean asynchrony (average over two repetitions, median [interquartile range] (ms): neurotypical (CON): 30.6 [8.9], dyslexia (DYS): 30.2 [15.6], autism (ASD): 41.4 [27.1]; Kruskal-Wallis test H(2) = 9.74, p = 0.008; see Fig. 1c). The significant group difference was due to the large variability of individuals with ASD (post hoc analysis of the ASD group vs. the neurotypical or dyslexia group using the two-sided Tukey-Kramer method (used throughout the paper): p < 0.022, Cliff's delta > 0.38 in both cases), while there was no difference between the dyslexia group and the neurotypical group (p > 0.95). Although there were individuals with autism whose SD was in the range of the neurotypical population, the SD of a third of the group was more than two standard deviations (of the neurotypical distribution) above the neurotypical mean, compared with only one individual with dyslexia whose variability was in this range. This pattern of results was replicated in Experiment 2 (Supplementary Figs. 2 and 3).
Reduced online error correction underlies poor synchronization in ASD. Phase correction is the process of using the perceived error (the deviation of the current tap from the mean asynchrony) to adjust the timing of the next tap to be closer to the participant's mean asynchrony (which is perceived as synchronous with the metronome beat). To test the efficiency of online phase correction we calculated the correlation between consecutive asynchronies (errors). Any positive correlation means that errors tend to persist across beats, and a correlation of one means that errors are fully retained across consecutive beats. A correlation of zero means that errors were not carried across taps, and negative correlations mean overcorrection. All three groups showed a positive correlation (Fig. 2a-c, r_CON = 0.60, r_DYS = 0.59, r_ASD = 0.75), indicating that participants partially carry errors across consecutive beats. Calculating single-participant correlations (Fig. 2d; Kruskal-Wallis test H(2) = 8.86, p = 0.012), we found the largest correlation in the autism group, indicating that they retain uncorrected errors longer than the other two groups. The difference between the groups was significant, and post hoc comparisons showed that this is the result of a significant difference between the ASD group and both the neurotypical (p = 0.033, Cliff's delta = 0.35) and the dyslexia groups (p = 0.017, Cliff's delta = 0.39). The source of reduced error correction between consecutive taps in ASD could be slow perceptual updating, leading to a smaller perceived error, or slow updating of motor plans. Our analysis cannot dissociate between these alternatives.

Fig. 2 Correlation between consecutive asynchronies (errors) is highest in the ASD group, revealing reduced online error correction. a-c Scatter plots showing correlations between consecutive asynchronies: a neurotypical (CON, control), b dyslexia (DYS), and c ASD. Individual asynchronies were plotted with respect to each participant's mean asynchrony, yielding a mean of 0 ms. Consecutive asynchronies are positively correlated in all groups. This positive correlation is largest in the ASD group, reflecting reduced online error correction. Luminance scale is equal in (a-c): white, the maximum number of asynchronies in a bin, is 165 in all graphs. d Single-participant correlations also show the impairment in error correction for the ASD group compared with the neurotypical and dyslexia groups. The median of each group is denoted as a line of the same color; error bars around this median denote an interquartile range. Kruskal-Wallis H-statistic and the corresponding p value are plotted in the bottom-left corner; p values of comparisons between groups are plotted next to the line connecting the groups' medians. N = 109 subjects (N_CON = 47, N_DYS = 32, N_ASD = 30). Source data are provided as a Source Data file.
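The lag-1 error correlation reported above is simple to reproduce. Below is a minimal Python/numpy sketch (the paper's analyses were run in Matlab); the toy series and its carry-over coefficient are illustrative assumptions, not data from the study.

```python
import numpy as np

def lag1_error_correlation(asynchronies):
    """Correlation between consecutive perceived asynchronies in one block.

    Asynchronies are demeaned so each error is expressed relative to the
    participant's own mean asynchrony; NaNs mark omitted taps.
    """
    e = np.asarray(asynchronies, dtype=float)
    e = e - np.nanmean(e)                        # perceived error, relative to mean asynchrony
    pairs = np.column_stack([e[:-1], e[1:]])     # (e_{k-1}, e_k) pairs
    pairs = pairs[~np.isnan(pairs).any(axis=1)]  # drop pairs touching an omitted tap
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

# Toy usage: an error series in which 70% of each error is carried over
rng = np.random.default_rng(0)
e = [0.0]
for _ in range(500):
    e.append(0.7 * e[-1] + rng.normal(0, 20))
print(round(lag1_error_correlation(e), 2))       # close to 0.7, i.e., weak correction
```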
To understand the dynamics of phase correction we used an autoregressive model to predict the current asynchrony. We considered linear dependencies not only on the previous asynchrony but on several previous asynchronies. We used stepwise regression to determine the number of previous asynchronies to include in the model. We ran the models both at the group level (using separate regressors for each participant but a group-level criterion when adding predictors) and at the single-participant level. The final model included three predictors for all three groups and one to three predictors for 103/109 participants (Supplementary Fig. 4a). That is, it was sufficient to use asynchronies up to three taps back to predict the current asynchrony, and no additional information was gained by adding more asynchronies as predictors. There was no difference between the groups with regard to the number of predictors in the final model (χ2(2, N = 109) = 8.22, p > 0.4). Together, this suggests that phase correction relies only on the most recent information (<2 s). In accordance with the results of Fig. 2, we found a significant difference between the groups in the contribution of the most recent asynchrony to the current asynchrony (Kruskal-Wallis test H(2) = 6.16, p = 0.046; Supplementary Fig. 4b), indicating that the ASD group corrected less of the most recent error and carried a larger fraction to the next tap (for more details see Supplementary Note 2 and Supplementary Fig. 4).
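A single-participant version of this stepwise procedure can be sketched as follows in Python (the study used Matlab and also ran a group-level variant with separate regressors per participant); the entry criterion p < 0.1 is taken from the Methods, while the simulated series and function name are illustrative.

```python
import numpy as np
from scipy import stats

def stepwise_ar_order(e, max_lags=10, p_enter=0.1):
    """Add lagged asynchronies one at a time while the F-test on the
    drop in SSE stays below p_enter; return the selected AR order."""
    e = np.asarray(e, float)
    e = e - e.mean()                                   # perceived asynchronies, no intercept
    n = len(e) - max_lags
    y = e[max_lags:]
    X = np.column_stack([e[max_lags - j:len(e) - j] for j in range(1, max_lags + 1)])
    sse_prev, order = np.sum(y ** 2), 0                # start from the empty model
    for j in range(1, max_lags + 1):
        beta, *_ = np.linalg.lstsq(X[:, :j], y, rcond=None)
        sse = np.sum((y - X[:, :j] @ beta) ** 2)
        F = (sse_prev - sse) / (sse / (n - j))         # one numerator df per added lag
        if 1 - stats.f.cdf(F, 1, n - j) >= p_enter:
            break
        order, sse_prev = j, sse
    return order

rng = np.random.default_rng(1)
e = np.zeros(600)
for k in range(2, 600):                                # AR(2) ground truth
    e[k] = 0.5 * e[k - 1] + 0.2 * e[k - 2] + rng.normal(0, 15)
print(stepwise_ar_order(e))                            # typically 2 at this noise level
```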
Modeling isochronous tapping reveals that the rate of error (phase) correction is slow in ASD. Impaired phase correction does not rule out that individuals with autism also have noisier representations of the metronome tempo (timekeeper noise), or "sloppier" production of motor commands (motor noise). To address this possibility, we used a well-established computational model of sensorimotor synchronization 28,31,32. This model assumes that each tapping interval is the summation of three components: timekeeping of the external tempo 29,38, the time required for motor execution (both incorporating Gaussian noise), and a fraction of the perceived error (asynchrony) corrected from the previous tap (relative to the mean asynchrony, which participants perceive as synchronous with the metronome). Formally, the model can be written as follows (see Fig. 3a):

$$r_k = T_k - \alpha e_{k-1} + M_k - M_{k-1}$$

where r_k is the inter-tap interval of the participant between metronome beats k and k-1, T_k is the participant's current representation of the metronome tempo, M_k is the time of the motor response at tap k (both including noise, referred to as timekeeper noise and motor noise, respectively), e_{k−1} is the asynchrony at beat k-1, and α denotes the proportion of correction of this asynchrony in tap k. To maintain a constant asynchrony, positive asynchrony deviations should be followed by shorter intervals and vice versa. Therefore, correction of the next interval is performed by subtracting the magnitude of the current deviation from the estimated tempo, which is why α, the phase correction parameter, appears with a negative sign. When α = 0 there is no correction and the previous asynchrony is carried to the next response; therefore, larger phase correction corresponds to improved performance on the task.

Fig. 3 Trial-by-trial computational modeling of isochronous tapping: parameters estimated for each participant show that individuals with autism have reduced error correction and intact timekeeper and motor noise. a Schematic illustration of the computational model used to dissociate error correction mechanisms from poor timekeeping or motor noise 29,31,32. Each tapping interval (blue empty arrow) is assumed to be the summation of three mechanisms: (1) error correction based on the previous asynchrony (marked in red; the magnitude of the correction is determined by the phase correction parameter α), (2) timekeeping of the base tempo T_k (composed of a fixed t_0, purple, plus the noise at tap k, n_k, green), and (3) motor noise (turquoise). See also notations in Fig. 1a. Fitting was performed using the bGLS (bounded General Least Squares) estimation method 28. b Error correction of phase difference: the fraction corrected (α) is significantly smaller in the ASD group. c Noise in keeping the metronome period and d motor noise do not differ between the groups. b-d Each block was modeled separately, and parameters were averaged over the two assessment blocks. The median of each group is denoted as a line of the same color; error bars around this median denote an interquartile range. Kruskal-Wallis H-statistic and corresponding p value are in the bottom-left corner; p values of comparisons between groups are next to the line connecting the groups' medians. CON control (neurotypical), DYS dyslexia, ASD autism. N = 108 subjects (N_CON = 47, N_DYS = 32, N_ASD = 29); one ASD participant was excluded due to a large number of missing taps (see Methods). Source data are provided as a Source Data file.
Note that we can separate the timekeeper component T_k into a fixed mean (t_0), which is assumed to be equal to the external metronome tempo, and a noise component with variance σ²_T and zero mean (denoted n_k), such that T_k = t_0 + n_k (see Fig. 3a). Previous work suggested that the motor noise, associated with each movement onset, and the timekeeper noise, associated with inter-beat intervals, can be distinguished from one another based on the covariance structure of the noise term 29,31,32 (see Methods). Parameter recovery analysis showed a high correlation between the fitted values and the parameters used to generate simulated data (Spearman correlations were larger than 0.92 for all parameters in each of the three groups), indicating that the fitting procedure was highly reliable (Supplementary Note 3 and Supplementary Fig. 6).
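To make the role of α concrete: substituting the model equation into the asynchrony update e_k = e_{k−1} + r_k − s_k (with a fixed metronome interval s_k = t_0) gives e_k = (1 − α)e_{k−1} + n_k + M_k − M_{k−1}. The sketch below simulates this recursion in Python; the noise magnitudes are illustrative, and the two α values are the group medians reported in the next paragraph.

```python
import numpy as np

def simulate_isochronous(alpha, sigma_t, sigma_m, n_taps=20000, seed=0):
    """Simulate e_k = (1 - alpha) * e_{k-1} + n_k + M_k - M_{k-1}."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0, sigma_t, n_taps)   # timekeeper noise
    M = rng.normal(0, sigma_m, n_taps)   # motor noise
    e = np.zeros(n_taps)
    for k in range(1, n_taps):
        e[k] = (1 - alpha) * e[k - 1] + n[k] + M[k] - M[k - 1]
    return e

for alpha in (0.37, 0.27):               # neurotypical vs. ASD median phase correction
    e = simulate_isochronous(alpha, sigma_t=20.0, sigma_m=10.0)
    lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]
    print(f"alpha={alpha:.2f}  SD={e.std():5.1f} ms  lag-1 r={lag1:.2f}")
```

With identical noise levels, lowering α alone raises both the SD of asynchronies and the lag-1 correlation, reproducing the qualitative ASD pattern of Figs. 1c and 2.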
We fitted the model for each participant separately and compared the group parameters (Fig. 3b-d). Phase correction was (median [interquartile range]) 0.37 [0.21] in both the neurotypical and dyslexia groups, indicating that errors were only partially corrected across consecutive taps, in line with the positive correlation we found (Fig. 2). Yet, phase correction was even smaller (0.27 [0.17]) in the autism group, with a significant group difference (Fig. 3b; Kruskal-Wallis test H(2) = 6.63, p = 0.036). Post hoc analysis showed a significant difference between the neurotypical and autism groups (p = 0.045, Cliff's delta = 0.31) and a marginal difference between the dyslexia and autism groups (p = 0.078, Cliff's delta = 0.32), but no difference between the neurotypical and dyslexia groups (p > 0.95). In contrast to phase correction, we found no group difference in the levels of timekeeping and motor noise (Fig. 3c, d). The specificity of the group difference to phase correction shows that the larger variability in the autism group does not stem from an elevated noise level in either motor or tempo-keeping processes. Importantly, simulations based on the model values fitted per participant reproduced the pattern of differences observed for consecutive correlation values (Supplementary Note 4 and Supplementary Fig. 7).

Experiment 2: tempo switches reveal reduced online updating of external changes in ASD. In the second finger-tapping experiment we asked whether individuals with autism or individuals with dyslexia have difficulties in adapting to changing environments. We tested this by switching the tempo of the auditory metronome, so that within each block the tempo alternated between two options (randomly every 8-12 intervals). We quantified the dynamics of updating to the new tempos in our three groups using both model-free and model-based analyses.
Individuals with ASD fail to adapt to fast changes in the environment. Figure 4 shows the timing of tapping in each population aligned to the onset of the tempo change (left, acceleration; right, deceleration). We present performance using the delay interval d_k (the time interval from the previous metronome stimulus to the following finger tap, illustrated in Fig. 1a), rather than the inter-response interval (r_k), since the delay interval uses a constant reference point (the previous metronome beat), whereas the inter-response interval depends on the previous asynchrony, which varies from tap to tap. For presentation purposes, we aligned the pre-change delay interval with the metronome beat (canceling the difference that originated from the negative mean asynchrony, which varies across individuals). The delay interval in the first beat after the tempo change (beat 0) resembles that of the pre-change delay, since the tempo change at this point was not predicted. Following this initial surprise, participants updated their delay intervals to align with the new metronome tempo. This update was faster for the larger and more salient tempo changes 39,40: in the 90 ms step-size (Fig. 4a, b), which is very salient, the neurotypical and dyslexia groups managed to synchronize to the new tempo after 1-2 metronome beats. This was not the case for the ASD group, which under-corrected in the first and second taps following the change and did not fully adapt even after seven taps. Though this effect is clearest for the 90 ms step-size, similar dynamics can be seen in the 70 ms step-size (Fig. 4c, d). The smaller, 50 ms step-change (Fig. 4e, f) was less salient, and the dyslexia group also took marginally longer to adapt to it than the neurotypical group, though the difference was not significant in any of our analyses (see following sections). The sluggish update in dyslexia is manifested only in the small tempo change, suggesting that large and abrupt changes are not more challenging to individuals with dyslexia, who do not show an updating difficulty, but possibly a reduced perceptual sensitivity to small interval changes. The interpretation of reduced sensitivity to temporal durations, perhaps due to reduced benefits from repeated intervals, is in line with previous observations 41.

Fig. 4 Individuals with autism adapt to changes in tempo only partially, even when changes are very salient. a, b 90 ms step-size, c, d 70 ms step-size, and e, f 50 ms step-size. In each panel, the x-axis represents the metronome-beat number around the moment of tempo change (beat 0), and the y-axis measures the delay interval in each beat aligned to the pre-change metronome (mean group values, ±SEM; values were calculated by first averaging responses within each participant and then across the group; error bars denote SEM across participants). The dashed lines represent the metronome beat. Changes are quickly corrected, particularly for the larger steps (panels a-d). Reduced updates are seen for the smaller 50 ms step changes (panels e, f), where neurotypicals (CON, control) take three to four steps to correct, and individuals with dyslexia (DYS) take longer, perhaps since these steps are less salient. The difficulties of individuals with autism (ASD) are seen for all step changes (including the smallest step-size, panels e, f), and their error is not fully corrected even within seven taps. Each participant tapped through eight to ten accelerations and eight to ten decelerations in each condition.
Individuals with ASD do not fully update to tempo changes even following several seconds. To assess whether updating was attained several beats after the tempo change, we calculated the distributions of the delay intervals in each of the metronome tempos, excluding the four beats immediately after the tempo change, where most of the tempo update takes place, as shown in Fig. 4 (excluding two to six beats after the change produced similar results). If participants eventually adapt to the change in tempo, the two distributions should be highly separable. This was quantified using measurements from signal detection theory: the sensitivity index (d′) and the area under the curve (AUC) of the receiver operating characteristic (ROC). In the 90 and 70 ms step-sizes (Fig. 5a-h) we obtained comparable values for the neurotypical and dyslexia groups, and reduced values for the autism group, though in the 50 ms step-size (Fig. 5i-l) the values of the dyslexia group were between those of the neurotypical and autism groups. This pattern was replicated when we looked at single-participant values: on both measures (d′ and AUC), there was a significant difference between the groups in all conditions (Kruskal-Wallis test; all p < 0.012). Post hoc comparisons showed a significant difference between the autism and neurotypical groups in all step-sizes and measures (all p < 0.008), and between the autism and dyslexia groups in the larger step-sizes. The difference between the neurotypical and dyslexia groups was not significant in any step-size (all p > 0.4).
Importantly, d′ and AUC are both affected by the SD of the distribution of asynchronies. Since the SD in the ASD group is larger (Experiment 1), normalizing by the SD would decrease d′ in this group more than in the other groups. To see whether there is an impairment in the autism group over and above the increased variability, we used the difference between the means of the distributions without SD normalization. We found comparable values for the neurotypical and dyslexia groups, and smaller values in the ASD group, for the 90 and 70 ms step-size conditions (Fig. 5a-h). Looking at single participants, this pattern was preserved (Kruskal-Wallis test, 90 ms: p = 0.007; 70 ms: p = 0.014), with post hoc comparisons showing significant differences between the neurotypical and autism groups and between the dyslexia and autism groups (all p < 0.05), with no difference between neurotypical and dyslexia (p > 0.6). For the 50 ms step-size (Fig. 5i-l) we found that the dyslexia group's value was midway between those of the neurotypical and ASD groups, as in other measures of small tempo changes (single-participant Kruskal-Wallis test: p > 0.2). Combined measures (formed by z-scoring each step-size condition using the mean and SD of the neurotypical group, then averaging over the different conditions) showed a significant difference between the groups in all measures (Kruskal-Wallis test, p < 0.002 for d′ and AUC and p = 0.013 for the difference of means), and post hoc comparisons showed no differences between the neurotypical and dyslexia groups (all p > 0.4), but significant differences between the neurotypical and ASD groups (p < 0.001, Cliff's delta > 0.45 for d′ and AUC and p = 0.017, Cliff's delta = 0.37 for the difference of means) and between the dyslexia and ASD groups (p = 0.04 for AUC and difference of means, p = 0.08 for d′, all Cliff's delta > 0.35).
Modeling the parameters underlying tempo switches reveals slow period-updating in ASD. In Experiment 1 the mean timekeeper period was assumed to be a fixed value (t_0), the metronome period. To model changing environments, we now enabled changes in the mean estimate of the timekeeper, so that instead of decomposing T_k into a fixed mean and a noise component as we did in the isochronous case, we use the following equation:

$$T_k = t_k + n_k$$

where t_k adapts dynamically to the changes in tempo. The estimate of the tempo should be informed by the asynchrony: large positive errors indicate an acceleration in tempo (the period getting shorter), so the internal estimate must be reduced, and large negative errors indicate a deceleration. We used a model proposed by Schulze et al. 42, where this intuition regarding tempo correction is implemented using the following equation (Fig. 6a):

$$t_k = t_{k-1} - \beta e_{k-1}$$

where β is a parameter denoting the proportion of correction of the period estimate for interval k. Optimally, the period estimate should track the changes in the external tempo, but it would not be an ideal strategy to change this internal estimate too rapidly, since asynchrony errors can also result from noise in the participant's taps. The magnitude of β determines the pace of this updating procedure. To disentangle the estimates of the phase correction (α) and period correction (β) we use the bGLS method 28 (see Methods). To enhance the model's sensitivity to the changes, we used only the segments immediately before and after the tempo change. We fit the model to each tempo-change segment separately and averaged the resulting parameter values for each step-size (first per block, and then across blocks). The extended model explained the data of Experiment 2 substantially better than a model without period correction, namely the model of Experiment 1 (likelihood ratio test, p < 0.001 for all subjects; the Akaike information criterion (AIC) for the extended model is smaller than for the original model for all subjects). Adequate parameter recovery is shown in Supplementary Fig. 6 and Supplementary Note 3.
In each of the step-size conditions, we found a significant group difference in period correction (Kruskal-Wallis test, all H(2) > 8, p < 0.018), with no significant differences in the other parameter estimates (Kruskal-Wallis test, all H(2) < 3.3, p > 0.2). Since the optimal values for error correction depend on context, we obtained combined estimates by z-scoring each parameter for each step-size condition (using the mean and SD of the neurotypical group) and averaging over the different conditions (Fig. 6b-e). The combined period correction differed significantly between the groups (Fig. 6b). Post hoc comparisons showed a significant difference between the autism and neurotypical groups (p = 0.0002, Cliff's delta = 0.54) and between the autism and dyslexia groups (p = 0.048, Cliff's delta = 0.35), with no difference between the neurotypical and dyslexia groups (p = 0.34). No differences were found in the other estimated parameters (all p > 0.16, see Fig. 6c-e), including z-scored phase correction (α). Simulations based on the fitted values of each participant reproduced the observed patterns of reaction to changes that were characteristic of each group (compare Fig. 4 to Supplementary Fig. 8, Supplementary Note 4).
To conclude, individuals with autism show reduced initial updating of tempo, which is not fully corrected within the next 3-4 s (>7 taps), as can be seen in Figs. 4, 5.
Having found group differences in phase correction in a stationary environment (α, Experiment 1) and in period correction in the changing-tempo protocol (β, Experiment 2), we asked whether these two parameters denote separate mechanisms or, alternatively, both reflect the same mechanism of online error correction. In a tempo-change paradigm, the relative contributions of the processes of correction for phase error and for period error are difficult to dissociate, since these errors are temporally correlated 39,43. The large errors immediately following the tempo change are always the summation of the error directly induced by the metronome's tempo change (which requires a genuine period correction) and the error induced by the participant's inability to predict the point of tempo change (inducing an additional step-change phase error at beat zero). To resolve this ambiguity, we assessed the cross-participant correlation between the parameter of phase correction in Experiment 1 (Fig. 3b) and period correction in Experiment 2 (Fig. 6c). We found significant positive correlations in each of the three groups separately (Spearman correlations: ρ_CON = 0.44 (p < 0.002), ρ_DYS = 0.5 (p < 0.005), and ρ_ASD = 0.61 (p < 0.001), Fig. 7a-c) and when combining the groups (ρ_ALL = 0.55 (p < 0.001)). By contrast, there were no significant correlations between the other error correction parameters in any of the three groups (all |ρ| < 0.18, p > 0.35, for the correlations between the two error terms of Experiment 2 and the two estimations of phase correction). This combined pattern of correlations suggests that phase correction in Experiment 1 and period correction in Experiment 2 are manifestations of a common mechanism of online error correction. We therefore formed a combined update-rate score by averaging the correction parameters of both experiments (again after z-scoring with respect to the neurotypical group). Update rate showed a significant difference between the groups (Fig. 7d). Post hoc comparisons revealed a significant difference between the neurotypical and ASD groups (p < 0.002, Cliff's delta = 0.45) and between the dyslexia and ASD groups (p = 0.045, Cliff's delta = 0.35), with no difference between the neurotypical and dyslexia groups (p > 0.65, Cliff's delta = 0.12). Overall, the autism group had a substantially lower updating rate, yielding slower correction rates in both fixed and changing environments.
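The combined score described above is straightforward to compute. Below is a hedged Python sketch, assuming per-participant arrays of fitted α (Experiment 1) and condition-averaged β (Experiment 2) plus a boolean mask of neurotypical participants; all variable names and toy values are hypothetical.

```python
import numpy as np
from scipy import stats

def combined_update_rate(alpha_exp1, beta_exp2, is_control):
    """z-score each correction parameter against the neurotypical group, then average."""
    def z(x):
        x = np.asarray(x, float)
        ref = x[np.asarray(is_control)]
        return (x - ref.mean()) / ref.std(ddof=1)
    return (z(alpha_exp1) + z(beta_exp2)) / 2.0

# Toy usage with hypothetical fitted values (47 controls out of 108, as in the paper)
rng = np.random.default_rng(2)
alpha_exp1 = rng.uniform(0.1, 0.6, 108)
beta_exp2 = 0.8 * alpha_exp1 + rng.normal(0, 0.1, 108)   # correlated, as in Fig. 7a-c
is_control = np.arange(108) < 47
score = combined_update_rate(alpha_exp1, beta_exp2, is_control)
rho, p = stats.spearmanr(alpha_exp1, beta_exp2)
print(round(rho, 2), round(float(score[is_control].mean()), 2))  # control mean z is ~0 by construction
```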
Update rate is correlated with communication and mindreading skills. Since previous literature suggests that synchronization is associated with social skills 17,18, we asked whether slower updating is correlated with these skills among our participants in the neurotypical and autism groups. We administered to participants in both groups the AQ50 (Autism Quotient), a self-report questionnaire aimed at assessing the severity of autism-related traits 44. Nineteen of our participants with autism and 37 neurotypical participants filled in the questionnaire. The questionnaire is composed of several subscales which together assess several traits associated with autism, including social and communication skills. Higher scores on the questionnaire indicate more autistic traits. Accordingly, we found significantly higher scores in the autism group (median [interquartile range]: 75 [23.5]) compared with the neurotypical group (median [interquartile range]: 52 [17.5]), Wilcoxon rank-sum test, p < 0.001, Cliff's delta = 0.69.

Fig. 7 Rates of online error correction in stationary and in changing environments reflect a single updating mechanism. The estimated phase correction from Experiment 1 and the estimated period correction from Experiment 2 are highly correlated in all groups: a neurotypical (CON, control), b dyslexia (DYS), c autism (ASD), suggesting that both are manifestations of a common underlying mechanism of error correction, which determines the speed of integrating new sensory data to guide behavior. The significance of the Spearman correlation was calculated using a two-sided test; p values are uncorrected. Overlayed regression lines predict phase correction (Experiment 1) from period correction (Experiment 2) with an intercept term. d The combined update rate is significantly smaller in the ASD group but does not differ between the neurotypical and dyslexia groups. The median of each group is denoted as a line of the same color; error bars around this median denote an interquartile range. Kruskal-Wallis H-statistic and the corresponding p value are plotted in the bottom-left corner; p values of comparisons between groups are plotted next to the line connecting the groups' medians. N = 108 subjects (N_CON = 47, N_DYS = 32, N_ASD = 29); one ASD participant was excluded from the computational modeling of Experiment 1 due to a large number of missing taps (see Methods). Source data are provided as a Source Data file.
We hypothesized that a slower update rate (the combined z-score of α in Experiment 1 and β in Experiment 2, Fig. 7d) would correspond to poorer social and/or communication skills. We used the three-factor model of the AQ50 proposed by Austin 45, which separates individuals' cognitive social abilities (theory of mind, i.e., their ability to understand other people's thoughts; communication/mindreading factor) from their emotional propensities (joy from being with others and socializing; social skills factor). The combined update rate was not correlated with the social skills factor in either group (ρ < 0.12 for neurotypicals and p > 0.5 for individuals with ASD). However, it was significantly correlated with the communication factor in both the neurotypical (Fig. 8b, Spearman correlation ρ_CON = −0.36, p < 0.03) and the ASD groups (Fig. 8c, Spearman correlation ρ_ASD = −0.44; in a two-tailed test p = 0.058, in a one-tailed test p < 0.029, which is justified based on our a priori hypothesis). Importantly, despite the large group difference in the communication factor (Wilcoxon rank-sum test, p < 0.0002, Cliff's delta = 0.63, Fig. 8a), the neurotypical and ASD groups showed a similar pattern of correlation between communication skills and updating rate (Fig. 8b, c). Bootstrap permutations showed that the correlation values of the two groups were not significantly different (p = 0.78), and both could be approximated using the correlation in the combined group (see Methods, section AQ50 questionnaire). We therefore also assessed the correlation across both groups (Fig. 8d), which was highly significant (Spearman correlation ρ_ALL = −0.44, two-tailed test p < 0.0015 (Bonferroni corrected for two factors)).
Discussion
We found that individuals with autism fail to synchronize their movements to external cues, unlike individuals with dyslexia, who are able to synchronize adequately. Using trial-by-trial computational modeling, we were able to pinpoint the underlying deficit precisely: we found that the levels of noise in both motor processing and internal timekeeping are intact in individuals with autism, yet they use recent sensory information to a lesser degree than the other two groups. Consequently, they are slower to correct their synchronization errors (Experiment 1) and slower to adapt their internal representation to changes in the environment (Experiment 2).
To understand the pattern of deficits found in the autism group we used a well-established model of sensorimotor synchronization 31,32. In this model, each tap is informed by two distinct sources of prior information: a long-term source, the timekeeper, holding information about the distribution of inter-beat intervals accumulated over the experiment; and a short-term source, responsible for online error correction, that relies on the most recent asynchronies. Together, the mean value of the timekeeper (the metronome tempo) and the error of the most recent tap provide prior information for performance in the current trial. The long-term component (the mean value of the timekeeper) is reliably kept by the participants with autism, whereas the recent information, which needs to be quickly integrated into the timing of the next tap either due to inherent noise in motor execution or to a sudden tempo change, is used less by individuals with autism than by the neurotypical and dyslexia groups, suggesting a slower integration rate.
This observation suggests an underweighting of recent sensory information into a form that can be used to guide behavior, in line with the "slow updating" framework 11 . Importantly, in this study participants had a strong incentive to utilize recent sensory information, which always improved synchronization, but individuals with autism nonetheless failed to do so. The "slow updating" framework proposes that Bayesian integration will be impaired in autism when fast integration is needed but will otherwise be intact. This stands in contrast to the predictions of "increased volatility" accounts, which propose that individuals with autism overestimate the volatility of the environment 10,12 , or that individuals with autism overweigh their prediction errors 9 . According to these accounts, individuals with autism evaluate the environment's statistics as changing more frequently than it actually does, and therefore they would be expected to quickly update their internal model to meet their estimated rate of environmental change. We directly tested this prediction by using blocks with changing tempos and found reduced updating in the autism group, rather than accelerated updating, in line with the "slow updating" framework.
The slow-update conceptualization explains many seeming inconsistencies in the literature assessing motor performance, sensorimotor performance, and even finger tapping in ASD. This literature characterized motor skills but did not study the rate of online updating as the limiting bottleneck. For example, when individuals were asked to keep tapping after the metronome stopped (unpaced tapping), the performance of the ASD group was comparable to neurotypicals' 46,47. This seems surprising, since these conditions are more cognitively demanding 48. However, the constraint on performance here is keeping the previous tempo, i.e., the robustness of working memory rather than synchronization with external stimuli. In such conditions, slow update counterintuitively predicts that the performance of individuals with autism will be similar to that of neurotypical controls, since serial online error correction is not a limiting bottleneck. This is indeed the observation 46,47 and is also consistent with our finding of similar timekeeper noise in the three populations (Figs. 3c, 6d). Similarly, in demanding tasks that require more complicated learning mechanisms, and hence do not rely on online error correction, individuals with ASD are expected to show typical performance, which is indeed the case 49. However, when test conditions require online synchronization, their performance manifests elevated variability 15. Interestingly, in line with our findings of reduced serial error correction in the ASD group, the error-related negativity (ERN) event-related potential has a lower amplitude and longer latency in ASD 50,51. This ERP component is also associated with the correction of large asynchronies in finger tapping 52.
Our analyses also suggest a mechanistic account of the motor "clumsiness" already reported in early descriptions of autism 53 and commonly observed since 54,55. We find that motor function is not inherently noisy in autism, but rather that the process of integrating sensory information into motor plans is slower. Hence, while there is an essential sensory component to many movement forms, we expect individuals with autism to experience the greatest difficulty when fast integration is required. This prediction is supported by recent reviews analyzing the core difficulties underlying poor sensorimotor integration in autism 4,5.
Whyatt & Craig 4 show that the motor deficit in autism is specific to tasks requiring fast sensorimotor integration, for example, individuals with autism show a deficit in catching a ball, which requires rapid integration of visual information, while they show intact throwing, which is internally driven. Both reviews suggest that impaired sensorimotor integration may underlie all deficits found in autism spectrum disorder. We propose that impaired sensorimotor integration stems from reduced use of sensory evidence to correct for errors, which is a specific manifestation of slow updating of internal models 11 .
The specific stage of processing which yields the slow update is difficult to pinpoint. The slower processing stage could occur at the perceptual level, in which case the motor manifestations are inherited. Namely, the tapping task relies on fast and accurate error calculation, which requires fast comparisons between the timing of the external metronome and the proprioception of the finger tap. If cross-modal integration is sloppier in autism, or temporal windows are less precise 3,56,57, then perhaps occasionally no error is calculated, leading to a bias toward underestimating the error, and consequently to reduced synchronization. In our model, this would lead to smaller values of α and β. A recent study using Bayesian modeling to understand the deficits of individuals with autism in a visual path integration task can also be understood within this framework. Noel et al. 58 found significantly larger variability in motor execution in the autism group, and their modeling framework revealed that individuals with autism are impaired in scaling their sensory likelihood function when executing the next action. Inadequate scaling can be a sign of poor updating of priors but can also stem from impairments at the sensory level.
We should note, however, that in both Noel et al.'s path-integration task and our paced finger-tapping task, the impaired use of sensory information was measured in conditions of serial actions, where adequate performance requires fast integration of sensory information to inform the next behavior. In conditions where trials are embedded in a setting that does not rely on fast cross-trial or cross-response updates, the responses of individuals with autism are typically fast and temporally accurate 59,60. For example, assessing temporal estimation, Edey et al. 61 presented participants with four auditory (or visual) stimuli with equal temporal intervals in each trial and asked participants to listen to the first two stimuli and press a button in temporal alignment with the third and fourth. The temporal accuracy of participants with autism was similar to that of neurotypical participants, and even better than neurotypicals' in the visual task. Adequate perception of tempo is in line with our findings of adequate timekeeper noise. Importantly, however, their study did not assess serial-dependence effects across trials. When serial effects were measured in a temporal reproduction task, and the impact of the intervals of previous trials was assessed, it was found that children with autism underuse previous intervals 62, in line with the "slow updating" framework. A difference in serial-dependency profiles between the groups may also underlie the higher accuracy of the autism group in the visual condition observed by Edey et al. It has been shown in several contexts that visual sensorimotor synchronization is noisier than auditory sensorimotor synchronization 23,48,63, which may lead participants, particularly neurotypicals, to increase the magnitude of serial dependency 64, and perhaps consequently hamper their performance 11.
Our observation of synchronization difficulties in a nonsocial context indicates that poor synchronization is not a unique outcome of a lack of social interest 2 . Rather, reduced synchronization may reduce the interest in other people's state of mind, though causality is likely to operate in both directions. We found a correlation between our measure of update rate and mindreading skills, in both neurotypicals and people with ASD, yet we did not find a significant correlation with social joy. There is also other evidence for distinct processes underlying the neurocognitive vs. affective influences on social skills 65 . Therefore, it is possible that the update rate taps onto one mechanism, but not all. Further studies, which include direct clinical measures, are needed to clarify the functional relations.
In contrast to the autism group, the dyslexia group had no difficulties in sensorimotor synchronization. This observation is at odds with the temporal sampling framework of dyslexia 66, which posits that individuals with dyslexia have problems with oscillatory entrainment, specifically in the delta range (1.5-4 Hz). The temporal sampling theory predicts impairment in rhythmic motor performance at the tested rate of 2 Hz. However, early studies of individuals with dyslexia found no deficit in simple paced tapping tasks 67,68. Follow-up studies 69,70 obtained mixed results in paced finger tapping, and difficulties depended on the exact tempo around 2 Hz. Still, we should note that we did find a subtle deficit in the dyslexia group in adapting to small tempo changes (50 ms), though not in the isochronous condition. The specificity of the very mild deficit in dyslexia to small changes in tempo suggests that it reflects a slightly reduced sensitivity to tempo, perhaps due to reduced benefits from interval repetition 41, but we cannot rule out alternative accounts. Though the difference from the neurotypical group was not significant in any of our analyses, in the small tempo change the dyslexia group's performance also did not significantly differ from that of the ASD group.
To conclude, our study compared two prominent computational accounts of autism-the "increased volatility" account and the "slow updating" account. Our results support the "slow updating" account, which proposes that slow update of internal representations is a core deficit of autism, contributing to both perceptual and motor difficulties. More broadly, our study demonstrates how computational modeling can be used in order to better understand the dynamics of information processing in perception and action in both typical and atypical populations. This approach can lead to the novel integration of computationally informed methods for clinical applications.
Methods
Participants. Neurotypical participants and participants with dyslexia were recruited through advertisements at the Hebrew University of Jerusalem and colleges near the university. Participants with ASD were recruited through clinics (including author T.E.'s clinic), designated facilities, and support groups. Multiple recruitment sources were used to balance any potential biases that each single source might suffer from. All participants in the dyslexia group had been diagnosed by authorized clinicians as having a specific reading disability, and all participants with ASD were diagnosed by authorized clinicians and were consequently entitled to Israeli government support aimed specifically at individuals with ASD. All participants were native Hebrew speakers (either born in Israel or immigrated to Israel before the age of 4 years), with no more than minimal musical education (less than 3 years of self-reported musical education). We added the latter restriction (as in ref. 11) since performance on sensorimotor tasks may be enhanced by musical background 71-73 and may affect clinical groups to a different extent 74. We recruited participants to all groups within a predefined time period, which was to be extended if one of the groups contained fewer than 20 participants. By the end of the recruitment period, all groups were larger than 20 participants. All participants completed a set of cognitive assessments, which evaluated general reasoning skills by the standard Block Design task (WAIS-IV 75) and reading abilities by pseudoword and paragraph reading (details can be found in ref. 11). They all performed the same finger-tapping protocol (Experiments 1 and 2). Participants in all groups were randomly sampled.
Data were collected from 133 participants (56 neurotypical, 39 dyslexia, and 38 autism). Of these, N = 24 (N_ASD = 8, N_DYS = 7, N_CON = 9) were excluded. Our exclusion policy (determined prior to data collection) was aimed to ensure that the general reasoning skills of all participants were no less than two SDs below the general population mean (scaled Block Design scores > 6), that age and general reasoning scores were matched across the three groups, and that the reading skills of the neurotypical and ASD groups were matched. Since the focus of this research was the ASD population, we excluded participants in a way that kept the largest number of participants with ASD. Excluding all participants with a Block Design score < 7 excluded one participant with dyslexia and six with ASD. Matching Block Design scores while keeping as many participants with ASD as we could led us to exclude neurotypical and dyslexia participants with Block Design > 15: seven neurotypical and four with dyslexia. Reading-related measures (assessed in the lab) led to excluding one neurotypical participant with exceptionally low pseudoword reading (more than 2 SDs below the group mean) and two participants with dyslexia with exceptionally high pseudoword reading scores (more than 2 SDs above the group average). Finally, three participants were excluded due to extreme mean asynchrony values (more than 3 SDs above the population mean, based on previous studies): one neurotypical and two in the autism group. The final sample consisted of 109 participants (47 neurotypical, 32 dyslexia, and 30 autism). These groups were matched in age and reasoning skills, measured by the standard Block Design task. Results of these assessments are reported in Supplementary Table 1. Importantly, this exclusion policy only weakened the results reported in the paper (namely, the sample without the exclusions shows larger effect sizes than those we report), since neurotypical participants with higher Block Design scores tend to be better tappers (lower SD, better error correction) and individuals with ASD with lower Block Design scores tend to be poorer tappers. All experiments were approved by the Ethics Committee of the Psychology Department of the Hebrew University and the Helsinki Ethics Committee of Sheba Hospital (required for testing individuals with ASD recruited through their adult clinic). All participants provided written informed consent and were financially compensated for their time and travel expenses.
Finger tapping experimental design. Participants heard a series of metronome beats and were asked to start tapping in synchrony with the metronome. To help participants synchronize, they were instructed to listen to the metronome first and start tapping after about three metronome beats 23. The metronome beats were heard through headphones at a comfortable presentation level. Tapping was performed on a custom-made wooden box with a built-in microphone that recorded the participant's responses. We used either a Focusrite Saffire 6 USB or a Focusrite Scarlett 2i2 sound card, which simultaneously recorded the output from the microphone installed inside the box and a split of the headphone signal, using the open-source software Audacity (https://www.audacityteam.org/), so that the playback latency and jitter could be estimated for each recording. Onsets were extracted from the stereo audio signal using a custom Matlab script. The overall latency and jitter obtained in this way, measured separately using calibration hardware, were about 2 ms 76.
The task consisted of 12 blocks, each containing ~100 metronome beats. Rhythmic patterns consisted of identical short percussive sounds ("clicks") lasting 55 ms with an attack time of 5 ms, generated from amplitude-modulated white noise. Blocks were separated by short pauses of 5 s. Participants had two breaks, after the third and eighth blocks. Prior to the test procedure, all participants completed one block of practice. Researchers were present during the demo block but usually left the room for the experimental session, except in rare cases when the testing conditions did not enable this. The researchers were not blind to the hypothesis or condition during data collection.
Blocks were separated into six conditions and each was repeated twice. The first condition (Experiment 1) had an isochronous tempo of 2 Hz: beats were presented with an inter-onset interval (IOI) of 500 ms, known to be close to the optimal tempo for synchronization 23,77. The other five conditions (Experiment 2) were composed of alternating tempos. In each block, the metronome tempo alternated between two options, which differed symmetrically from the baseline tempo of the isochronous condition (500 ms): one tempo was faster than this baseline and the other was slower. Metronome changes occurred randomly every 8 to 12 intervals, thus both changes were repeated several times in each block (the design was similar to ref. 43). We used five different conditions with deviations ranging from ±5 to ±45 ms, in steps of 10 ms: (1) 495 and 505 ms (±5 ms, step-size of 10 ms), (2) 485 and 515 ms (±15 ms, step-size of 30 ms), (3) 475 and 525 ms (±25 ms, step-size of 50 ms), (4) 465 and 535 ms (±35 ms, step-size of 70 ms), and (5) 455 and 545 ms (±45 ms, step-size of 90 ms). Each block contained two types of changes: acceleration (slow-to-fast tempo change) and deceleration (fast-to-slow tempo change). For example, in condition (3) the acceleration was a change from 525 to 475 ms and the deceleration was the change from 475 to 525 ms. The 12 task blocks (including Experiment 1 and Experiment 2) were presented in one of four pseudorandomized orders.
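For illustration, one such alternating-tempo block can be generated as follows (Python sketch; the exact randomization beyond "every 8-12 intervals" is not specified in the text, so the uniform draw is an assumption):

```python
import numpy as np

def tempo_switch_block(ioi_a, ioi_b, n_beats=100, seed=0):
    """Metronome onset times (ms) alternating between two IOIs every 8-12 intervals."""
    rng = np.random.default_rng(seed)
    iois, current = [], float(rng.choice([ioi_a, ioi_b]))
    while len(iois) < n_beats - 1:
        run = int(rng.integers(8, 13))          # 8-12 intervals at the current tempo
        iois.extend([current] * run)
        current = ioi_b if current == ioi_a else ioi_a
    return np.concatenate([[0.0], np.cumsum(iois[:n_beats - 1])])

onsets = tempo_switch_block(475.0, 525.0)        # the 50 ms step-size condition
print(len(onsets), onsets[-1])                   # 100 beats spanning ~50 s
```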
As explained above, the tempo changes in Experiment 2 covered a broad range and were chosen based on previous literature, which tested musicians or trained participants 43,78. Our novice, musically untrained participants had markedly higher tracking thresholds: the two smaller step changes were largely unnoticed by our participants (Supplementary Fig. 5). We therefore focused our analyses on the three larger step-sizes shown in Figs. 4-7. Importantly, the computational modeling results remain highly significant also when including the smaller tempo changes.

Finger tapping analyses. All analyses and statistical procedures were performed using Matlab (version 2019b). To measure synchronization, we used the time interval between the metronome stimulus and the participant's responses (asynchrony, denoted e_k; see Fig. 1a). Participants usually anticipate the metronome beat, resulting in a negative mean asynchrony 21,23. We denote by r_k and s_k the inter-tap interval and inter-stimulus interval between taps k-1 and k, respectively. We denote by d_k the delay interval between metronome beat k-1 and the next participant tap (corresponding to beat k). Note that r_k = d_k − e_{k−1} (see Fig. 1a).
A model-free characterization of tapping performance in a given block is given by the mean asynchrony and the SD of asynchronies in that block. In Experiment 2, perturbations of the metronome tempo occurred at unexpected time points; therefore, we computed the mean and SD after removing the contribution of the unexpected perturbation (s_k − s_{k−1}). Namely, we computed the mean and SD of e'_k = e_k + (s_k − s_{k−1}). Results (Fig. 1b, c and Supplementary Figs. 2, 3) were averaged over the two repetitions of each condition. We excluded response taps that were outside a window of ±200 ms surrounding metronome beats 23. Omitted taps are cases where the participant did not tap within a 400 ms window around the metronome beat. Overall, there was a small number of omitted or excluded taps: less than 5% of the taps (across experiments). In Experiment 1 the percentage of omitted or excluded taps was (median [interquartile range] (%)): neurotypical: 0.5 [1.3], dyslexia: 0.5 [1.5], autism: 1.2 [4.9]. The difference between the groups was marginally significant (Kruskal-Wallis test, H(2) = 5.8, p = 0.055), corresponding to our finding of more variable tapping in the autism group. In Experiment 2 the percentages were (median [interquartile range] (%)): neurotypical: 1.3 [2.5], dyslexia: 1.5 [3], autism: 2.8 [10], which is again marginally significant (Kruskal-Wallis test, H(2) = 5.58, p = 0.06). Computational modeling was performed only on blocks with less than 40% omitted or excluded taps. This excluded three blocks from Experiment 1 and eight blocks from Experiment 2 (one block from the 50 ms step-size, four blocks from the 70 ms step-size, and three blocks from the 90 ms step-size).
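A simplified reconstruction of this tap-beat matching in Python (not the authors' Matlab script); it takes the nearest tap within ±200 ms of each beat and returns NaN for omitted taps, without guarding against one tap matching two beats, which is rare at a 500 ms IOI:

```python
import numpy as np

def asynchronies(beat_times, tap_times, window=200.0):
    """Signed asynchrony (tap - beat, ms) per beat; NaN where no tap falls in the window."""
    beats = np.asarray(beat_times, float)
    taps = np.asarray(tap_times, float)
    out = np.full(len(beats), np.nan)
    if len(taps) == 0:
        return out
    for i, b in enumerate(beats):
        j = np.argmin(np.abs(taps - b))          # nearest tap to this beat
        if abs(taps[j] - b) <= window:
            out[i] = taps[j] - b
    return out

beats = np.arange(0, 50000, 500.0)
taps = beats - 60 + np.random.default_rng(3).normal(0, 25, len(beats))
e = asynchronies(beats, taps)
print(round(np.nanmean(e), 1), round(np.nanstd(e), 1))   # near -60 ms mean, ~25 ms SD
```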
Autocorrelation analysis
As a first approach to assess the rate of error correction, we computed the (Pearson) correlations between consecutive asynchronies (e_k). For this analysis we used perceived asynchronies, meaning the deviation of the current asynchrony from the mean asynchrony (e_k − mean_k(e_k)), rather than from the metronome beat. In the population analysis (Fig. 2a-c), we calculated the correlation in each group using data from all subjects together. In the single-subject analysis (Fig. 2d), we used data from both blocks.

Autoregressive model (Supplementary Note 2 and Supplementary Fig. 4)

To study the timescale of serial dependence in tapping tasks, we used an autoregressive model, where each asynchrony is predicted by several previous asynchronies (with no intercepts, since we used the perceived asynchronies e_k − mean_k(e_k)). To determine the number of previous asynchronies to use in the model, we ran a stepwise regression analysis both at the group level and for each participant separately. In each step of the regression, an additional asynchrony (going one tap back from the earliest asynchrony already incorporated into the model) was added if the F value of the SSE (sum of squared errors) had a p value < 0.1. In the group model, we used separate predictors for each subject, but the F value was calculated based on adding an additional predictor for all subjects in the group. The final group-level model included three predictors in all three groups, and at the single-participant level it included one to three predictors for 103/109 participants, indicating that phase correction is a rapid process. We therefore fit an autoregressive model with four predictors (for all participants, we predicted asynchrony k from asynchronies k-1, k-2, k-3, and k-4). Formally, our model can be written as:

$$e_k = \sum_{j=1}^{4} a_j\, e_{k-j} + \xi_k$$

where ξ_k is independent Gaussian noise. The model combined data from both experiment blocks.
Computational model of sensorimotor synchronization
To test whether individuals with autism show noisier representations or "sloppier" motor production, we used a computational model of sensorimotor synchronization 29,31,32. The model assumes that the interval between two consecutive taps is influenced by three components: timekeeping of the metronome tempo, motor execution, and phase correction. Formally, the model can be written as follows (see Fig. 3a):

$$r_k = T_k - \alpha e_{k-1} + M_k - M_{k-1}$$

where r_k is the time interval between the participant's k-1 and k taps and e_{k−1} is the perceived asynchrony at beat k-1 (relative to the mean asynchrony). T_k is the representation of the metronome tempo (timekeeper), which is composed of two parts: a fixed mean (t_0) and a Gaussian noise component (n_k), assumed to have zero mean and variance σ²_T (σ_T is referred to as timekeeper noise). M_k models the noise in the motor processing, also assumed to be Gaussian with zero mean and variance σ²_M (σ_M is referred to as motor noise). Lastly, we denote by α the phase correction, which is the proportion of the previously perceived asynchrony that is corrected in the next tap. Optimally, positive asynchrony deviations should be followed by shorter intervals; therefore the phase correction parameter α appears with a negative sign. This way α = 1 corresponds to full correction, and similarly, α = 0 means that the participant's asynchrony is carried fully into the next tap. The contributions of the timekeeper and motor noise to performance can be separated since they influence the covariance of inter-tap intervals differently: only the motor noise influences both r_k and r_{k−1}. Ref. 28 showed that a naïve implementation of this approach results in biased estimates, but under the assumption of an upper bound on the magnitude of the motor noise (σ_M < σ_T), the parameters of the model can be reliably estimated.
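This covariance signature is easy to verify numerically. With no correction (α = 0), r_k = t_0 + n_k + M_k − M_{k−1}, so Var(r_k) = σ_T² + 2σ_M² while Cov(r_k, r_{k−1}) = −σ_M². A short check with illustrative noise levels:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_t, sigma_m, N = 20.0, 10.0, 200_000
n = rng.normal(0, sigma_t, N)                 # timekeeper noise per interval
M = rng.normal(0, sigma_m, N)                 # motor noise per tap
r = 500.0 + n[1:] + M[1:] - M[:-1]            # inter-tap intervals with alpha = 0
print(round(np.var(r), 0))                    # ~ sigma_t**2 + 2*sigma_m**2 = 600
print(round(np.cov(r[:-1], r[1:])[0, 1], 0))  # ~ -sigma_m**2 = -100
```

Only the motor term appears in consecutive intervals, with opposite signs, which is what allows the two noise sources to be disentangled.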
We fit the model for each block separately and averaged the two repetitions of the isochronous condition (Fig. 3). Blocks with more than 40% missing values (omitted or excluded taps) were excluded from this analysis (three blocks altogether, two from the same subject, who was excluded from the computational modeling results). Parameter fitting was performed using the bGLS method described in ref. 28. Importantly, the version of the algorithm for parameter extraction in ref. 28 does not enable fitting with missing values. We adapted the algorithm to enable fitting with missing data (Supplementary Note 5). Adequate parameter recovery using this method is shown in Supplementary Note 3 and Supplementary Fig. 6.
Response dynamics to changes in tempo
To assess how participants respond to changes in the tempo, we aligned the participants' responses to the tempo change and averaged each participant's responses to acceleration and deceleration separately (Fig. 4). For presentation purposes, we aligned the baseline delay intervals to the metronome tempo by subtracting the average asynchrony in the two intervals before the change from the delay interval values of the entire segment. We included only transitions where all responses were available from two taps before the change (to establish a baseline asynchrony) to seven taps after the change (to assess the full progression of the adaptation process); a windowing sketch follows below. Transitions with missing values in this range, or that were too close to the start or end of the block, were excluded. Figure 4 shows only participants with at least two repetitions of a given transition magnitude and direction (for each step-size and transition direction, between one and six participants were excluded across all groups).
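The windowing logic might look roughly like this; the argument names are hypothetical, and missing taps are assumed to be coded as NaN.

```python
import numpy as np

def align_to_changes(delays, asynchronies, change_idx, pre=2, post=7):
    segments = []
    for c in change_idx:
        if c - pre < 0 or c + post + 1 > len(delays):
            continue  # transition too close to the start or end of the block
        base_win = np.asarray(asynchronies[c - pre:c], dtype=float)
        seg = np.asarray(delays[c - pre:c + post + 1], dtype=float)
        if np.isnan(seg).any() or np.isnan(base_win).any():
            continue  # exclude transitions with missing values in this range
        # Baseline-correct by the average asynchrony of the two pre-change taps
        segments.append(seg - base_win.mean())
    return np.array(segments)
```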
Update to changes after several taps
We used the distributions of the delay intervals under each metronome tempo separately (using data from both repetitions of each condition). We excluded the four beats immediately following the change (including the moment of change; excluding two to six beats after the change produced similar results). If participants fully adapt to the change, the two distributions should be highly separable. We quantified this using three measures (section Individuals with ASD do not fully update to tempo changes even following several seconds, Fig. 5; a code sketch of the three measures follows the list):

1. Sensitivity index, or d′: the difference between the means normalized by the pooled SD:

d′ = (μ_{d1} - μ_{d2}) / sqrt((σ²_{d1} + σ²_{d2}) / 2)

where μ_{d1} and μ_{d2} are the means of distributions 1 and 2 and σ²_{d1} and σ²_{d2} are the variances.

2. AUC: we create an ROC curve by varying the threshold of a binary classifier designed to discriminate between the two distributions (such that a delay interval below the threshold is marked as short tempo, and a delay interval above the threshold is marked as long tempo). For each threshold, we calculate the percentage of true positives (TPR, true positive rate: delay intervals in the short tempo that were classified correctly) and false positives (FPR, false positive rate: delay intervals in the long tempo that were classified incorrectly as short tempo). AUC is defined as the area under this curve:

AUC = ∫₀¹ TPR d(FPR)

3. Difference between the means of the distributions (without normalizing):

Δμ = μ_{d1} - μ_{d2}

where μ_{d1} and μ_{d2} are the means of distributions 1 and 2.
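A sketch of all three measures, assuming distribution 1 holds the short-tempo delay intervals and distribution 2 the long-tempo ones:

```python
import numpy as np

def separation_measures(d1, d2):
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    mu1, mu2 = d1.mean(), d2.mean()
    # 1. d': mean difference over the pooled SD
    pooled_sd = np.sqrt((d1.var(ddof=1) + d2.var(ddof=1)) / 2.0)
    d_prime = (mu1 - mu2) / pooled_sd
    # 2. AUC: sweep a classification threshold over all observed values
    thresholds = np.sort(np.concatenate([d1, d2]))
    tpr = np.array([(d1 < th).mean() for th in thresholds])  # short classified short
    fpr = np.array([(d2 < th).mean() for th in thresholds])  # long misclassified short
    auc = np.trapz(tpr, fpr)
    # 3. Unnormalized difference between the means
    return d_prime, auc, mu1 - mu2
```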
Extended computational model of sensorimotor synchronization
To understand whether individuals with autism manifest an impairment in their response to external changes, we used an extension of the computational model of Experiment 1 proposed by ref. 42 , enabling the mean of the timekeeper to vary in each interval, i.e.,

T_k = t_k + n_k    (6)

The mean t_k is expected to dynamically track the changes in tempo. This is implemented by adding the following dynamics:

t_k = t_{k-1} - β e_{k-1}    (7)

where β denotes the period correction rate, which is the proportion of the previous asynchrony corrected in each interval. When the tempo suddenly gets slower (deceleration), this creates a large negative asynchrony, since the participant taps too early, expecting the metronome at the time of the previous tempo. This change requires the internal period estimate to be elongated, and since the asynchrony in this case is negative, β (the period correction parameter) appears with a negative sign. The full model is defined by the coupled equations (Eqs. (2) and (7)), substituting Eq. (6) (see Figs. 3a and 6a):

r_k = t_k + n_k - α e_{k-1} + M_k - M_{k-1}

To combine these into one equation and fit the model, we use the difference between the model equation at time k and at time k-1:

r_k - r_{k-1} = T_k - T_{k-1} - α(e_{k-1} - e_{k-2}) + M_k - 2M_{k-1} + M_{k-2}    (13)

Note that:

T_k - T_{k-1} = t_k - t_{k-1} + n_k - n_{k-1} = -β e_{k-1} + n_k - n_{k-1}    (14)

So from Eqs. (13) and (14) we get:

r_k - r_{k-1} = -(α + β) e_{k-1} + α e_{k-2} + n_k - n_{k-1} + M_k - 2M_{k-1} + M_{k-2}    (15)

As in Experiment 1, the covariance structure can be used to disentangle the noise terms (although the specific structure is different, see Appendix of ref. 28 ). To enhance the model's sensitivity to changes, we fit the model separately for each tempo change segment (from two beats before the change to seven beats following the change, see section Response dynamics to changes in tempo) and average the resulting model estimates within each block. Importantly, the mean asynchrony (needed to adjust the asynchronies relative to the participant's perception) is estimated based on the entire block 28 ; therefore we excluded blocks with >40% missing values, as in Experiment 1. This led us to exclude eight blocks altogether: one block from the 50 ms step-size, four blocks from the 70 ms step-size, and three blocks from the 90 ms step-size. Within the remaining blocks, we excluded segments with missing values, as in section Response dynamics to changes in tempo. Adequate parameter recovery using this fitting method (including the split into tempo change segments) is shown in Supplementary Note 3 and Supplementary Fig. 6.
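To illustrate the period-correction dynamics, the Experiment 1 simulation can be extended as follows; the parameter values and the shape of the tempo-change schedule are again arbitrary.

```python
import numpy as np

def simulate_tempo_change(metronome, alpha=0.4, beta=0.3,
                          sigma_T=10.0, sigma_M=5.0, seed=0):
    # `metronome[k]` is the inter-onset interval between beats k-1 and k
    rng = np.random.default_rng(seed)
    K = len(metronome)
    n = rng.normal(0.0, sigma_T, K)
    M = rng.normal(0.0, sigma_M, K)
    t = np.full(K, float(metronome[0]))  # internal period estimate t_k
    e = np.zeros(K)
    r = np.zeros(K)
    for k in range(1, K):
        t[k] = t[k - 1] - beta * e[k - 1]              # Eq. (7): period correction
        r[k] = t[k] + n[k] - alpha * e[k - 1] + M[k] - M[k - 1]
        e[k] = e[k - 1] + r[k] - metronome[k]          # asynchrony vs. the metronome
    return r, e, t

# Example: 500 ms baseline, decelerating to 570 ms at beat 30
ioi = np.array([500.0] * 30 + [570.0] * 30)
r, e, t = simulate_tempo_change(ioi)
```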
Model comparison
The extended computational model was compared to a model without timekeeper dynamics, that is, a model defined according to Eq. (15) with the period correction (β) set to zero:

r_k - r_{k-1} = -α(e_{k-1} - e_{k-2}) + n_k - n_{k-1} + M_k - 2M_{k-1} + M_{k-2}    (16)

The models were compared for each subject separately, using the likelihood ratio test and AIC.
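For a single subject, the comparison could be computed as follows, given the maximized log-likelihoods of the two fits; the one-degree-of-freedom chi-square reference for the likelihood ratio is the standard choice when a single parameter is constrained.

```python
from scipy import stats

def compare_models(loglik_full, loglik_reduced, df_diff=1):
    # Likelihood ratio statistic for the nested comparison (beta = 0)
    lr_stat = 2.0 * (loglik_full - loglik_reduced)
    p_value = stats.chi2.sf(lr_stat, df_diff)
    # AIC difference: AIC(full) - AIC(reduced); negative favors the full model
    delta_aic = 2.0 * df_diff - lr_stat
    return lr_stat, p_value, delta_aic
```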
Combined measures
For the model-free (Fig. 5) and model-based analyses (Fig. 6), combined measures were calculated by z-scoring each step-size separately and then averaging over the different step-sizes. This was done to account for the different scales of parameters estimated using different step-sizes. Z-scoring was performed using the mean and SD of the neurotypical group. Similarly, the combined update rate (Fig. 7) was formed by z-scoring the phase correction estimate from Experiment 1 (α) and the combined period correction estimate from Experiment 2 (β) and averaging them.
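In code, the combination step might look like this; `values` is a participants-by-step-sizes matrix and `nt_mask` flags neurotypical participants (both names are our own).

```python
import numpy as np

def combine_across_stepsizes(values, nt_mask):
    values = np.asarray(values, dtype=float)
    # Z-score each step-size column with the neurotypical group's statistics
    mu = values[nt_mask].mean(axis=0)
    sd = values[nt_mask].std(axis=0, ddof=1)
    # Average the z-scores over step-sizes to get one combined measure
    return ((values - mu) / sd).mean(axis=1)
```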
AQ50 questionnaire. Nineteen of 30 participants with ASD and 37 of 47 neurotypical participants completed the AQ50 questionnaire 44 . None of the participants with dyslexia were asked to fill in the AQ50. AQ50 questionnaire data were not acquired for all neurotypical and ASD participants because the questionnaire was added only after we began collecting other experimental data. The AQ50 is a self-report questionnaire aimed at evaluating the presence of several traits characteristic of individuals with ASD, in both ASD and neurotypical populations. It was recently shown that some questions in the AQ50 differentially bias neurotypical individuals and individuals with ASD 79 ; therefore we used the three-factor model of the AQ50 proposed by ref. 45 , which is less influenced by these biases. We compared our calculated update rate to the social skills factor and the communication/mindreading factor.
The items in the social skill factor are: 1. I am good at social chit-chat*. 2. I find social situations easy*. 3. I enjoy social occasions*. 4. I enjoy social chit-chat*. 5. I frequently find that I do not know how to keep a conversation going. 6. I enjoy meeting new people*. 7. I find it hard to make new friends. 8. When I was young, I used to enjoy playing games involving pretending with other children*. 9. I find myself drawn more strongly to people than to things*. 10. I enjoy doing things spontaneously*. 11. I find it very easy to play games with children that involve pretending*. 12. I would rather go to a library than to a party.
Notably, a large proportion of items in this factor (4/12) begin with the words "I enjoy", indicating a tendency to enjoy social situations, but not necessarily social skills. Individuals with autism often crave social situations, despite being judged as poor performers in this respect 2 .
The items in the communication/mindreading factor are: 1. People often tell me I keep going on and on about the same thing. 2. When I am reading a story, I find it difficult to work out the characters' intentions. 3. I find it difficult to work out people's intentions. 4. I am often the last to understand the point of a joke. 5. Other people frequently tell me that what I have said is impolite, even though I think it is polite. 6. If there is an interruption, I can switch back to what I was doing very quickly*.

To determine whether we could combine the two groups (neurotypical and autism) to calculate the correlation between update rate and responses on the communication subscale, we performed a bootstrap permutation analysis designed to show that the correlation values in the two groups can be approximated by the correlation in the combined sample, that is, that the correlations in the two groups are not significantly different from the combined correlation, or from each other. To do this, we created surrogate distributions by resampling (with replacement) data from the two participant groups separately, so that we formed one distribution for neurotypical values and another for autism values. We then calculated the Spearman correlations in each group separately and in the combined sample, and calculated the differences between these correlations. This procedure was repeated 1000 times. Finally, we compared the resulting distributions of differences between correlation values to those of the experimental data; in all cases, the difference in the experimental data was well within the null distribution (all p > 0.5).
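A compact version of this resampling procedure, with hypothetical variable names (x = update rate, y = questionnaire factor score, per group):

```python
import numpy as np
from scipy import stats

def bootstrap_corr_differences(x1, y1, x2, y2, n_boot=1000, seed=0):
    x1, y1, x2, y2 = map(np.asarray, (x1, y1, x2, y2))
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        # Resample each group with replacement, separately
        i1 = rng.integers(0, len(x1), len(x1))
        i2 = rng.integers(0, len(x2), len(x2))
        r1 = stats.spearmanr(x1[i1], y1[i1])[0]
        r2 = stats.spearmanr(x2[i2], y2[i2])[0]
        rc = stats.spearmanr(np.concatenate([x1[i1], x2[i2]]),
                             np.concatenate([y1[i1], y2[i2]]))[0]
        # Differences: each group vs. combined, and group vs. group
        diffs.append((r1 - rc, r2 - rc, r1 - r2))
    return np.array(diffs)
```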
Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
All data generated in this study have been deposited in the OSF public repository (https://doi.org/10.17605/OSF.IO/83WNU) 80 . Source data are provided with this paper.
Code availability
The custom code used to analyze the data in this study (including the implementation used for the bGLS algorithm) and create all figures (except Figs. 1a, 3a, and 6a) is publicly available at Zenodo (https://doi.org/10.5281/zenodo.4930034) 81 . Source data are provided with this paper.
Cyclophilin A Protects Peg3 from Hypermethylation and Inactive Histone Modification*
Imprinted genes are expressed from only one of the parental alleles and are marked epigenetically by DNA methylation and histone modifications. Disruption of normal imprinting leads to abnormal embryogenesis, certain inherited diseases, and is associated with various cancers. In the context of screening for the gene(s) responsible for the alteration of phenotype in cyclophilin A knockdown (CypA-KD) P19 cells, we observed a silent paternally expressed gene, Peg3. Treatment of CypA-KD P19 cells with the DNA demethylating agent 5-aza-dC reversed the silencing of Peg3 biallelically. Genomic bisulfite sequencing and methylation-specific PCR revealed DNA hypermethylation in CypA-KD P19 cells, as the normally unmethylated paternal allele acquired methylation that resulted in biallelic methylation of Peg3. Chromatin immunoprecipitation assays indicated a loss of acetylation and a gain of lysine 9 trimethylation in histone 3, as well as enhanced DNA methyltransferase 1 and MBD2 binding on the cytosine-guanine dinucleotide (CpG) islands of Peg3. Our results indicate that DNA hypermethylation on the paternal allele and allele-specific acquisition of histone methylation leads to silencing of Peg3 in CypA-KD P19 cells. This study is the first demonstration of the epigenetic function of CypA in protecting the paternal allele of Peg3 from DNA methylation and inactive histone modifications.
DNA methylation regulates a number of biological processes, including genomic imprinting, X chromosome inactivation, silencing of tumor suppressor genes, and repression of retroviral elements (1,2). Genomic imprinting relies on establishing and maintaining the parent-specific methylation of DNA elements that control the differential expression of maternal and paternal alleles (3,4). Although the essential DNA methyltransferases and methyl-CpG-binding proteins have been discovered (5), proteins that regulate the establishment and maintenance of allele-specific methylation of DNA have not been identified. Nevertheless, data for an active role of DNA methylation in gene silencing are both correlative and functional. In addition, DNA methylation may occur in conjunction with histone modification to play a critical role in biallelic silencing through chromatin remodeling (6).
A number of human inherited diseases linked to faulty methylation pathways and exhibiting abnormal development include Rett syndrome; immunodeficiency, centromeric heterochromatin instability, and facial anomalies (ICF) syndrome; and X-linked α-thalassemia/mental retardation syndrome (7-9). Moreover, aberrant methylation patterns are thought to be involved in tumorigenesis (10-13), causing genomic instability, abnormal imprinting, and deregulated expression of oncogenes or tumor suppressor genes.
The Peg3 gene is one of several genes identified in an imprinted region mapped to human chromosome 19q13.4 (14). The mouse homolog of Peg3 was the first imprinted gene identified from the proximal region of mouse chromosome 7 (15). Its high conservation between mice and humans suggests that it possesses critical cellular functions. Peg3 appears to be ubiquitous, but the highest mRNA levels are found in placenta, uterus, ovary, brain, and testis (16). In mice, targeted disruption of the paternally inherited copy of Peg3 eliminates Peg3 expression. Peg3-negative heterozygous mice suffer growth impairment. Females display compromised nurturing behavior, resulting in a high death rate in their offspring (17). In humans, Peg3 biallelic silencing has been observed in endometrial and cervical cancer cell lines and in a number of ovarian cancer and glioma cell lines (10,18).
Cyclophilin A (CypA), a member of the immunophilin family of proteins, mediates inhibition of calcineurin by the immunosuppressive drug cyclosporine A (CsA), but the other cellular functions of CypA have remained elusive. Recently, different aspects of biological functions of CypA have emerged, suggesting that CypA is involved in multiple signaling events of eukaryotic cells. It might either act as a catalyst for prolyl bond isomerization or form stoichiometric complexes with target proteins. CypA possesses enzymatic peptidylprolyl isomerase activity, which is essential to protein folding in vivo. It promotes proper subcellular localization of Zpr1p, regulates interleukin-2 tyrosine kinase activity, and is required for retinoic acid-induced neuronal differentiation in P19 embryonal carcinoma (EC) cells (19-21).
It has been demonstrated that CypA specifically interacts with SIN3-Rpd3 histone deacetylase (HDAC) in vitro, suggesting that CypA affects gene expression by physically interacting with HDAC (22). In screening for the gene(s) responsible for the alteration of phenotype in CypA-KD P19 cells (21), we observed a silent paternally expressed gene, Peg3. Subsequently, we found an inverse relationship between mRNA expression and DNA hypermethylation, as well as Peg3 reactivation in CypA-KD P19 cells by demethylation reagents and an HDAC inhibitor, suggesting that epigenetic mechanisms play an important role in regulating Peg3 expression in CypA-KD P19 cells. Chromatin immunoprecipitation (ChIP) assays indicated a loss of acetylation and a gain of lysine 9 trimethylation in histone 3, as well as enhanced DNA methyltransferase 1 (Dnmt1) and MBD2 binding on the CpG islands of Peg3 in CypA-KD P19 cells. Our data demonstrate the epigenetic function of CypA, which protects the paternal allele of Peg3 from DNA methylation and inactive histone modifications.
MATERIALS AND METHODS
RNA Isolation, cDNA Synthesis, and Quantitative Real-time (QRT)-PCR-Total RNA samples were isolated from 3 × 10^6 cells using the RNeasy Mini Kit (Qiagen) with on-column DNase digestion according to the manufacturer's protocols. Oligo(dT)-primed cDNA was synthesized from 3 µg of RNA using SuperScript (Invitrogen). The 20-µl reverse transcription products were diluted to 40 µl, and 2 µl were used for each PCR reaction. PCR reactions were performed in a total volume of 25 µl containing 2 µM primers and 12.5 µl of Power SYBR Green PCR master mix (Applied Biosystems). The primers used in real-time PCR were cPeg3-F (5′-GCCTAAACCAACCCATAATGTC-3′) and cPeg3-R (5′-CTGAAAGAGTCCCTGCGTTC-3′). As an input control, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was amplified using the following primers: GAPDH-F (5′-CAGTGGCAAAGTGGAGATTG-3′) and GAPDH-R (5′-AATTTGCCGTGAGTGGAGTC-3′). QRT-PCR was performed under the following conditions: 95°C for 10 min for the initial denaturing followed by 40 cycles of denaturing at 95°C for 20 s, annealing at 60°C for 30 s, and extension at 72°C for 30 s. The data were analyzed using the function 2^(-ΔΔCT), where ΔΔCT = (CT,Target - CT,GAPDH)_sample - (CT,Target - CT,GAPDH)_calibrator. In our experiments, GAPDH was used as an internal control to normalize PCR for the amount of RNA added to the reverse transcription reactions. We arbitrarily used wild-type (WT) P19 cells as the calibrator and KD (CypA-knockdown P19) cells as the sample to indicate the relative difference. Primer and template designs followed the same criteria for each target, and primer and Mg²⁺ concentrations had been optimized to give an efficiency near one for each target, per the assumption underlying the 2^(-ΔΔCT) method.
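As a worked example of the 2^(-ΔΔCT) calculation (the CT values below are hypothetical, chosen only to illustrate the arithmetic):

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_calibrator, ct_ref_calibrator):
    # dCT = CT(target) - CT(reference gene), computed per sample
    ddct = ((ct_target_sample - ct_ref_sample)
            - (ct_target_calibrator - ct_ref_calibrator))
    return 2.0 ** (-ddct)

# Hypothetical CTs: Peg3 vs. GAPDH in KD cells (sample) and WT cells (calibrator)
print(fold_change_ddct(30.1, 18.0, 26.0, 18.2))  # ~0.05, i.e., strong repression
```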
Genomic Bisulfite Sequencing and Methylation-specific PCR-Genomic DNA was isolated using the DNeasy Tissue Kit (Qiagen) according to the manufacturer's instructions. Two µg of DNA were digested with EcoRI, extracted with phenol-chloroform, and then subjected to sodium bisulfite conversion using the EZ DNA Methylation Kit (ZYMO Research). The converted DNA was diluted to 20 µl, and 4 µl were used for each PCR reaction. To amplify all of the CpG islands for bisulfite sequencing, regardless of methylation status, unbiased primers Peg3 S-F (forward, 5′-GTAGTTTGATTGGTAGGGTG-3′) and Peg3 S-R (reverse, 5′-CAATCTACAACCTTATCAATTAC-3′) were used to perform PCR under the following conditions: 95°C for 10 min for the initial denaturing followed by 40 cycles of denaturing at 95°C for 20 s, annealing at 60°C for 30 s, and extension at 72°C for 30 s. To monitor the efficiency of bisulfite treatment, the PCR products were subcloned into the TA cloning vector, and 15 different clones were sequenced individually. If >95% of cytosines were converted into thymidine, we selected those DNA samples for bisulfite sequencing analyses. Methylation-specific PCR was performed using primers Peg3 M-F (5′-AGACGTTGGGGAGTTAGGAGTCGC-3′) and Peg3 M-R (5′-TATAATCTACCGCCCCTAACCCGCG-3′) for methylated DNA and primers Peg3 U-F (5′-AGATGTTGGGGAGTTAGGAGTTGT-3′) and Peg3 U-R (5′-TATAATCTACCACCCCTAACCCACA-3′) for unmethylated DNA. PCR conditions were 95°C for 3 min for the initial denaturing followed by 35 cycles of denaturing at 95°C for 30 s, annealing at 60°C for 1 min, and extension at 72°C for 1 min. To directly observe bands representing methylated and unmethylated DNA, PCR products were resolved on a 2% agarose gel and visualized by ethidium bromide staining.
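The >95% conversion check can be expressed as a simple count over non-CpG cytosines; the sequences below are hypothetical aligned fragments, and real data would first require aligning each clone to the genomic reference.

```python
def conversion_efficiency(reference, clone):
    # Count non-CpG cytosines in the reference that read as T in the clone;
    # unconverted non-CpG cytosines indicate incomplete bisulfite treatment.
    total = converted = 0
    for i in range(len(reference) - 1):
        if reference[i] == "C" and reference[i + 1] != "G":
            total += 1
            if clone[i] == "T":
                converted += 1
    return converted / total if total else float("nan")

# Hypothetical fragment: both non-CpG cytosines converted -> efficiency 1.0
print(conversion_efficiency("ACGTCCATCG", "ATGTTTATCG"))
```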
ChIP Analysis-ChIP assays were carried out using a kit from Upstate according to the manufacturer's instructions. For Dnmt1 ChIP, cells were treated for 2 h with 5 µM 5-aza-dC to arrest the fleeting covalent association of methyltransferase with the DNA substrate. Briefly, 1 × 10^7 cells were used per ChIP assay. After 10 min of 1% formaldehyde treatment, the cells were harvested and sonicated for 3 × 20 s using a Tekmar sonic disrupter set to 30% of maximum power to produce soluble chromatin with average sizes between 300 and 1000 bp. The chromatin samples were then diluted 8-fold in the dilution buffer and precleared for 1 h using 75 µl of salmon sperm DNA/protein A- or G-agarose beads. Ten µg of antibodies were then added to each sample and incubated overnight at 4°C. To collect the immunocomplex, 60 µl of salmon sperm DNA/protein A- or G-agarose beads were added to the samples for 1 h at 4°C. The beads were washed once in each of the following buffers, in order: low salt, high salt, and LiCl immune complex wash buffer; they were then washed twice in TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH 8.0). The bound protein-DNA immunocomplexes were eluted twice with 250 µl of elution buffer and subjected to reverse cross-linking at 65°C for 6 h. The reverse cross-linked chromatin DNA was further purified by proteinase K digestion and phenol-chloroform extraction. DNA was then precipitated in ethanol and dissolved in 20 µl of TE buffer. Two microliters of DNA were used for each QRT-PCR with primers gPeg3-F (5′-ACCCTGACAAGGAGGTGTCCC-3′) and gPeg3-R (5′-GTCTAGTGCACCCACACTGAAC-3′). For a positive control, RNA polymerase II antibody was used to immunoprecipitate an actively expressed promoter, and the mouse GAPDH promoter was amplified using primers mGAPDHpF (5′-TACTCGCGGCTTTACGGG-3′) and mGAPDHpR (5′-TGGAACAGGGAGGAGCAGAGAGCA-3′). QRT-PCR was performed under the following conditions: 95°C for 10 min for the initial denaturing followed by 40 cycles of denaturing at 95°C for 20 s, annealing at 60°C for 30 s, and extension at 72°C for 30 s. The data were analyzed using the function 2^(-ΔΔCT), where ΔΔCT = (CT,IP - CT,input)_sample - (CT,IP - CT,input)_calibrator. In our experiments, input was used as an internal control to normalize PCR for the amount of chromatin DNA added to the ChIP assay. We arbitrarily used WT P19 cells as the calibrator, whereas KD (CypA-knockdown P19) cells were compared with WT to generate the relative difference.
RESULTS
Cyclophilin A Knockdown (CypA-KD) Selectively Affects the Expression of Imprinted Genes-To identify genes affected by CypA-KD in P19 EC cells, we performed subtraction-differential screening of a subtractive library and found Peg3 repressed in CypA-KD P19 EC cells (S1-7). Peg3 expression was also undetectable in the other stable CypA-KD clone, S3-2, suggesting that suppression of Peg3 resulted from the knockdown of CypA (Fig. 1A).
To determine whether CypA-KD also affects the expression of other imprinted genes, we performed semiquantitative reverse transcription-PCR as well as QRT-PCR to detect Usp29, Peg1/Mest, Igf2, H19, and Igf2r, using specific primers (Fig. 1). We observed that Usp29 expression was also undetectable, suggesting that Usp29 shares the same imprinting control region with Peg3 and that silencing of the imprinted transcript Usp29 is coordinated with that of Peg3. H19, which is located in a different locus, was also undetectable, suggesting that cells lacking CypA switched H19 from expressed to silenced. QRT-PCR analysis clearly demonstrated a 3-4-fold increase in Igf2 and reduced Igf2r mRNA, but no effect on Peg1/Mest expression, in the S1-7 and S3-2 cell lines (Fig. 1B). Collectively, these findings indicate that CypA-KD can selectively affect the expression of certain imprinted genes.
Peg3 Expression Is CypA-dependent-Results from the stable CypA-KD clones S1-7 and S3-2 clearly demonstrated that knockdown of CypA resulted in suppression of Peg3. To rule out the possibility that the observed changes resulted from clonal selection, we performed transient transfection of pshRNA-CypA into WT P19 EC and F9 EC cells (Fig. 2A). Nonspecific short hairpin RNA (shRNA) and an empty vector were used as negative controls. Our results demonstrated that silencing of Peg3 accompanied transient CypA-KD in both P19 EC and F9 EC cells, confirming our hypothesis that CypA-KD alone causes Peg3 silencing and suggesting further that the mechanism may be a general phenomenon in EC cells.
To determine whether CypA knockout (KO) also results in repression of Peg3 expression, we measured Peg3 expression in CypA-KO Jurkat cells (a generous gift from Dr. Jeremy Luban) using semiquantitative reverse transcription-PCR. As shown in Fig. 2A, lane 9, we detected low levels of Peg3 as compared with those in wild-type Jurkat cells. Together, these data clearly demonstrate that Peg3 expression is CypA-dependent.

FIGURE 1. CypA-KD selectively affects the expression of imprinted genes. A, semiquantitative reverse transcription-PCR of Peg3 and other imprinted genes in WT P19 and CypA-KD P19 cells. PCR products were subjected to 2% agarose gel electrophoresis. GAPDH was used as an internal control. Shown are WT P19 cells and the S1-7 and S3-2 CypA-KD stable cell lines. B, QRT-PCR analysis of the expression of imprinted genes. GAPDH was used as an internal control to normalize PCR for the amount of RNA added to the reverse transcription reactions. Expression of imprinted genes from WT cells was set as 1, whereas those from KD cells were relative to that of WT, as indicated on the top of each bar graph.
CypA Isomerase Activity Is Required to Maintain Expression of Peg3-To determine whether CypA isomerase or FK506-binding protein (FKBP) isomerase activity is required to maintain expression of Peg3, WT P19 cells were treated with 1 µg/ml CsA, FK506 (100 ng/ml), or rapamycin (100 ng/ml), which inhibit the isomerase activity of CypA and FKBPs, respectively. After 72 h of CsA, FK506, or rapamycin treatment, the cells were harvested to prepare total RNAs. Two µg of total RNA were used to perform RT-PCR to detect Peg3 using a pair of specific primers. Only CsA-treated WT P19 cells had undetectable Peg3 (Fig. 2B), indicating that the isomerase enzymatic activity of CypA is necessary for Peg3 expression.
Silencing of Peg3 Is Reversed by Treating CypA-KD Cells with a DNA Methyltransferase Inhibitor-To determine whether DNA methylation is involved in silencing of Peg3 in CypA-KD P19 EC cells, we treated the S1-7 and S3-2 cell lines for 5 days with 1.0 µM 5-aza-dC, a DNA methyltransferase inhibitor, followed by amplification of the Peg3 129-bp fragment using semiquantitative reverse transcription-PCR and QRT-PCR. Fig. 3 shows that, upon 5-aza-dC treatment, the silent Peg3 gene in S1-7 and S3-2 was reactivated. These results demonstrate an inverse relationship between DNA methylation and Peg3 expression and support the hypothesis that Peg3 transcription is regulated by promoter methylation.
CypA-KD Resulted in Biallelic Methylation of CpG Islands Encoded within the Promoter, First Exon, and First Intron of the Peg3 Gene-To directly show methylation of the Peg3 gene, we used bisulfite modification and DNA sequencing to analyze the methylation status of 445-bp CpG islands encoded within the promoter, first exon, and first intron of this gene (Fig. 4). Analysis of 15 individual clones revealed that, in CypA-KD P19 cells, 26 CpG dinucleotides were 98% methylated, indicating biallelic methylation, whereas in WT P19 cells, the methylated CpG islands were at 42.3% frequency, which is consistent with monoallelic methylation of the imprinted gene (Fig. 5A). We next performed methylation-specific PCR on sodium bisulfite-modified genomic DNA. Two pairs of primers (U and M) were used for annealing to unmethylated and methylated DNA, respectively. Primers were designed within the Peg3 CpG islands containing frequent cytosine to distinguish methylated from unmethylated DNA. A biallelic methylation pattern was observed in S1-7 and S3-2, whereas a monoallelic methylation represented by both unmethylated and methylated DNA bands was observed in WT P19 cells (Fig. 5B), correlating with the monoallelic methylation of the imprinted gene.
Dnmt1 Is Responsible for Methylation of the Unmethylated Allele of Peg3 in the CypA-KD Cells-To determine whether Dnmt1 is responsible for the methylation of selected DNA targets, an in vivo complex of methylation analysis was used as described previously to detect and quantify the physical interaction of Dnmt1 with substrate genomic DNA in a physiological setting in chromatin (23). QRT-PCR analysis of ChIP products generated by immunoprecipitation with antibody against Dnmt1 (Abcam) in the WT P19 and CypA-KD P19 cell lines revealed that the Dnmt1-bound DNA fraction in CypA-KD P19 cells was ~2.07-fold higher than that of wild-type cells (Fig. 6). In contrast, RNA polymerase II-bound GAPDH promoter as a positive control showed no significant difference between the two cell lines (Fig. 6). These results further substantiate the hypothesis that CypA-KD mediates biallelic methylation of the Peg3 promoter and the first exon.
Partial Relief of the Repressed Peg3 in CypA-KD Cells by Treatment of Cells with the HDAC Inhibitor-It has been well established that methyl-CpG-binding proteins silence transcription by recruiting the HDAC-repressive machinery, which removes acetyl groups from histones, resulting in gene silencing (24,25). To determine whether histone deacetylation is involved in silencing Peg3 in CypA-KD P19 EC cells, given that HDAC binds to CypA (Fig. 7A), the S1-7 and S3-2 cell lines were treated for 3 days with 10 or 20 ng/ml trichostatin A, a histone deacetylase inhibitor, followed by amplification of the Peg3 129-bp fragment using semiquantitative reverse transcription-PCR. Our data demonstrated that trichostatin A only partially relieves CypA-KD-mediated Peg3 repression (Fig. 7B). This partial relief indicates that additional mechanisms of repression by methyl-CpG repressor complexes might exist in addition to the recruitment of histone deacetylation. We therefore examined various histone modifications in the Peg3 promoter of WT P19 and CypA-KD P19 cells by ChIP assays, using antibodies against modified histones (acetyl Lys-H3 and trimethyl Lys-9-H3) followed by QRT-PCR analysis.
Reciprocal Pattern of Acetyl Lys-H3 and Trimethyl Lys-9-H3 Was Enriched in the CypA-KD Cells-Histone acetylation was observed in both WT P19 and CypA-KD P19 cells, but with a 33% weaker signal in CypA-KD P19 cells (Fig. 8A). Our results suggest that less-acetylated histone binds to the Peg3 promoter in the CypA-KD P19 cell line S1-7, which correlates with repressed Peg3 expression in this cell line. The level of trimethyl Lys-9-H3 in the CypA-KD cells was ~2.5-fold greater than that in WT P19 cells, which was set as 50/50 for the two parental alleles (Fig. 8A), indicating a gain of histone methylation on the Peg3 promoter. A positive control, GAPDH promoter bound by RNA polymerase II, showed no significant difference between the WT and CypA-KD cell lines (Fig. 8A). In a pattern reciprocal to that of acetyl Lys-H3, trimethyl Lys-9-H3 was enriched exclusively in the CypA-KD P19 cell line. This predominant enrichment of trimethyl Lys-9-H3 correlated with the inverse relationship between paternally expressed Peg3 in WT P19 cells and biallelic methylation and silencing in CypA-KD P19 cells. We conclude that silencing of Peg3 in the CypA-KD P19 cell line correlates with a gain of trimethyl Lys-9-H3 on the promoter region of the paternal allele.
MBD2 Is Involved in the Silencing of Peg3 Expression-Our data demonstrate that P19 EC cells have abundant MBD2, which also binds to CypA (Fig. 8B). It has been demonstrated that MBD2 is associated with HDACs in the MeCP1 repressor complex (26). To determine whether silencing of the hypermethylated Peg3 gene is consistent with a model involving methyl-CpG-binding proteins, ChIP analysis was used to study the occupancy of the methylated Peg3 promoter by MBD2 in the CypA-KD P19 cell line as compared with WT P19 EC cells. QRT-PCR analysis of ChIP products generated by immunoprecipitation with an antibody against MBD2 in the WT P19 and CypA-KD P19 cell lines revealed that the MBD2-bound DNA fraction in the CypA-KD P19 cell lines was ~1.8-fold higher than that in WT P19 cells (Fig. 8B), suggesting that MBD2 is involved in the silencing of Peg3 expression.

FIGURE 6. Input and ChIP products were analyzed by semiquantitative PCR. Antibody against RNA polymerase II and normal mouse IgG-immunoprecipitated DNA were amplified with primers of the GAPDH promoter to serve as positive and negative controls, respectively. The relative amount of Dnmt1-bound Peg3 promoter and exonic CpG islands was measured by QRT-PCR.
DISCUSSION
Using the revolutionary RNA interference (RNAi) technique, we have been able to analyze loss-of-function phenotypes for the first time to define the function of CypA, which is required to maintain the differential methylation of the CpG islands and histone modification in the promoter and its extended exonic region of the Peg3 gene. Although off-target effects have been documented during RNAi experiments and integration of an RNAi vector, this is not the case in our presented data. Based on a BLAST sequence database search, we did not find any other sequence identical to our designed targeting site, as described previously (21). In addition, using nonspecific shRNA sequences (Fig. 2A, NS1 and NS2) and an empty vector as negative controls did not result in the silencing of Peg3, suggesting that our observations reflect CypA-KD-mediated effects.
Double-stranded RNA derived from processing of RNAi can also produce transcriptional gene silencing in Arabidopsis, Schizosaccharomyces pombe, and mammalian cells (27-29). Transcriptional gene silencing mediated by double-stranded RNAs was shown to be due to RNA-dependent DNA methylation. RNA-dependent DNA methylation requires a double-stranded RNA to target DNA and is subsequently processed to yield short RNAs. These short double-stranded RNAs happened to include sequences identical to genomic promoter regions and in turn proved capable of inducing methylation of the homologous promoter and subsequent transcriptional gene silencing. Once again, we conducted a sequence BLAST search, and there were no sequences identified in the promoter region and first exon of the Peg3 gene identical to S1, which was used to target CypA (21).
In this report, results from the stable CypA-KD clones S1-7 and S3-2 clearly demonstrated that suppression of Peg3 resulted from the lack of CypA. Silencing of Peg3 also accompanied transient knockdown of CypA in both P19 and F9 embryonal carcinoma cells, indicating that the observed Peg3 silencing does not result from clonal selection. Furthermore, treatment of P19 cells with CsA, an isomerase inhibitor, resulted in silencing of Peg3, suggesting that the isomerase enzymatic activity of CypA is necessary for Peg3 expression.
In contrast, treatment of P19 cells with FK506 or rapamycin, FKBP isomerase-blocking agents, did not affect Peg3 expression. These results further substantiate the notion that the isomerase enzymatic activity of CypA (and not that of FKBPs or calcineurin phosphatase activity) is required for Peg3 expression. In addition, data obtained from CypA-KO Jurkat cells clearly demonstrated that Peg3 expression is CypA-dependent (Fig. 2A). An attempt to rescue the silent Peg3 with a CypA cDNA expression plasmid that is not targetable by the RNAi-CypA failed (data not shown). This failure to rescue implies that the irreversible covalent modification of DNA methylation had already been established.
The inverse relationship between mRNA expression and DNA hypermethylation, as well as our findings of Peg3 reactivation by demethylation agents, suggests that this epigenetic mechanism plays an important role in Peg3 regulation in CypA-KD P19 cells. Epigenetic switches consist of both DNA methylation and histone methylation (6). Bisulfite genomic sequencing and MSP analysis clearly demonstrated biallelic methylation of CpG islands within the promoter region, first exon, and first intron of the Peg3 gene in CypA-KD P19 cells, whereas methylation of this region in the WT P19 cells remains monoallelic (Fig. 5). CpG methylation at any critical site may increase the likelihood of binding of methylcytosine-binding proteins, which can recruit HDACs and H3-Lys-9 methyltransferase to mediate inactive chromatin remodeling. We hypothesized that MBD2 associates with methylated DNA within the promoter region of the repressed Peg3 gene. ChIP analysis with MBD2 antibody showed that more MBD2 was associated with the promoter region of Peg3 in the CypA-KD P19 cells (Fig. 8B). Additionally, a reciprocal pattern of acetyl Lys-H3 and trimethyl Lys-9-H3 was enriched in the CypA-KD cells compared with WT P19 cells, as indicated by ChIP assays (Fig. 8A). Trimethylation of histone 3 on Lys-9 provides a histone code indicative of inactive chromatin structure. Therefore, DNA methylation is both a cause and a result of heterochromatinization. Methylation patterns depend upon the activity of DNA methyltransferases. A Dnmt activity assay using a synthetic template, poly(dI-dC)-poly(dI-dC), showed no increase in global Dnmt activities (data not shown), whereas the Dnmt1-bound Peg3 promoter was ~2.07-fold higher in the CypA-KD cells than in WT P19 cells (Fig. 6). Collectively, our data indicate that selective hypermethylation of DNA might have occurred in the CypA-KD cells and that Dnmt1 is involved in at least the maintenance of this hypermethylation. CypA may be necessary to retain a modulator, which is required for maintenance of imprinted gene expression, in an inactive cytoplasmic form, as reported previously for the function of Hsp90 in chromatin remodeling (30,31). The precise temporal and spatial control of imprinted gene expression may be altered when cells lack CypA.

FIGURE 8. Reciprocal association of acetylated H3 and trimethylated Lys-9-H3 in WT P19 versus CypA-KD cells. A, relative amounts (immunoprecipitation (IP)/input) of acetylated H3 or trimethylated Lys-9-H3 bound to the Peg3 promoter and exonic CpG islands are shown, with error bars indicating variation in triplicate experiments. Normal rabbit IgG bound to the Peg3 promoter was used as a negative control. RNA polymerase II antibody-immunoprecipitated DNA was amplified with primers of the GAPDH promoter to serve as a positive control. B, CypA physically interacts with MBD2 in vitro, and quantitative analysis of MBD2-bound Peg3 promoter and first-exon CpG islands in the WT P19 and CypA-KD cell line S1-7. Glutathione S-transferase (GST)-CypA pulldown of P19 whole-cell extracts was performed, and the resultant complex was detected with anti-MBD2 antibody (Upstate) using Western blot analysis; 10% of the input was used as a positive control. Input and ChIP products were analyzed by semiquantitative PCR, and the relative amounts of MBD2-bound Peg3 promoter and exonic CpG islands were measured by QRT-PCR.
It has been reported that DNA methylation patterns are remarkably stable and change little with in vitro culture of cancer cell lines (32). In this study, we have demonstrated a simple epigenetic switch for Peg3 by knocking down CypA. The precise mechanism underlying this switch remains to be elucidated. It is possible that the lack of isomerase activity of CypA leads to a global redistribution of factors required for epigenetic modifications. Inactivation of Peg3 by hypermethylation likely confers a survival advantage, as Peg3 regulates the translocation of the proapoptotic Bax from the cytoplasm to the mitochondria (33). CypA-KD P19 cells are indeed less sensitive to retinoic acid plus BMP4-induced apoptosis as compared with wild-type cells. Peg3 hypermethylation has been reported in gliomas, and re-expression of a Peg3 cDNA in glioma cell lines resulted in a loss of tumorigenicity in nude mice, suggesting that this gene product functions as a tumor suppressor (34). Taken together, this epigenetic alteration would seem to provide CypA-KD P19 cells with cell survival advantages compared with wild-type cells. This statement is also supported by our previous data indicating that CypA-KD cells have a faster growth rate than wild-type P19 cells (21).
Virtual Training Effectiveness among Disabled People: A Research Framework
Employees are the most important asset of any organization, regardless of their capabilities. Even though the number of disabled people currently employed in the Malaysian labour workforce is still considered small, their right to training and development opportunities cannot be ignored. Due to globalization, a boundaryless world, and the current outbreak of coronavirus disease (COVID-19), the need for virtual training and development has increased gradually. However, due to the limited number of reported training programs conducted, knowledge related to their effectiveness is also inadequate. This review of past studies from 2016-2020 was conducted to fill in the gap by proposing a framework of factors influencing training effectiveness among disabled people. Google Scholar was used as a general searching platform to direct researchers to scholarly academic electronic databases such as Emerald, Springer Link, and Wiley Online Library. Based on the Training Engagement Theory developed by Sitzmann and Weinhardt (2015), this study proposes a research framework for further investigation. Implications for future research are provided at the end of this article.
Introduction
Employees are the most important asset of any organization. Regardless of their disabilities, they can contribute to the performance of organizations (Luu et al., 2020), particularly when provided with suitable and ample training and development opportunities. Training, in general, is significant for physical, social, intellectual, and productivity improvement (Grossman & Salas, 2011; Ganesh & Indradevi, 2015). Moreover, systematic training courses are needed to ensure all employees are able to accomplish their job requirements (Armstrong & Taylor, 2020; Singh, 2016).
Employees' abilities and skills need to be maintained and improved to prepare the workforce for a competitive and challenging business environment (Arguinis & Kraiger, 2009).
Due to globalization, a boundaryless world, and the current outbreak of Corona Virus (COVID-19), training and development deliveries have also changed accordingly, shifting from traditional face-to-face training to virtual training that can be accessed anywhere given the availability of a gadget and the internet. This shift benefits disabled employees by giving them continuous training and development opportunities. The advantages of virtual training programs for disabled employees include that they are not bound to any physical location and that the training content can be reviewed or replayed as many times as the trainee wants (Pleasant, Molinari, Dobbs, Meng, & Hyer, 2020; Kraiger, 2014). A well-developed system that meets the needs of different types of disabled employees could increase their loyalty toward organizations.
There is a growing demand for virtual training programs among disabled employees (e.g., Lindsay, Cagliostro, Leck, Shen, & Stinson, 2019; Kim, 2015). Nevertheless, reported studies on virtual training effectiveness are inadequate (Bertram, Moskaliuk, & Cress, 2015). This review of past studies was conducted to fill in the gap by proposing a framework of factors influencing virtual training effectiveness among disabled people. This paper is arranged in the following sequence: review of literature, methodology, findings and discussion, and implications and recommendations.
Literature Review
Defining Virtual Training

According to Training Industry (2020), virtual training refers to training done in a virtual or simulated environment, or when the learner and the instructor are in separate locations. It involves a wide range of desktop applications that run on standard computers. Moskaliuk, Bertram, and Cress (2013) added that virtual training allows a three-dimensional representation and two-way communication between trainer and trainees. It provides online training materials and repositories, course interactions, communications, and presentation through technology (Johnson, Hornik, & Salas, 2008; Allen & Seaman, 2013).
Virtual training refers to a broad set of applications and processes such as web-based learning, computer-based training, and digital collaboration (Bondarouk & Ruël, 2010; Vernadakis, Antoniou, Giannousi, Zetou, & Kioumourtzoglou, 2011), or training delivered through computer technology, either via intranet or internet. It can be implemented anywhere at any time (Welsh, Wanberg, Brown, & Simmering, 2003; Zhang & Nunamaker, 2003). It can also take the form of virtual reality (VR), a type of computer-generated simulation that enables a trainee to interact within an artificial three-dimensional environment using electronic devices (i.e., goggles with a screen or gloves fitted with sensors) (Mitchell, 2020). Through this simulated artificial environment, the trainee can have a realistic-feeling experience. In healthcare settings, VR technology has been used widely for visualization in diagnosis and rehabilitation among physicians (Kuhlen & Dohle, 1995). This study, however, adapted the definition of e-learning given by Tavangarian, Leypold, Nölting, Röser, and Voigt (2004) to the context of the study. Tavangarian et al. (2004, p. 274) state that e-learning refers to "all forms of electronically supported learning and teaching, which are procedural in character and aim to affect the construction of knowledge concerning individual experience, practice and knowledge of the learner."
Virtual Training Effectiveness among Disabled People
In general, training effectiveness focuses on the overall learning system and highlights the macro outcomes of a training program (Alvarez, Salas, & Garofano, 2004). Sitzmann and Weinhardt (2015, p. 2) define training effectiveness as "the extent to which training produced the intended results." Every training program is conducted to improve participants' or employees' knowledge, behaviour, skills, and attitude. Effectiveness depends mostly on individual characteristics, the training itself (e.g., training content), and the organization (Alvarez et al., 2004). Training effectiveness does not rely on the number of training programs conducted, but rather on how much productivity improvement is recorded. Therefore, a training program is said to be useful if the knowledge, behaviour, skills, and attitude learned during the training session were successfully transferred to the workplace. Virtual training conducted among people with different types of disabilities requires more flexibility and attention compared to traditional face-to-face training.
Training Engagement Theory
This study of factors influencing virtual training effectiveness among disabled people is supported by the Training Engagement Theory of Sitzmann and Weinhardt (2015). The theory proposes multilevel antecedents of training effectiveness. "Multilevel" in this context refers to the micro level (i.e., an individual employee (within-person) working in a group (between-person) within the organization's environment) and the macro level (the processes involved in completing a job task). This contrasts with existing theories, which mostly focus on static evaluation and ignore the fact that various interconnected processes result in training effectiveness. The theory therefore goes beyond the Kirkpatrick (1959) training evaluation framework (reaction, learning, behaviour, and results) (Sitzmann & Weinhardt, 2015, 2019). Training success depends on the interconnection of all processes at different levels in an organizational hierarchy. For example, if training is clearly connected to an organization's high-level goals, the trainee is likely to have higher intentions to transfer training. The theory asserts that all processes that occur before (i.e., inclusivity in the organizational decision-making process), during (i.e., specific needs of the virtual training platform), and after (i.e., combination of virtual training with coaching-on-the-job) the completion of training should be considered antecedents of training effectiveness. All these variables can interact with other categories of variables, including trainee characteristics (i.e., level of mindfulness), that may significantly impact training effectiveness (Roehling & Huang, 2018).
Methodology
This study is based on extensive reviews of past studies on factors influencing virtual training effectiveness among disabled people, covering international literature available through online databases. Google Scholar was used as a general searching platform to direct researchers to scholarly academic electronic databases such as Emerald, Springer Link, and Wiley Online Library. Aside from online resources, relevant textbooks related to training effectiveness among disabled people were also utilized to further support the findings of the past literature. To ensure the information is recent and relevant, the literature gathered (i.e., journals, conference proceedings, books, reports, and websites) was limited to English-language publications from 2016 to 2020. The literature from electronic databases was filtered according to keywords such as "virtual training effectiveness", "online training effectiveness", "virtual training for disabled people", and "online training for disabled people".
Findings and Discussion
This section presents the findings of the study related to factors influencing virtual training effectiveness among disabled people.
Factors Influencing Virtual Training Effectiveness among Disabled People

Specific Needs of Virtual Training Platform
The type of disability influences the best tools and methods to use in virtual training sessions. Virtual training attracts the attention of organizations and training providers due to its potential to solve two key problems: optimizing human resource potential and addressing trainees' specific needs (Ford, Piccolo, & Ford, 2017). A study conducted by Batanero, de-Marcos, Holvikivi, Hilera, and Otón (2019) found that adapting the Moodle learning platform, by adding non-auditory and non-visual content for deaf or blind students and having students upload reusable profiles/metadata describing their specific accessibility needs so they could be connected to suitably adjusted content, resulted in significant learning improvement across all groups (blind, 45%; deaf, 46.25%; deaf-blind, 87.5%). Specific needs in this context refers to the development and design of a virtual platform that fulfils the actual needs of the targeted group (i.e., deaf, blind).
Studies on visually impaired trainees, as well as individuals with physical, sensory, and intellectual disabilities, have concluded that a specifically designed virtual training platform enables more access and promises a higher tendency of transfer to an actual setting (Flamaropol et al., 2018a; Tariq, Rana, & Nawaz, 2018; Sobota et al., 2017; Pouvrasseau et al., 2017). Accordingly, this study proposes that: Proposition 1: Specific needs of the virtual training platform will increase training effectiveness among disabled people.
Combination of Virtual Training with Coaching-on-the-Job
Searches using "combination of virtual training with coaching-on-the-job among disabled people" and "combination of virtual training with coaching-on-the-job among disabled people and training effectiveness" returned no relevant articles. Accordingly, the researchers refined the search keywords to be more general and omitted "disabled people" (i.e., combination of virtual training with coaching-on-the-job and training effectiveness). A study conducted by Towson, Taylor, and Tucker (2018) found that participants demonstrated improvement in communication skills after virtual simulation and coaching as an intervention. Cheng et al. (2018) asserted that coaching-on-the-job is one of the effective strategies to enable disabled people (i.e., with physical, sensory, and intellectual disabilities) to achieve better employment outcomes. Knowledge obtained during training sessions can be reinforced through direct coaching received from a supervisor or mentor. Accordingly, the probability of transferring knowledge to the workplace setting will be higher. Therefore, this study proposes that: Proposition 2: Combination of virtual training with coaching-on-the-job will increase training effectiveness among disabled people.
Inclusion in Organizational Change
Issues related to inclusion in organizational development and change are mostly discussed in diversity management and workplace disability management studies (Roberson, Ryan, & Ragins, 2017). Organizational change refers to efforts targeted towards organizational goal achievement. It can be either proactive, in which the organization initiates a change according to its current organizational strategy, or reactive, in which the organization develops an action plan according to emerging needs. Involvement of disabled employees in organizations' decision-making processes could increase their organizational citizenship and responsibilities. It can trigger the need for change and therefore reduce resistance to change (Gagnon & Collinson, 2017). As a result, they are more self-motivated to transfer the knowledge gained during training to their workplace (Suresh & Dyaram, 2020). Therefore, it is proposed that: Proposition 3: Inclusion in organizational change will increase training effectiveness among disabled people.
Moderating Role of Mindfulness
Mindfulness can be defined as non-judgmental acceptance of the current state of life (Kappen, Karremans, & Burk, 2019; Ramasubramanian, 2017; Rayan & Ahmad, 2016; Tan & Martin, 2016). A few other definitions given by scholars include "the act of noticing new things, a process that promotes flexible response to the demands of the environment" (Pagnini, Bercovitz, & Langer, 2016, p. 91), a construct related to "optimism, self-efficacy, and adaptability" (Malow & Austin, 2016, p. 81), and "paying attention to what's happening in the present moment in the present mind, body and external environment, with an attitude of curiosity and kindness" (MAPG, 2015, p. 5, as cited in Porter, Bramham, & Thomas, 2017). While not many studies have been conducted, the researchers want to highlight the potential moderating effect of mindfulness because a) many researchers have proposed the applicability of mindfulness at the workplace (e.g., Krick & Felfe, 2020; Ramaci et al., 2020; Wei, Fenfen, & Chen, 2020; Charoensukmongkol, 2016), and b) its attributes reflect individual self-awareness in responding to the demands of the environment. This study therefore proposes that: Proposition 4: Mindfulness moderates the relationships between all the proposed independent variables and training effectiveness among disabled people.
Implications
Virtual training is necessary as an ongoing organizational strategy to remain competitive in the marketplace. With all the limitations faced by organizations and businesses during COVID-19, virtual training seems to be a must, regardless of employees' abilities. This study, which identified factors influencing virtual training effectiveness, has also identified the potential moderating effect of mindfulness based on Training Engagement Theory. Since this study is supported solely by literature, empirically based quantitative research should be conducted in future to confirm the proposed model. In addition, a qualitative study could be conducted to explore the processes involved and the interconnectedness between them, to produce a more holistic training effectiveness model. Since its establishment in 2015, only limited empirical studies have been conducted to confirm the Training Engagement Theory, as most researchers and scholars still consider Kirkpatrick (1959), Baldwin and Ford (1988), and Baldwin, Ford, and Blume (2012) the pioneers that provide the foundation of training evaluation measures. This study, therefore, may attract more studies within the area to further expand the theory.
Training effectiveness does not depend solely on individual trainee responsibility. The outcomes of any training (i.e., knowledge, skills, and attitudes) can only be fully transferred with operational support at all levels of the organization, beginning before the training is attended. During the sessions, the trainer and the training provider should develop a platform appropriate to the targeted audience.
Deterministic and probabilistic regularities underlying risky choices are acquired in a changing decision context
Predictions supporting risky decisions could become unreliable when outcome probabilities temporarily change, making adaptation more challenging. Therefore, this study investigated whether sensitivity to the temporal structure in outcome probabilities can develop and remain persistent in a changing decision environment. In a variant of the Balloon Analogue Risk Task with 90 balloons, outcomes (rewards or balloon bursts) were predictable in the task’s first and final 30 balloons and unpredictable in the middle 30 balloons. The temporal regularity underlying the predictable outcomes differed across three experimental conditions. In the deterministic condition, a repeating three-element sequence dictated the maximum number of pumps before a balloon burst. In the probabilistic condition, a single probabilistic regularity ensured that burst probability increased as a function of pumps. In the hybrid condition, a repeating sequence of three different probabilistic regularities increased burst probabilities. In every condition, the regularity was absent in the middle 30 balloons. Participants were not informed about the presence or absence of the regularity. Sensitivity to both the deterministic and hybrid regularities emerged and influenced risk taking. Unpredictable outcomes of the middle phase did not deteriorate this sensitivity. In conclusion, humans can adapt their risky choices in a changing decision environment by exploiting the statistical structure that controls how the environment changes.
It has remained unclear whether representations of structurally different outcome probabilities are formed and remain stable if outcome probabilities unexpectedly change during risky decision making. A large portion of previous research on decision making has attempted to understand the differences between decisions from description and decisions from experience. In descriptive paradigms, outcome probabilities are provided a priori and known by the individuals; in experiential paradigms, the probability distribution of the outcomes is learned from trial-by-trial experience over repeated choices 1,2,10. Confirming this distinction, different sets of behavioral tendencies have been observed within the two types of decisions (e.g., 11-14). These behavioral tendencies are assumed to be supported by different processes and accounted for by separate theories 2,3,10. Meanwhile, successful modeling attempts have already connected the two decision domains and led to a better understanding of the underlying choice behavior (e.g., 10,15-17). Since, in day-to-day life, we make decisions mostly from experience in uncertain situations where the role of feedback is crucial, we used an experiential paradigm in this study. To investigate the acquisition of outcome probabilities in a dynamically changing decision context with feedback, we implemented experimental manipulations in the Balloon Analogue Risk Task (BART). In its original form, the BART measures real-life risk taking by simulating an uncertain decision environment with probabilistic reward and loss outcomes 18-22. Task completion involves repeated decisions either to inflate a virtual balloon further and run the risk of a balloon burst or to collect the reward already accumulated from previous successful balloon inflations.
One aspect of task performance is the acquisition or learning of outcome probabilities. This develops by experiencing successful balloon inflations and balloon bursts resulting from the adjustments of pumping behavior. As a measure of learning, some studies have quantified trial-by-trial reactivity in the BART (e.g., 23-27). Moreover, current computational models have become increasingly successful in capturing the learning aspect of task performance 28,29. However, with experimental methods, it has scarcely been investigated how the direct manipulation of outcome probabilities alters the learning process and thereby risk-taking behavior in the BART 30-34. These efforts are summarized next.
Some of the earlier studies using experimental manipulations investigated how initial experience with lucky (bursts after several pumps) or unlucky (bursts after a few pumps) series of balloons changed risk-taking behavior later in the task when burst probabilities became unbiased 30-32. According to the results, individuals smoothly adjusted their risk-taking behavior to the changed probabilities. Meanwhile, this adjustment was more modest after unlucky initial events, suggesting persistent risk aversion. Other studies manipulated burst probabilities over several balloons 33,34. The different burst probabilities were signaled with different colors of the balloons (cf. 19), but participants were not informed about the exact burst probabilities related to each color. The order of colors was either randomized within each task block 34 or consistent across a set of balloons and changed at the end of each set 33. Mapping of colors to burst probabilities was either stable 34 or changing over the task 33. In the former case, participants had to learn the different burst probabilities, while in the latter case, they had to continuously update the learned probabilities in each set of balloons. According to the results, sensitivity to the different burst probabilities emerged irrespective of whether the color mapping was stable or changing, and carryover effects were found across the burst probabilities 33. To examine learning effects in the BART, the present study also applies different and changing burst probabilities, albeit in an unsignaled manner.
The deep structure of burst probabilities can be based on several types of regularities. Learning differently structured regularities has been extensively investigated in unsupervised learning environments, especially in the statistical-sequence learning literature 35-38. In these environments, sensitivity to at least two types of sequential regularities, deterministic and probabilistic, has been found. In deterministic regularities, elements usually follow a fixed sequence. For instance, in a reaction time task, the location of each visual stimulus on the screen would follow the four-element sequence of "left, up, down, right" that repeats multiple times in a task block. Thus, consecutive sequence elements, the location of the next stimulus in the given example, can be predicted from the previous ones with 100% certainty. That is, "down" is always followed by "right" and "right" is always followed by "left".
In contrast, if some noise (e.g., random elements) is embedded within the sequence, the resulting regularity becomes probabilistic. In the example above, the repeating sequence can be modified by inserting a random location out of the four possible ones between every two successive elements (i.e., left, random, up, random, down, random, right, random). Therefore, in probabilistic regularities, the predictability of a given element is less than 100% 35,39,40. Altogether, while sure predictions can be formed with deterministic regularities, uncertain predictions arise with probabilistic regularities 41. Deterministic and probabilistic information might not be treated along a continuum, since their learning has been modeled by two distinct hypothesis spaces 41,42, and risky decisions based on probabilistic and deterministic regularities have been supported by different prefrontal areas in a complex gambling task 43.
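The construction of such sequences is easy to make concrete. The following R sketch generates both variants for the four-location example above; the sequence length and the random seed are arbitrary illustrative choices, not taken from any cited study.

```r
# Four possible stimulus locations from the example above.
locations <- c("left", "up", "down", "right")

# Deterministic regularity: the fixed four-element sequence repeats,
# so every element is predictable with 100% certainty.
deterministic <- rep(locations, times = 5)

# Probabilistic regularity: a random location is inserted between
# every two successive pattern elements, so the predictability of a
# given element drops below 100%.
set.seed(1)
pattern <- rep(locations, times = 5)
noise   <- sample(locations, length(pattern), replace = TRUE)
probabilistic <- as.vector(rbind(pattern, noise))  # pattern, random, pattern, ...
```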
Consequently, manipulating the predictability of outcomes in the BART with the help of deterministic and probabilistic regularities could be a method to alter the learning process. We follow this approach in the present study. In the context of the BART, the deterministic and probabilistic aspects of the regularity pertain to the probability of balloon bursts. Controlled by the deterministic regularity, balloons burst with certainty after fixed pump numbers. This fixed sequence of pump numbers is repeated throughout the task, creating a dependency across balloons. With the probabilistic regularity, the risk of a balloon burst increases with each successive pump, but a balloon burst is not a sure event until a fixed point. The latter are the features of the original BART 19 , which can also be labeled as the "probabilistic" task version.
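To make these two burst rules concrete before turning to the design, the following R sketch implements them as per-pump burst probabilities. The tolerance values and the cut-offs at pumps 2 and 20 follow the task description given later; the exact truncated power function is specified in the study's Supplementary Tables, so the exponent used here is only an assumed placeholder.

```r
# Deterministic rule: burst probability is a step function of the pump
# count (zero up to a fixed tolerance, one on the next pump).
burst_prob_deterministic <- function(pump, tolerance) {
  as.numeric(pump > tolerance)
}

# Probabilistic rule: bursts are disabled on pumps 1-2, then burst
# probability rises with each successive pump and reaches 1 at pump 20.
burst_prob_probabilistic <- function(pump, max_pump = 20, exponent = 2) {
  if (pump <= 2) return(0)
  min(1, ((pump - 2) / (max_pump - 2))^exponent)
}

burst_prob_deterministic(11, tolerance = 10)  # 1: certain burst
burst_prob_probabilistic(10)                  # ~0.20 under these assumptions
```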
The attempt to respond to repeating regularities (similarity-based learning) has been shown to account for the behavioral tendencies observed during experiential decisions 5. According to the "contingent average and trend" model and its extension, when making decisions, humans are assumed to be sensitive mostly to the sequential pattern of outcomes but sometimes also to the local trend of outcomes (most recent experiences) 5. Thus, not all past experiences are considered relevant, only the sequence of those outcomes that is similar to the most recent sequence. In other words, the similarity of current and previous events is judged according to sequences, and individuals expect the reappearance of these sequences 5. Therefore, with the use of repeating deterministic and probabilistic sequences controlling balloon bursts, we could test whether individuals learn repeating regularities in the BART. By alternating different probabilistic regularities according to a fixed repeating pattern, the combination of deterministic and probabilistic aspects can be attained. This "hybrid" manipulation could make predictions even more uncertain, since it changes the within-balloon uncertainty of burst probabilities over the task 4,41. Therefore, this manipulation is also used in the present study. Uncertainty of predictions can be further manipulated by introducing transitions across different regularities. Earlier studies investigated how individuals track the transitions between random and non-random (regular) sequences of sensory stimuli 41,45. As suggested by the phasic pupil dilatation responses of Zhao et al. 45, the regular to random transition has been represented as an abrupt change in the stimulus regularities, signaling unexpected uncertainty. Meanwhile, without actively monitoring the transition, the opposite, random to regular transition has induced a gradual update or refinement of representations. This usually occurs under expected uncertainty and depends on evidence accumulation 4,45,46. When the random to regular transition was investigated with consideration of the type of the regular sequence, deterministic regularities were detected abruptly, while probabilistic regularities were detected gradually 41.
Relying on these studies, we insert two between-balloons change points in the BART: We use a regular sequence over the first and final thirds of the task and random burst probabilities over the middle third. One third of the task (i.e., a task phase) consists of 30 balloons. Thus, by reintroducing the regular sequence in the final phase, we use a regular to random as well as a random to regular transition. This design enables us to test whether the representations of predictable regularities that emerged over the first third of the task remain persistent and become reactivated when the regularities reappear (cf. 6,47). Hence, we examine whether the gradual effect of the random to regular transition is altered by sensitivity to the reappearing regularity 4. With both deterministic and probabilistic regularities, the differential effect of transitions as a function of uncertainty is also explored 41.
Altogether, the present experiment manipulated burst probabilities in the BART by creating three types of predictable regularities (see Fig. 1). One group of participants completed the original, probabilistic BART where larger balloons were coupled with increased risks of burst (see Fig. 1b). Another group completed a deterministic version where a three-balloon-long sequence was repeated. This sequence ensured that balloons always burst after a medium, low, and high number of pumps, respectively (see Fig. 1a). These pump values were fixed a priori and remained the same. The last group completed a "hybrid" version where the medium, low, and high values of the three-balloon-long sequence changed in a probabilistic manner instead of being fixed (see Fig. 1c). The experimental design of all groups (conditions) consisted of three task phases: While the regularity was present in the first and final task phases (predictable phases), it was completely removed for the middle phase (unpredictable phase). Thus, outcome probabilities became temporarily random. Importantly, participants were not informed about either the presence or the absence of the regularity, and they did not have to track any change in the regularities. Nevertheless, they were asked in a post-task interview whether they became aware of the regularities. They completed the same number of balloon trials (i.e., 30) in the first, middle, and final task phases.
We hypothesized that risk-taking behavior measured by the number of successful pumps would be higher in the predictable (first and final) phases than in the unpredictable (middle) phase, irrespective of condition (type of the underlying regularity). This was based on the concept that the acquisition of predictable sequences increases performance related to the elements of the sequence 35,37 . The number of pumps would increase mostly during the early trials of the BART, as experience with outcome probabilities accumulates and participants become more prone to take risks. This is a usually observed behavioral pattern in the original BART (e.g., 22,48,49 ), which we also expected with the present manipulations.
As stronger (or at least less uncertain) predictions can be formed with deterministic than with probabilistic regularities 41, we hypothesized that sensitivity to the repeating balloon sequence would emerge in the deterministic condition. Furthermore, as the probabilistic regularities repeat in a deterministic fashion in the hybrid condition, we expected the emergence of sensitivity even to this structure. According to sequence-based similarity as a mechanism of acquiring outcome probabilities 5, the sensitivity or learning effect would be captured by different numbers of successful pumps on the respective balloon sizes (medium, small, large) and by more optimal risk-taking behavior. Knowledge of deterministic and probabilistic regularities has appeared persistent and resistant to interference in unsupervised learning environments 6,47. Therefore, the differentiation of balloon sizes and optimal risk-taking behavior would persist or become even more emphasized by the end of the task (in the final phase). However, the learning effect in the hybrid condition would emerge only in a gradual manner as evidence accumulates, because of the uncertainty generated by the different probabilistic regularities (cf. 4,45,46). Based on the findings of Maheu et al. 41, we also hypothesized a gradual increase of pump number in the final phase of the probabilistic condition, after the random to regular transition occurred.
Methods
Participants. Altogether, 141 healthy young adult participants were recruited from university courses. These undergraduate courses were explicitly dedicated to participation in different psychological experiments. Therefore, to fulfill the course requirements and receive course credit, students had to participate in some experiments during the semester. Beyond the partial course credit in exchange for participation in the present experiment, they were not given a bonus based on the total score gained in the task. However, according to the impressions of the experimenters during debriefing, they were still motivated to perform the task as if gains and losses were real. Participants were randomly assigned to three different experimental conditions labeled as deterministic (n = 46), probabilistic (n = 48), and hybrid (n = 47). These experimental conditions are described in detail below. Each participant performed only one experimental condition to limit carryover effects across conditions. All participants had normal or corrected-to-normal vision and none of them reported a history of any neurological and/or psychiatric condition. None of them were excluded after participation. They provided written informed consent before enrollment. The experiment was approved by the United Ethical Review Committee for Research in Psychology (EPKEB) in Hungary and by the research ethics committee of Eötvös Loránd University, Budapest, Hungary; and it was conducted in accordance with the Declaration of Helsinki. Descriptive characteristics of participants are presented in Table 1.
Stimuli, task, and procedure. The detailed description of the task and procedure is provided in the Supplementary Methods; a summary can be read here. The surface structure and appearance of the BART were the same as described in previous studies 19,24,50-53. Participants were instructed to achieve as high a score as possible by inflating empty virtual balloons on the screen without bursting them. They were also told that they were free to pump as much as they felt like; however, the balloon might burst. Each successful pump increased the size of the given balloon and the gained score by one point. After each successful pump, participants decided whether to continue inflating the balloon or to finish the given balloon trial by collecting the accumulated score. In the latter case, the balloon trial ended, and the accumulated score was transferred to a virtual permanent bank. An unsuccessful pump resulted in a balloon burst. This also ended the balloon trial and the accumulated score on the given balloon was lost, but this was not subtracted from the score in the permanent bank. Participants had to inflate altogether 90 balloons that were assigned to three 30-balloon-long task phases. The first and final phases had the same deep structure within conditions, but they differed across the deterministic, probabilistic, and hybrid conditions. In the deterministic condition, a three-balloon-long sequence repeated 10 times in the first and final phases. The repeating sequence ensured that balloon bursts occurred after fixed pump numbers (balloon tolerances).

Figure 1. Design of the experiment. In this variant of the BART, two aspects of the reward scheme were manipulated. First, while decision outcomes (rewards or balloon bursts) were predictable in the first and final phases of the task, these were random (unpredictable) in the middle phase. Second, the structure of the regularity controlling the outcomes was manipulated. (a) In the deterministic condition, decision outcomes of every three balloons were controlled by a repeating sequence of three step functions. This ensured that the first balloon of the sequence could be inflated up to a medium size, the second to a small size, and the third one to a large size. Therefore, while these balloons burst with certainty after the 11th, 5th, and 17th pumps, respectively, burst probability was zero at lower pump numbers. (b) In the probabilistic condition, decision outcomes were controlled by a single truncated power function. While balloon bursts at the 1st and 2nd pumps were disabled, burst probability increased with each successive pump until the 20th, where a balloon burst was certain. (c) In the hybrid condition, decision outcomes of every three balloons were controlled by a repeating sequence of three truncated power functions. This facilitated that the first balloon of the sequence could be potentially inflated up to a medium size, the second to a small size, and the third one to a large size. Balloon bursts at the 1st and 2nd pumps were disabled and burst probability increased with each successive pump, differently for the three balloon sizes. A balloon burst was certain at the 20th pump for medium and large balloons and at the 10th pump for small balloons. Each task phase consisted of 30 balloons. Each participant performed only one of the three experimental conditions. Participants were not informed about the regularity controlling the outcomes in any of the conditions.
Thus, the first balloon of the sequence could be inflated up to a medium size, the second to a small size, and the third one to a large size (see Fig. 1a, Supplementary Table S1). In the probabilistic condition, the structure of the first and final phases followed that of the original task version 19. Thus, each successive pump increased not only the chance to obtain a higher score but also the probability of a balloon burst and of losing the accumulated score (see Fig. 1b, Supplementary Table S2). This contrasts with the deterministic condition, where fixed balloon tolerances repeated. In the hybrid condition, again, a three-balloon-long sequence repeated 10 times in the first and final phases. However, instead of fixed balloon tolerances such as in the deterministic condition, three probabilistic regularities repeated to control balloon bursts. The three regularities allowed the first balloon of the sequence to be inflated up to a medium size at most, the second up to a small size, and the third up to a large size (see Fig. 1c, Supplementary Tables S2-S4). In the middle task phase of all conditions, tolerance values were random and burst probabilities did not increase within each balloon (across pumps). As these values were selected randomly for each balloon trial, the random "sequence" was not fixed across the conditions and participants.
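To illustrate how such a 90-balloon schedule could be assembled in the deterministic condition, the R sketch below repeats the medium-small-large tolerances ten times in each predictable phase and draws random tolerances for the middle phase. The 2-19 range for the random draws is an assumption for illustration; the study drew these values anew for every balloon trial and participant.

```r
set.seed(42)  # arbitrary; the study randomized per participant and trial

# Fixed tolerances (maximum safe pumps): medium, small, large.
seq_tolerances <- c(medium = 10, small = 4, large = 16)

predictable_phase <- rep(seq_tolerances, times = 10)   # 30 balloons per phase
random_phase      <- sample(2:19, 30, replace = TRUE)  # assumed range

schedule <- c(predictable_phase, random_phase, predictable_phase)
length(schedule)  # 90 balloons in total
```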
Participants were told that they were going to inflate 30 balloons in each task phase at their own pace. They were also told that the starting score was zero in all phases; however, the overall total score at the end of the task was the sum of the total scores collected in each phase. The three phases were separated by short breaks in which participants could take a few seconds to rest if needed. Importantly, participants were not informed about the regularity controlling the outcomes (balloon inflation or burst) in any of the conditions. Moreover, no information was provided about the change in this regularity across the phases, and they did not have to track this change. Therefore, this task measured decision making under uncertainty and experience-based risk, at least during the early trials 2,22,48,51,54.
A short verbal interview with two questions was administered by the experimenters immediately after finishing the task to check whether participants gained awareness about the regularities guiding balloon bursts and/or the change in these regularities. The interview was recorded using the notebook's built-in recording software. Participants were asked (1) how they solved the task and how they tried to maximize their scores; and (2) whether they noticed any regularity in the sequence of balloon bursts. The interviews were rated for two aspects. First, they were evaluated for the awareness gained about the regularities underlying balloon bursts in the deterministic and hybrid conditions. Second, they were rated for detecting the change in the underlying structure between the three phases of the task. Further details of the rating protocol are described in the Supplementary Note (entitled Post-task interviews on the awareness of the hidden structure).

Table 1. Descriptive data of demographic variables and BART performance in the three conditions. Each task phase consisted of 30 balloons. First and final phases had the same structure within conditions, but these phases differed across conditions (see main text for details). In the random (middle) phase, random balloon tolerance values were used, without the increasing burst probabilities within each balloon. The conditions did not differ in gender (p = 0.324), age (p = 0.852), or education (p = 0.757).

The experimental session took approximately one hour because other tasks measuring different aspects of cognitive performance (e.g., procedural learning, working memory) were also administered. Completion of the BART and the related verbal interview took 30-35 min. Results of the other tasks on a subsample of participants are reported in Zavecz et al. 55.
Data analysis. Data analysis was performed in three steps to evaluate in detail whether the hidden task structure influenced participants' risk-taking behavior. Each step is described below in separate sections.
Analyzing the effects of outcome predictability and experience (Model 1). The first analysis step tested the change of risk-taking behavior across the conditions as a function of outcome predictability and experience with the task. In this analysis, outcome predictability corresponded to task phase. Each balloon was assigned to either the first or the second half (i.e., 15 balloons) of the given task phase to track and interpret how experience with outcome probabilities changed risk-taking behavior as the task progressed.
The number of pumps on each balloon that did not burst was used as the dependent variable in the first analysis. The number of pumps on non-burst balloons (adjusted pump number) has been regarded as an index of deliberate, unbiased risk-taking behavior; and it is conventionally used in the BART literature 19,25,30,56,57 . In the current experimental context, pumps on non-burst balloons can indicate the true value of risk taking, which participants intentionally choose because of the acquired sensitivity to balloon tolerances.
We performed linear mixed-effects analysis. It is beneficial to analyze the current data this way because, with these models, the non-independence of observations nested within participants and balloon trials (i.e., balloon pumps are repeated-measures observations) can be accounted for, the dependent variable does not have to be aggregated at the level of participants or balloon trials, and missing data are treated more suitably than in repeated-measures or mixed analyses of variance 58-60.
The analysis was performed using the lmer function implemented in the lme4 package 61 of R 62. The factors Condition (deterministic, hybrid, probabilistic), Phase (first, final), Half (1st, 2nd), and their two-way and three-way interactions were entered as fixed effects into the model. These factors were treated as categorical predictors. Participants were modeled as random effects (random intercepts). The model was fit with restricted maximum likelihood parameter estimates (REML). The p-values for fixed effects were computed using Satterthwaite's degrees of freedom method with the lmerTest package 63.
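A minimal sketch of how Model 1 could be specified with these packages is shown below. The data frame and column names (nonburst_trials, pumps, condition, phase, half, participant) are placeholders rather than the study's actual variable names.

```r
library(lme4)
library(lmerTest)  # adds Satterthwaite p-values to lmer summaries

# Sum coding for all categorical predictors, matching the coding
# scheme described in the text.
options(contrasts = c("contr.sum", "contr.poly"))

# Model 1: pumps on non-burst balloons as a function of Condition,
# Phase, Half, all their interactions, and a random intercept per
# participant, fit with REML.
model1 <- lmer(pumps ~ condition * phase * half + (1 | participant),
               data = nonburst_trials, REML = TRUE)
summary(model1)  # fixed effects with Satterthwaite degrees of freedom

# Pair-wise comparisons, e.g., task halves within each phase:
# emmeans::emmeans(model1, pairwise ~ half | phase)
```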
We used sum coding, and the probabilistic condition, first phase, and 1st half were chosen as the reference ("baseline") levels of the given factors. Pair-wise comparisons were performed with the emmeans package 64.

Analyzing the sensitivity to the repeating balloon sequence (Models 2 and 3). The second analysis step directly tested whether participants in the deterministic and hybrid conditions acquired sensitivity to the repeating balloon sequence and adjusted their risk-taking behavior accordingly. This step involved data registered only in the first and final phases of the deterministic and hybrid conditions, where the repeating regularity defined the different balloon tolerances. Thus, the probabilistic condition and the random phases were omitted from this analysis. The deterministic and hybrid conditions were analyzed separately.
Again, the number of pumps on each balloon that did not burst was used as the dependent variable. The factors Size (medium, small, large), Phase (first, final), and their two-way interaction were entered as fixed effects into the separate linear mixed-effects models. The reference levels of the factors were medium balloon size and first phase. Otherwise, modeling was performed in the same way as in the case of Model 1.

Analyzing the sensitivity to the optimal pump number (Model 4). As there was no repeating regularity in the probabilistic condition, the previous models could not compare acquired sensitivity to the task's structure across the conditions. Therefore, the third analysis step tested whether sensitivity to the optimal pump number in the predictable phases of the task differed across the conditions. As described in Supplementary Tables S2-S4, in the hybrid condition, 13, 6, and 19 could be considered the optimal pump numbers for the medium, small, and large balloons, respectively. Similarly, 13 could be the optimal pump number in the probabilistic condition (see Supplementary Table S2). In the deterministic condition, the fixed balloon tolerance values (medium: 10 pumps, small: 4 pumps, large: 16 pumps) were considered the optimal pump numbers, because the probability of a balloon burst was zero until the tolerance value was reached but one for the next pump (see Fig. 1a, Supplementary Table S1).
As the optimal pump number is interpretable only in the non-random phases, this analysis involved data registered only in the first and final phases. To determine how close a participant approached the optimum on each balloon, an efficiency score was calculated as the difference between the optimal and actual pump numbers (optimal minus actual) divided by the optimal pump number. While a positive value of this ratio score indicates that the pump number remained below the optimum, a negative value reflects that the pump number exceeded the optimum. Lower absolute values mean less deviation from the optimum and thus more efficient (more optimal) pumping behavior. Because a ratio score was used, sensitivity to the optimum could be more easily compared across the conditions. Different from the previous analyses, the dependent variable (efficiency score) was calculated based on the number of pumps on all balloons, including burst and non-burst balloons. This measure could be more suitable for calculating an overall efficiency score, as it also contains those balloon trials in which participants could have intended to pump the balloons up to a larger, more optimal size but the balloons "accidentally" burst (i.e., in the probabilistic and hybrid conditions).
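For concreteness, the efficiency score reduces to one line of R; the optimal and actual pump numbers below are illustrative values only.

```r
optimal <- c(10, 4, 16)  # e.g., the deterministic tolerances
actual  <- c(8, 5, 16)   # hypothetical observed pump numbers

efficiency <- (optimal - actual) / optimal
efficiency  # 0.20 -0.25 0.00: below, above, and exactly at the optimum
```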
Regarding the analysis, the factors Condition (deterministic, hybrid, probabilistic), Phase (first, final), and their two-way interaction were entered as fixed effects into the linear mixed-effects model. The reference levels of the factors were probabilistic condition and first phase. Otherwise, modeling was performed in the same way as in the case of previous models. The schematic structure of Model 4 is summarized below:

Model 4: Efficiency score on all balloons in the first and final phases ~ Condition, Phase, Condition * Phase + (1 | participant).
Results
To ease meta-analytic work, Table 1 shows the phase-wise performance (descriptive data) of the different conditions measured by classical indices of the BART such as the mean pumps on non-burst balloons, the number of balloon bursts, and the total score 25 . In the results section below, "pump number" refers to the number of pumps on non-burst balloons. The summaries of all effects included in the linear mixed-effects models (Models 1-4) are presented in Tables 2, 3 and 4. Only simplified statistics are provided in the main text. Since the post-task interviews were conducted and evaluated according to an unstandardized protocol and 12 of them were missing or inadequate, only the related descriptive results are provided below and in the Supplementary Note.
The effects of outcome predictability and experience on risk-taking behavior (Model 1). Model 1 tested the change of risk-taking behavior across the conditions as a function of outcome predictability and experience with the task. According to the results (see Table 2), pump number was significantly lower in the hybrid condition than the grand mean (β = − 0.53, t = − 2.11, p = 0.037). Furthermore, pump number was significantly higher in the random (β = 0.41, t = 9.06, p < 0.001) and final (β = 0.59, t = 12.95, p < 0.001) phases of the task, as well as in the 2nd half of the task phase (β = 0.29, t = 9.19, p < 0.001).
However, as shown by the significant Random Phase * 2nd Half (β = − 0.22, t = − 4.89, p < 0.001) and Final Phase * 2nd Half (β = − 0.29, t = − 6.37, p < 0.001) interactions and pair-wise comparisons, pump number increased significantly between the task halves only in the first phase (p < 0.001) and not in the random and final phases (ps ≥ 0.208). Thus, the steep rise of pump number characterized only the first task phase (see Fig. 2). Meanwhile, although outcomes were unpredictable (random) in the middle phase, this did not decrease overall pump number. The condition-related interactions and pair-wise comparisons (see Table 2) indicate that in the deterministic condition, pump number increased significantly from phase to phase, even from the random to the final phase (p < 0.001, see Fig. 2a). However, this final vs. random increase was not present in the hybrid and probabilistic conditions (ps ≥ 0.509, see Fig. 2b, c). In addition, although pump number abruptly increased from the first to the random phase in the hybrid condition, in the final phase, it was still lower than in the deterministic (p = 0.042) and probabilistic conditions (p = 0.055).
The three-way interaction of Deterministic * Random Phase * 2nd Half was also significant (β = − 0.14, t = − 2.14, p = 0.032). This suggests that the increase of pump number over the task phases, which differed across the conditions, was modulated by task halves. These differences across the task halves are detailed in the caption of Fig. 2. In essence, the 2nd half of the random phase was comparable to that of the first phase in the deterministic condition, while it was comparable to that of the final phase in the other two conditions (see Fig. 2). This pattern of results emerged because performance in the random phase, relative to the predictable phases, differed in the deterministic condition from that in the probabilistic and hybrid conditions.

Table 4. Summary of the linear mixed-effects model testing the sensitivity to the optimal pump number in the predictable task phases (Model 4). Dependent variable: efficiency score (the difference of optimal minus actual pump number divided by the optimal pump number). This is determined for all balloons (irrespective of balloon burst). Coding scheme: sum coding. The reference levels of the factors were probabilistic condition and first phase. Significant effects are in bold (except the intercept). SE: standard error.

Altogether, the deterministic condition was characterized by a large increase in pumps between the first and final phases and comparable pump numbers between the 2nd half of the first phase and the entire random phase. This suggests that the regular to random transition might have influenced risk taking differently in the deterministic condition than in the others. When outcomes became predictable again in the final phase, only participants of the deterministic condition increased risk taking as compared with the random phase. However, these analyses do not show whether the repeating balloon sequence was indeed acquired and reactivated after the random to regular transition. The second set of analyses investigates this question.
Sensitivity to the repeating balloon sequence. Deterministic condition (Model 2). Model 2 tested whether participants of the deterministic condition acquired sensitivity to the repeating balloon sequence and adjusted their risk-taking behavior to fit this sequence. Results showed (see Table 3) that they differentiated across the three balloon sizes: Small balloons were pumped significantly less (β = − 3.56, t = − 35.22, p < 0.001) and large balloons were pumped significantly more (β = 2.90, t = 37.48, p < 0.001) than the grand mean and the medium balloons (ps < 0.001). Furthermore, balloons were pumped to a significantly larger size in the final than in the first task phase (β = 0.75, t = 12.31, p < 0.001).
As per the significant Small * Final Phase (β = − 1.02, t = − 10.23, p < 0.001) and Large * Final Phase (β = 0.79, t = 10.27, p < 0.001) interactions, the difference between the balloon sizes increased by the final phase (see also Table 3). While both medium (p < 0.001) and large (p < 0.001) balloons were pumped to a significantly larger size, small balloons were pumped to a similar extent during the final task phase (see Supplementary Table S5). This suggests that knowledge of the repeating sequence became more consistent with experience and guided deliberate risk-taking behavior.
These results are also visible in Fig. 3a: In the first phase of the deterministic condition, pump numbers start to gradually follow the varying balloon tolerances of the medium-small-large repeating sequence. In the final phase, this pattern appears to become stable. Altogether, participants of the deterministic condition adjusted their risk-taking behavior to fit the balloon tolerances and differentiated across the three tolerance values.
Hybrid condition (Model 3). Model 3 tested whether participants of the hybrid condition acquired sensitivity to the repeating balloon sequence. Results showed (see Table 3) that small balloons were pumped significantly less than the grand mean (β = − 0.90, t = − 7.42, p < 0.001) and the medium balloons (p < 0.001). Although large balloons were pumped significantly more than the grand mean (β = 0.53, t = 5.89, p < 0.001), their pump numbers did not differ significantly from that of medium balloons (p = 0.449). Altogether, participants differentiated between small and medium balloons but not between medium and large balloons. Regarding the task phase, significantly more pumps occurred in the final than in the first phase (β = 0.61, t = 8.48, p < 0.001). The Small * Final Phase and Large * Final Phase interactions were non-significant (see Table 3, Supplementary Table S5), indicating that any knowledge of the repeating sequence did not reliably strengthen by the final task phase.

Figure 2 (caption, continued). Error bars denote standard error of mean. Considering the 1st task half, pump number significantly increased across the first, random, and final phases in the deterministic (ps ≤ 0.001) but not in the other conditions (ps ≥ 0.855 between the random and final phases). Considering the 2nd task half, pump number was similar in the first and random phases (p = 0.575) and higher in the final phase (ps < 0.001) in the deterministic condition. Meanwhile, pump number was comparably higher in the random and final phases than in the first phase in the hybrid and probabilistic conditions (ps ≤ 0.002).
In sum, participants of the hybrid condition partially adjusted their risk-taking behavior to fit the repeating probabilistic regularities. It seems that they remained insensitive to the difference between medium and large balloons even by the final task phase (see also Fig. 3c). It is likely that participants acquired that "small" and "larger" balloons followed one another instead of fully tracking the medium-small-large repeating sequence.
Post-task interviews. Post-task interviews tested whether participants became aware of the hidden task structure. Therefore, these interview ratings could complement the behavioral findings on acquired sensitivity. In line with the behavioral results, post-task interviews (see Supplementary Note) suggest that while most participants (72.5%) reported explicit awareness (partial or full) of the repeating balloon sequence in the deterministic condition, only a few participants (19%) reported (partial) awareness in the hybrid condition. Furthermore, 60% as opposed to 16.7% explicitly reported a change between task phases in the deterministic vs. the hybrid condition; and, in the latter condition, no participant at all reported the actual change.

Sensitivity to the optimal pump number (Model 4). Model 4 tested whether sensitivity to the optimal pump number in the predictable first and final phases of the task differed across the conditions. According to the results (see Table 4), while the deterministic condition showed significantly less deviation from the optimal pump values (β = − 0.195, t = − 14.09, p < 0.001), the hybrid condition showed significantly more (β = 0.096, t = 6.98, p < 0.001), as compared with the grand mean. Furthermore, pumping behavior was significantly more optimal in the deterministic condition than in the hybrid and probabilistic ones (ps < 0.001), while the latter two conditions did not differ from one another (p = 0.993; deterministic M = 0.123; hybrid M = 0.414; probabilistic M = 0.417).
Pumping behavior was significantly more optimal in the final than in the first phase (β = − 0.041, t = − 13.51, p < 0.001; first M = 0.359; final M = 0.277). As per the significant Deterministic * Final Phase (β = − 0.020, t = − 4.53, p < 0.001) and Hybrid * Final Phase (β = 0.012, t = 2.80, p = 0.005) interactions, the optimization (change) of pumping behavior from the first to the final phase occurred to a larger extent in the deterministic condition and to a lesser extent in the hybrid one (deterministic M = 0.121; hybrid M = 0.058; probabilistic M = 0.067).
Altogether, pumping behavior was the most optimal in the deterministic condition, which became more emphasized by the final task phase. Participants of the hybrid and probabilistic conditions showed comparable but less optimal pumping behavior, which strengthened by the final task phase to a lesser extent than in the deterministic condition.
Discussion
Summary of findings. This study investigated how the transitions between predictable and unpredictable outcomes and the deep structure of predictable outcomes influenced risk-taking behavior in an experience-based risky decision-making environment. To this end, while outcomes were predictable in the first and final phases of the BART, they were unpredictable in the middle phase. The deep structure of predictable outcomes was also manipulated. Either a repeating balloon sequence with deterministic and probabilistic regularities or a single probabilistic regularity was present in one of the three experimental conditions. Participants were informed neither about these regularities nor that the transitions between task phases denoted a change in the predictability of outcomes.
Risk taking in the probabilistic and hybrid conditions increased in the first predictable phase and remained consistent in the remainder of the task, as usually observed in the original BART. This suggests a rapidly emerging general sensitivity to the predictable outcomes. When the predictable outcomes reappeared in the final phase, risk taking increased only in the deterministic condition. In this condition, specific sensitivity to the repeating balloon sequence also emerged, as shown by the successful differentiation of medium, small, and large balloons, which became more emphasized in the final phase. Most participants gained explicit knowledge of the deterministic regularity. This was also reflected by their risk-taking behavior approaching the optimal level, especially in the final task phase. In the hybrid condition, the specific sensitivity to the repeating balloon sequence was partial, as shown by the differentiation of small and "larger" balloons. This sensitivity did not change in the final phase. Knowledge of the hybrid regularity remained largely implicit. In line with these results, participants of the hybrid condition showed less optimal risk-taking behavior, which was comparable to that of the probabilistic condition. In the deterministic and hybrid conditions, unpredictable outcomes did not seem to influence risk taking, at least in terms of the number of successful pumps in the final phase. Relatedly, risk taking in the probabilistic condition was found to be insensitive to changes in predictability across the task phases.
Sensitivity to the underlying regularities. Earlier work used color cues to indicate the different burst probabilities in the BART and found sensitivity to these probabilities 33,34. Importantly, the present study unveiled sensitivity even without signaling any characteristic of the underlying probabilities, at least when these followed a repeating pattern. This is in line with those studies that manipulated the early balloons in an unsignaled manner and found the adjustment of risk taking later in the task 30-32.
The present results also suggest that the acquired knowledge of deterministic and probabilistic regularities is robust and resistant to interference triggered by the interposed unpredictable outcomes. Persistent knowledge of deterministic and probabilistic regularities has been observed in unsupervised learning environments as well 6,47. The learning of both deterministic and probabilistic regularities could occur implicitly or explicitly 35,66-68. In the present experiment, while knowledge of the deterministic regularity was mostly explicit, knowledge of the hybrid regularity remained mostly implicit.
Regarding the acquired knowledge, participants of the hybrid condition did not differentiate between large and medium balloons, as opposed to those in the deterministic condition. Since we used probabilistic regularities in the hybrid condition, bursts of both medium and large balloons could have been experienced even after comparable pump numbers. These negative events might have obscured the differentiation of medium and large balloons. Considering all task phases in the hybrid condition, outcomes varied highly. High variance of outcomes coupled with negative events could have resulted in decreased risk taking, which hindered the full exploration of the hybrid structure (cf. 14,26,69 ).
Note that because of a programming error, the outcomes of the 23rd balloons in the first and final phases of the hybrid condition were controlled by the probabilistic regularity of the medium balloons instead of the small ones. Thus, the 8th repetition (22nd, 23rd, 24th balloons) of the medium-small-large balloon sequence was violated in both phases, since a medium-medium-large sequence appeared instead. After excluding the 22nd, 23rd, and 24th balloons from the first and final phases, the Size by Phase model was refit to the data of the hybrid condition. Results nearly identical to the original ones were obtained, as summarized in Supplementary Table S6. In addition, it seems that the pattern of successful pumps followed the repeating sequence at its next repetition right after the sequence violation (see Fig. 3c). Altogether, the partial sensitivity to the repeating balloon sequence in the hybrid condition remained robust even if the sequence was violated.
Interpretation of results.
In the deterministic condition, there were no reliable signals other than the repeating sequence that predicted balloon bursts. Therefore, in line with the similarity-based model of Plonsky et al. 5, participants could follow the repeating sequence when comparing current and past experiences. When deciding on whether to pump the given balloon further, they might have recognized that current outcomes matched a particular sequence of the recalled past outcomes. As the reappearance of the repeating sequence was possibly expected, experiences gathered during the unpredictable phase could have been labeled as "irrelevant" and did not deteriorate the sequence-based decision strategy in the final phase of the deterministic condition. This interpretation of the observed risk-taking behavior would be in line with the notion that sensitivity to sequential regularities and the detection or search of environmental patterns, even when they are not present (e.g., 70,71), could be fundamental aspects of learning 35,44. In addition, this similarity-based learning model can provide a common ground for the interpretation of various phenomena derived from the research fields of experience-based decision making and unsupervised statistical-sequence learning 38,44.
In the hybrid condition, the uncertainty of predictions was higher because of the probabilistic nature of the repeating sequence 41. Thus, similarity functions other than sequence-based similarity and additional behavioral strategies could have been used when making decisions. Further research should attempt to clarify the processes underlying choice behavior observed in the hybrid condition. The combined reinforcement learning diffusion decision model (RLDDM) might be a good candidate for this purpose, because it could predict the interaction of choice preferences and response times to capture learning effects 15,72. Analysis of response times in the BART (the time needed to initiate the next pump on a given balloon) could reveal the perception of elevated risk levels and uncertainty 58. Hence, in future studies, response times can indicate sensitivity to changes in the underlying regularities.
Since the deep structure of the task was unknown in advance, explorative behavior characterized all conditions in the first phase, reflected by the rapid increase of successful pumps. In the deterministic condition, the regular to random transition induced fundamental changes in the underlying regularity and resulted in (possibly) unexpected outcomes 46. This might explain why risk taking in the random phase stabilized at the level achieved by the second half of the first phase. In the hybrid condition, however, it is not unequivocal whether this transition resulted in unexpected or expected outcomes. Although the use of different probabilistic regularities in the repeating sequence increases uncertainty, some sensitivity to the repeating regularity had still emerged by this point (i.e., the differentiation of balloon sizes). Thus, even the first transition might have violated outcome expectations, but this cannot be clearly inferred from the current results. The second, random to regular transition resulted in abrupt behavior adjustment in both the deterministic and hybrid conditions to fit the repeating balloon sequence (cf. 41). The probabilistic regularities in the hybrid condition did not delay this behavior adjustment. The abrupt change in both conditions might be due to the task relevance of the random to regular transition 45.
Particularly, although participants were not instructed to identify the underlying regularities, based on the post-task interviews, many of them were motivated to do so even in the unpredictable phase to achieve better performance on the task. This would be in line with sequence-based similarity as the process underlying choice behavior. Therefore, if they continuously built representations of the underlying regularities, the random to regular transition signaled the violation of expectations. This unexpected change in the underlying regularities could have promoted further explorative behavior 4,46 , which facilitated matching the current outcomes with the previously experienced repeating sequence of outcomes. However, after recognizing the stability and separability of outcomes in the final phase, participants of the deterministic condition could have started to exploit their reactivated knowledge of the repeating sequence. This was not that clear in the hybrid condition, as risk taking in the final phase was lower than in the deterministic condition.
Only cautious assumptions can be formed about how participants of the probabilistic condition processed the task structure. This condition could be characterized by expected uncertainty 4, due to the structural similarities of the predictable and unpredictable phases. Particularly, balloon tolerance values varied between two and 19 in all task phases. Although burst probabilities did not increase with each pump in the unpredictable phase, the risk level still increased: Individuals had to consider the possibility of gaining even more reward (the added points increased with each pump) or of losing the already accumulated reward 27,73. Therefore, burst probabilities might have been perceived comparably across the task phases. This would be in accordance with the consistent pump number seen from the unpredictable phase throughout the remaining balloons (cf. Fig. 3b) and the subjective reports (post-task interviews). These reports suggest that the structure was "unchanged" throughout the task in the probabilistic condition (see Supplementary Note).
Limitations and directions for future research. Some limitations of the present work should be noted.
First, burst probabilities differed across the conditions because of how we constructed the regularities, which should be avoided in future designs. Second, it has been found that individuals with internalizing symptoms, such as anxiety and depression, might show difficulties in adapting to uncertain decision-making environments because of altered learning rates 34,74-76. Therefore, it would be interesting to investigate whether and how, for instance, trait anxiety at the subclinical level and decision making are related when different and changing underlying regularities are applied in the BART.
Third, most of our participants were females because we recruited them from undergraduate courses that were also characterized by similar gender distributions. Previous studies found lower risk taking in females than in males and other gender differences when performing the BART (e.g., 19,34,77,78). Although the general level of risk taking can be different between the genders, in the present study, the full (deterministic condition) and partial (hybrid condition) sensitivity to the repeating balloon sequence emerged in both the male and female subsamples, similarly to the whole sample (analyses are not reported). Still, future work should strive for equal gender distribution to ensure that the observed effects of risk taking can be generalized to the entire population. Fourth, separating task phases with short breaks might have involuntarily signaled the introduction of new rules, although we intended to use unsignaled and unexpected transitions. The positions of breaks could have directly helped participants to search for the repeating sequence or part of the sequence in the final phase, contributing to the observed persistency of knowledge. A novel experimental design should present balloon trials in a single phase with short breaks that do not fall on the boundaries of the different contexts. Fifth, participants were not paid a bonus based on task performance because of the lack of resources. However, recent work has implied that not only the behavioral indices but also the electrophysiological correlates of negative feedback processing are enhanced if real money instead of hypothetical reward is used in the BART 79,80. It would be interesting to see whether the degree of learning changes across the experimental conditions because of a paid bonus. Sixth, the number of pumps on every balloon trial (both burst and non-burst balloons) could have been used in the first analysis. This index would increase the reliability of the data, because trials of those participants who inflate the balloons to a larger size and thereby more likely experience balloon bursts are also considered 25. Nevertheless, results of the first analysis (Model 1) would have been similar even with this dependent variable (see Supplementary Table S7).
Finally, computational models that capture differences in learning as a function of the regularities' deep structure and the type of transition might be considered15,28,57, complemented by tracking the electrophysiological correlates of uncertainty and feedback processing4,58. Computational modeling would be particularly helpful to directly test the current interpretations of the findings (described in earlier sections), which were derived from existing theoretical approaches and previous experimental and modeling work. With a computational approach, parameter differences across the conditions could be compared to investigate behavioral adjustment.
Conclusions.
This study showed that sensitivity to repeating regularities underlying the outcomes of risky decisions can emerge even if the regularities are temporarily missing. This sensitivity develops without informing individuals about the presence, absence, and reappearance of the regularities. The emergence of this sensitivity depends mostly on the type of the regularities: While completely predictable deterministic regularities can be acquired easily, uncertain predictions based on probabilistic regularities are more challenging. Experiencing intermittent unpredictable outcomes does not seem to disrupt the acquired representations because of their resistance to interfering information. In sum, by the acquisition and expectation of sequential patterns, the present results suggest fast and robust adaptation to changing outcome probabilities in experience-based risky decision making. Moreover, the results also highlight how unsupervised and reward-based learning of structures can be linked.
The structure of the 1L-myo-inositol-1-phosphate synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex demands a revision of the enzyme mechanism.
1L-myo-inositol 1-phosphate (MIP) synthase catalyzes the conversion of D-glucose 6-phosphate to 1L-myo-inositol 1-phosphate, the first and rate-limiting step in the biosynthesis of all inositol-containing compounds. The reaction involves an oxidation, enolization, intramolecular aldol cyclization, and reduction. Here we present the structure of MIP synthase in complex with NAD+ and a high-affinity inhibitor, 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate. This structure reveals interactions between the enzyme active site residues and the inhibitor that are significantly different from those proposed for 2-deoxy-D-glucitol 6-phosphate in the previously published structure of the MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex. There are several other conformational changes in NAD+ and the enzyme active site as well. Based on the new structural data, we propose a new and completely different mechanism for MIP synthase.
Inositol-containing compounds play critical and diverse biological roles, including signal transduction, stress response, and cell wall biogenesis (1-4). Though large quantities of inositol are available from the diet, significant biosynthesis of inositol has been detected in organs where a blood barrier exists, such as the testes and brain (5-9). In fact, reduction of the brain inositol pool by inhibition of myo-inositol (MI) monophosphatase has been suggested to be the mode of action for lithium in the treatment of bipolar disorder (10-13). Recent in vivo results in yeast (14) and Dictyostelium (15) suggest that Valproate, a drug used in the treatment of depression, bipolar disorder, and seizure disorder, may act by inhibition of MIP synthase, thus lowering neuronal inositol pools similar to the action of lithium (14). Regulation of inositol biosynthesis itself may, therefore, play an important role in the regulation of second messenger signaling.
MIP synthase is remarkably conserved in eukaryotes, with better than 45% identity from yeast to humans (3, 16-25). In all cases, the enzyme displays modest catalytic activity, with turnover numbers ranging from 3 to 13 μM/min/mg enzyme and with substrate Km values in the 100 μM to 1 mM range (3,17). The reaction path first proposed by Loewus et al. (shown in Fig. 1) remains the most likely (3, 26-29). A series of inhibitor studies indicates that the enzyme first binds the open acyclic tautomer of the substrate D-glucose 6-phosphate, followed by oxidation to the C5 keto intermediate (30). Frost and coworkers propose that enolization is promoted by proton abstraction by means of the substrate phosphate, which is consistent with the phosphate binding in a transoid conformation (30). They base this conclusion on the ability of 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate, a substrate mimic of the enzyme that fixes the phosphonate trans to C5, to strongly inhibit the enzyme, whereas the equivalent Z-mimic displays no affinity for the enzyme. 2-Deoxy-D-glucitol 6-(E)-vinylhomophosphonate is, in fact, the most potent inhibitor of the enzyme known, having a Ki of 0.6 μM. Intramolecular aldol cyclization followed by reduction of the C5-ketone completes the formation of the product. None of the intermediates have been isolated or trapped, suggesting that all intermediates are tightly bound and not released until the final reduction to myo-inositol 1-phosphate (31-33).
Crystal structures of several MIP synthase enzymes have been determined, including Saccharomyces cerevisiae MIP synthase partially occupied with NAD+ and bound to fully occupied NAD+ and an inhibitor, 2-deoxy-D-glucitol 6-phosphate (34). The inhibitor seemed to be bound in a relatively extended conformation, which is inconsistent with intramolecular aldol cyclization. Based on these data, a conformation that would be consistent with the cyclization was modeled, and a mechanism for the transformation was proposed. In addition, a possible location for an ammonium ion was identified, and it was proposed that the ammonium ion performed a function similar to that of a divalent cation in type II aldolases, stabilizing the developing negative charge on the enolate oxygen atom. This supposition was based on data showing that ammonium ions significantly activate S. cerevisiae MIP synthase relative to other ions (17). The structure of Mycobacterium tuberculosis MIP synthase bound to NAD+ has also recently been reported and a position for a Zn2+ proposed (35). However, the Zn2+ appears to be located between the amide of the nicotinamide and the nicotinamide phosphodiester on NAD+. Though apparently not in a position to be directly involved in catalysis, this Zn2+ bridges the NAD+ and may help to define the nicotinamide position in the active site. Recently, we have determined the structure of S. cerevisiae MIP synthase in the complete absence of NAD+ and in the presence of fully occupied NAD+ (36). We have also determined the structure of S. cerevisiae MIP synthase bound with NADH, phosphate, and glycerol (36). In both the apo and NAD+-bound structures, several active site residues were disordered. When the enzyme was bound with NADH, several dramatic conformational changes were observed. All active site residues became ordered in this structure, and the conformation of NADH changed significantly, with its nicotinamide ring moving more than 1 Å away from its position in the NAD+-bound structure. A possible divalent cation was also observed in a position similar to that seen in the M. tuberculosis structure, between the nicotinamide phosphodiester and amide. Two small molecules were bound in the enzyme active site in this structure, and a phosphate and a glycerol were modeled into these positions based on the shape and size of the electron density. However, the position of the phosphate was significantly different from that in the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex structure (34); it is completely inconsistent with the position of the inhibitor in the active site and therefore calls into question the mechanism proposed based on the previous structure. Additionally, the putative ammonium ion proposed in that structure is one of the ligands for the putative metal ion, which is inconsistent with its identification as an ammonium ion.
These structural results call into question virtually the entire mechanism proposed for S. cerevisiae MIP synthase based on the 2-deoxyglucitol 6-phosphate-bound structure. Major issues include the location of the substrate phosphate, and indeed of the entire substrate molecule during active catalysis, and the role of ammonium in stabilization of the enolate.
To better answer these and other questions regarding the mechanism of MIP synthase, we have determined the structure of S. cerevisiae MIP synthase in complex with NAD+ and the high-affinity inhibitor 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate (Ki = 0.67 × 10−6 M) at pH 5.5, a pH at which the enzyme still displays significant catalytic activity.
EXPERIMENTAL PROCEDURES
Crystallization, Data Collection, and Refinement-S. cerevisiae MIP synthase was purified as reported previously (34,37). Protein was treated with activated charcoal for 30 min at 4°C to remove cofactors. Crystals of apo S. cerevisiae MIP synthase were then grown by using the same condition used to grow the crystals of S. cerevisiae MIP synthase with partially occupied NAD+ and the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex (37). Crystals of apo MIP synthase were then soaked in a stabilizer containing 5% polyethylene glycol 8000, 0.1 M NaAc, pH 5.5, 1 mM NAD+, and 13.5 mM 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate for 24 h. For data collection, a single crystal was transferred to the cryo-protectant stabilizing solution (5% polyethylene glycol 8000, 0.1 M sodium acetate, pH 5.5, 30% glycerol) and flash-frozen. Data were collected at the synchrotron radiation source at the Industrial Macromolecular Crystallography Association Collaborative Access Team ID-17, Advanced Photon Source, Argonne National Laboratory. Diffraction data reduction and scaling were performed using HKL2000 (38). The structure was solved by using the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure (36) as an initial phasing model. The electron density maps were traced by using TURBO-FRODO, and multiple rounds of refinement were conducted by using CNS (39). Data collection and final refinement statistics are tabulated in Table I. The final refinement model contained residues 9-533 in molecule A and residues 9-464 and 472-533 in molecule B in the asymmetric unit, and it also contained 400 water molecules. The inhibitor was present and modeled into the active site of the A molecule. No electron density for the inhibitor was evident in molecule B, which is consistent with its less ordered active site. The final R factor and Rfree are 18.8 and 24.4%, respectively. Only 3 of 1041 residues are in the disallowed region of the Ramachandran plot (Asp-319 in molecule A, Asp-320 in molecules A and B), as evaluated by PROCHECK (40).
Site-directed Mutagenesis and Enzyme Kinetic Assay-S. cerevisiae MIP synthase single mutants K369A, K412A, and K489A were constructed by using the QuikChange protocol (Stratagene). The primers used were: K369A, ATTTAGGTCTGCAGAGATTTCCAAA; K412A, GTCGGGGACTCAGCAGTGGCAATGGA; and K489A, AGTTACTGGTTAGCAGCTCCATTAA. Mutant enzymes were over-expressed and purified identically to the wild-type enzyme. For the activity assay, the enzyme was incubated in the assay solution containing 50 mM Tris, 2 mM NH4Cl, 0.2 mM dithiothreitol, pH 7.7, with 1.5 mM NAD+ and various concentrations of the substrate D-glucose 6-phosphate. Reactions were monitored over a time period of 20 min and stopped by adding a 20% trichloroacetic acid solution to the reaction mixture. The reaction mixture was then incubated with 0.2 M NaIO4 for one hour at 37°C and quenched by the addition of 1.5 M Na2SO3. Released inorganic phosphate was determined by the colorimetric method of Ames (41).
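The following minimal sketch (our own illustration with placeholder numbers, not from the paper) shows how Km and Vmax could be estimated from initial-rate data such as those produced by the phosphate-release assay described above:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Placeholder substrate concentrations (mM) and initial rates; the actual
# assay readings are not reported in the text.
s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
v = np.array([0.9, 1.6, 2.9, 4.0, 5.1, 5.9])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(6.0, 0.5))
print(f"Vmax = {vmax:.2f}, Km = {km:.2f} mM")
```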
Molecular Modeling-All energy minimization calculations were performed by using the Insight II version 2000 software package (Molecular Simulations Inc., San Diego, CA). X-ray coordinates of the MIP synthase and 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate were used to produce the starting coordinates of the substrate D-glucose 6-phosphate and the final reaction intermediate myo-2-inosose 1-phosphate. The structures were parameterized by using the Discover/Insight II Extensible Systematic Force Field (ESFF) and incorporating the partial charges calculated for both substrates and the reaction intermediate by an electrostatic fitting procedure. Explicit hydrogens were added, and the amino acid residues were protonated so as to be consistent with the experimental pH value. Energy minimization was performed until the atomic root-mean-square derivative reached its minimum.
RESULTS AND DISCUSSION
The Overall Structure of MIP Synthase-The overall structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex is similar to that of the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex (Fig. 2A; Ref. 36). The root-mean-square deviation between the two structures was only 0.49 Å. All of the active site residues are ordered in both molecules in the asymmetric unit, and the side chains of the active site residues overlap with those of the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure very well. Cys-436 is the only exception, because its Cα and side chain sulfur atoms moved 2.4 Å and 3.5 Å, respectively, away from their positions in the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure. In fact, the position of Cys-436 is identical to that of the apo, NAD+-bound, and 2-deoxy-D-glucitol 6-phosphate-bound structures (34,36).
The Conformation of NAD+-The conformation of NAD+ is very similar to that of NADH in the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure, differing significantly from that of the NAD+-bound and the NAD+-2-deoxy-D-glucitol 6-phosphate-bound structures. However, the distance between the amide and the phosphodiester is a bit closer than that of the MIP synthase-NADH-phosphate-glycerol structure (3.2 Å versus 3.8 Å) (Fig. 2B). The putative divalent cation is present in this structure as well, but the fourth ligand seen in the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure (a water molecule) is not present. The fourth ligand is now Ser-439 O, identical to the coordination of the Zn2+ in the M. tuberculosis MIP synthase structure. Interatomic distances between the putative divalent cation and its four ligands are 2.26, 2.58, 2.52, and 2.83 Å, respectively; bond angles are 95.0, 104.7, 92.01, and 112.8°. The conclusion to be drawn from these observations is that divalent cation binding is variable in S. cerevisiae MIP synthase, at least at low cation concentration. It also correlates with a well folded active site, as metal binding is only seen in structures of the S. cerevisiae MIP synthase when the active site is fully ordered. Therefore, this conformation of the cofactor probably represents the enzyme in its active state. Also, the conformational change of the cofactor is not solely due to the difference in the charge of the cofactor, as the present structure contains NAD+, whereas NADH is bound in the previous structure.
The Structure of the Inhibitor 2-Deoxy-D-glucitol 6-(E)-Vinylhomophosphonate-The inhibitor 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate is bound in the enzyme active site in an extended conformation, as shown in the 1.8 σ simulated annealing omit map in Fig. 3A. The phosphonate group is in a transoid conformation relative to the inhibitor carbon backbone, fixed by the double bond between the phosphonate carbon and C6. The distance from the inhibitor C5 to nicotinamide C4 is 3.8 Å, a bit long for a direct hydride transfer. The inhibitor molecule is well nestled within the enzyme active site by hydrogen bond interactions with the active site residues (Fig. 3B). The phosphonate moiety is in an identical position to that of the phosphate in the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure (Fig. 2B) but is different from the position of the phosphate moiety of the inhibitor in the previously published S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex structure (34). In fact, the entire inhibitor molecule is rotated end to end, with the phosphate on the opposite side of the active site relative to the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex structure (Fig. 4A). The phosphonate makes hydrogen bonds with the main chain nitrogen atoms of Ser-323, Gly-324, Gln-325, and Thr-326. This motif is conserved among eukaryotes, but not in prokaryotes (it is SQVGAT in M. tuberculosis and TGET in Archaeoglobus fulgidus). Conserved lysine residues Lys-412 and Lys-373 also make hydrogen bonds with the phosphonate oxygens. All of the hydroxyl groups of the inhibitor except O1 make hydrogen bonds with side chains of conserved active site residues. There are hydrogen bonds between O3 and both Asp-356 OD1 and Lys-369 NZ; O4 and Asp-438 OD2; O5 and both Lys-369 NZ and Lys-489 NZ; O1P and Ser-323 N, Gly-324 N, and Gln-325 N; O2P and Gly-324 N, Gln-325 N, and Lys-412 NZ; and O3P and Thr-326 N, Thr-326 OG1, and Lys-373 NZ. All of these residues are absolutely conserved among eukaryotes from yeast to human. Residues Asp-356, Lys-369, Lys-373, Lys-412, Asp-438, and Lys-489 are also conserved among the MIP synthases from A. fulgidus and M. tuberculosis. Fig. 3B depicts all of the interactions between the inhibitor molecule and the active site residues. It is important to note that the putative divalent cation chelates the water molecule as part of a hydrogen bond network that holds O3 and O4 of the inhibitor in their positions. However, the divalent cation position is quite far from O5, on the opposite side of the active site, and seems not to be directly involved in the catalytic mechanism of the enzyme, ruling out a type II, metal-mediated aldol cyclization mechanism. The water molecule that is chelated to this metal is also on the opposite side of the active site but plays an important role in stabilizing residues in the active site (Asp-356 and Asp-438), both of which make hydrogen bonds to the hydroxyl groups of the inhibitor (Fig. 3B). Given the sequence similarity of the A. fulgidus MIP synthase to our S. cerevisiae MIP synthase enzyme, we conclude that this metal ion will likely be present in the A. fulgidus MIP synthase (17), though given the sensitivity of the Archaebacteria enzyme toward EDTA and metal ions, the possibility of a second metal ion in or near the active site cannot be ruled out. The surprising result is the apparent lack of a requirement for the divalent cation in S. cerevisiae MIP synthase, because our structures together indicate that the nicotinamide is improperly positioned for catalysis in the absence of the divalent cation, as shown in our structures of the NAD+-bound enzyme and the EDTA-treated NADH-bound enzymes (36). Clearly, more detailed analysis of the cationic requirements of S. cerevisiae MIP synthase is necessary to answer this question.
The constellation of new data from the S. cerevisiae MIP synthase-NAD+ complex, the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex, and the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex structures leads to the conclusion that the previous modeling of the inhibitor 2-deoxy-D-glucitol 6-phosphate is not consistent with substrate binding at neutral pH and that the mechanism proposed must, therefore, also be revised (34). Several lines of evidence support this conclusion. First, the phosphonate moiety of the present inhibitor occupies an identical position to that of the phosphate in the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure (Fig. 2B), which is quite different from that of the inhibitor in the previously published S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate structure (Fig. 4A; Ref. 34). Second, as shown in Fig. 4B, an analysis of the surface charge distribution of the substrate-binding site indicates that the phosphate-binding pocket is positively charged for encapsulation of the substrate. There must be strong polar interactions between the substrate and the substrate-binding site of MIP synthase at physiological pH. In both the S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex structure and the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex structure, the phosphate moieties are located in this positively charged region. However, in the previously published structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex, the phosphate is located on the opposite side of the active site, where the surface is, in fact, negatively charged. Third, the structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex was determined at pH 5.5, closer to the optimal pH (7.2-7.7) than pH 4.5, at which the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex structure was determined. MIP synthase activity assays at different pH values indicate that, at pH 4.5, the enzyme has very low activity, but at pH 5.5, the enzyme recovers about 50% of its optimum activity (Fig. 5). Finally, in the structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex, the conformations of the substrate-interacting residues that differ from the previous S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex structure (34) agree well with those of the recently reported structure of M. tuberculosis MIP synthase (35), especially in the region surrounding the phosphate-binding pocket. Specifically, Ser-323 is flipped into the active site in the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex structure (Fig. 4A), effectively blocking phosphate binding, whereas it is flipped out in the M. tuberculosis MIP synthase, S. cerevisiae MIP synthase-NADH-phosphate-glycerol complex, and S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex structures, allowing room for phosphate binding. This strongly indicates that Ser-323 belongs in the "flipped out" conformation when the enzyme is binding substrate, as we see in the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex structure.
Modeling of the Substrate and Reaction Intermediates-As described above, the inhibitor 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate is bound in the active site of the enzyme in an extended conformation and is incapable of cyclization (Fig. 3). When the substrate is modeled in the same conformation in the active site, a steric collision occurs between the 2-hydroxyl group and the residue Leu-360 in the active site (the distance from the 2-hydroxyl oxygen to the Leu-360 carbon is 2 Å). This indicates that the substrate must bind differently in the active site than does the inhibitor in the vicinity of C1 and C2 (42). Based on the location and conformation of the inhibitor 2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate, the substrate D-glucose 6-phosphate was modeled in a conformation consistent with cyclization. The phosphate portion was overlaid onto that of the inhibitor molecule with O6 at the position of the inhibitor C7; C6, C5, and C4 were overlaid onto the inhibitor C6, C5, and C4, respectively. Energy minimization using the program Insight II was performed to model the rest of the substrate in a minimum energy conformation. The result of the modeling was such that none of the backbone atoms and hydroxyl oxygen atoms would collide with the side chains of active site residues (Fig. 6A). The result of this modeling provided abundant information regarding potential interactions between the substrate in its pseudocyclic conformation and the enzyme active site. All but O2 and O3 of the substrate hydroxyl groups make hydrogen bonds with active site residue side chains. Fig. 6B depicts most of the interactions seen. The final reaction intermediate, myo-2-inosose 1-phosphate, was also modeled, and its energy minimized in the active site (Fig. 6C). This cyclic intermediate makes an additional hydrogen bond between O3 and Asp-438, which contributes to the stabilization of the cyclic conformation.
Proposed Mechanism of MIP Synthase-The structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex described above is inconsistent with the mechanism proposed previously (34), which was based on the structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-phosphate complex. Based on the new structure of the S. cerevisiae MIP synthase-NAD+-2-deoxy-D-glucitol 6-(E)-vinylhomophosphonate complex and the modeling of the substrate and reaction intermediates, a new mechanism can be proposed (Fig. 7).
In the first step, the substrate is oxidized at C5 by NAD+. This involves direct hydride transfer from C5 of D-glucose 6-phosphate to C4 of the nicotinamide moiety of NAD+; this is consistent with the present crystal structure, where the nicotinamide is located in a suitable orientation for hydride transfer to occur. In concert, a proton is lost from the C5 hydroxyl group of D-glucose 6-phosphate. This proton can be transferred to the Lys-369 terminal nitrogen atom, which is 2.8 Å away from the substrate O5. Asp-320, adjacent to Lys-369, could then accept this proton in a proton-shuffling system.
The second step is the enolization. During the enolization, the pro-R hydrogen of C6 is eliminated (43). From the crystal structure, either the phosphate monoester or Lys-489 may act as the base at the enolization step. The distance between C6 and the phosphate oxygen is 3 Å, and between Lys-489 NZ and C6 is 3.4 Å. The phosphate monoester acting as the base at the enolization step has precedent in the dehydroquinate synthase mechanism (44,45). This hypothesis requires the phosphate to be in a transoid conformation relative to the carbon backbone of the substrate, which is consistent with the present structure (Fig. 3, A and B). From the modeling of the 5-keto-D-glucose 6-phosphate intermediate in the active site, Lys-489 is also in a suitable position to remove the pro-R hydrogen of C6. The developing negative charge on the enolate oxygen is stabilized by two lysines, Lys-369 and Lys-489.
In the aldol condensation step, the phosphate could transfer the proton abstracted from C6 to O1. Lys-412 could also transfer a proton to O1 in the aldol cyclization step; the developing negative charge on O1 would then be stabilized by Lys-373. From the crystal structure and the substrate modeling, a type I aldolase mechanism in which Lys-369 or Lys-489 forms a Schiff base with C5 of the substrate also cannot be ruled out. However, Schiff base formation would require significant conformational changes of either the enzyme main chain or the substrate, because neither Lys-369 nor Lys-489 can reach C5 from its present position in our structure.
The last step is the reduction by NADH. The hydride that was transferred in the first step to the nicotinamide C4 returns to the C5 of the intermediate myo-2-inosose 1-phosphate. Using the same proton-shuffling system, a proton could then be transferred to the C5 ketone oxygen from Asp-320, via Lys-369. It is important to realize that our mechanistic proposal is based largely on our inhibitor-bound structure and not our modeling, as the C5-O5 bond is already oriented properly for hydride transfer from NAD+. On the other hand, our hypotheses regarding activation of the O1 aldehyde by Lys-412 and Lys-373 are based on our modeling in the active site, though it is important to realize that both Lys-373 and Lys-412 are absolutely conserved in all MIP synthase sequences and that our mutagenesis results are consistent with this hypothesis in that the K412A mutation is completely inactive.
Mutagenesis-To verify the importance of residues Lys-369, Lys-412, Lys-489 in the mechanism, we have mutated these three lysine residues to alanine and investigated the activity of these mutants. All three mutations have resulted in complete loss of activity, as none of these mutants was able to turn the substrate over detectably. This result strongly suggests that these lysine residues indeed play very important mechanistic roles.
Though the mechanism described above is reasonable, it is by no means definitive. Verification of the mechanism proposed above requires further structural investigations of MIP synthase in complex with various structural analogues of reaction intermediates. At this point, there is still not enough data regarding whether the substrate binds in its cyclic form, followed by ring opening catalyzed by the enzyme, or binds in its acyclic form, which constitutes less than 0.4% of D-glucose 6-phosphate in solution. It is particularly important to produce a structure of a product-like inhibitor, which is already cyclized, to validate our modeling of the cyclic conformation in the active site.
Pitfalls in evaluating FDG‐PET/CT results in melanoma patients – A case series
Dear Editors, melanoma is the leading cause of skin cancer related deaths. Positron emission tomography/computed tomography (PET/CT) with 2-[18F]fluoro-2-deoxy-D-glucose (FDG) has the highest sensitivity in detecting extracerebral distant melanoma metastases. As FDG lacks specificity and marks all tissues with high glucose uptake, false positive results are not unusual.1 A case series of patients from the Dermatological Clinic at Biederstein with advanced melanoma and benign FDG-positive lesions was compiled to raise awareness of false positive findings, as FDG-PET is not comprehensively available to patients in Germany due to regulatory restrictions.2

Case 1: A 62-year-old female melanoma patient (submammary left, Breslow thickness 6.4 mm, positive sentinel lymph node [SLN]) received adjuvant interferon-α therapy. Eighteen months later, PET/CT revealed FDG-avid hilar lymph nodes, suspicious of metastatic disease (Figure 1a). Serum levels of S100 and angiotensin-converting enzyme were in the normal range; lactate dehydrogenase and soluble interleukin-2 receptor were elevated. Interferon-gamma release testing for tuberculosis was positive. An endobronchial lymph node biopsy showed noncaseating epithelioid granulomas, leading to the diagnosis of sarcoidosis (Figure 2a).

Case 2: A 34-year-old male melanoma patient (right shoulder, Breslow thickness 0.8 mm) received adjuvant IFNα after resection of a lymph node metastasis in the right axillary region. One year into the treatment, routine PET/CT revealed a cutaneous FDG-avid lesion in the right axilla suspicious of metastasis; histopathology showed a foreign body reaction after surgery (Figure 2b).

Case 3: A 41-year-old woman with a history of two melanomas, of the right neck (Breslow depth 1.5 mm, positive cervical right SLN) and, six years later, of the right subscapular region (ulcerated, Breslow thickness 1.2 mm, positive SLN in the right axilla), was diagnosed with two new FDG-avid lymph nodes on the right side of the neck in follow-up PET/CT (Figure 1c). Serum S100 was elevated, LDH was normal. Extirpation of both lymph nodes revealed reactive lymphadenopathy without malignancy (Figure 2c).
Case 4: A 78-year-old man was diagnosed with an ulcerated nodular melanoma of the left thoracic paravertebral region (Breslow thickness 5.1 mm, negative SLN). PET/CT revealed a nodal structure of the right parotid gland with intensive FDG uptake (Figure 1d). On histopathological examination, the diagnosis of papillary cystadenoma lymphomatosum (Warthin's tumor) was made (Figure 2d).
The specificity of PET/CT in melanoma patients for the detection of metastases is approximately 90% and varies depending on primary or follow-up imaging and on the stage of the disease. False positive results are more common in lower disease stages; hence, the diagnostic value of PET/CT in melanoma patients is most accurate from stage IIC upward.2 However, detailed discussions of the underlying causes of false positive results are sparse,3 and the lack of awareness in everyday clinical practice tempted us to compile this case series. Sarcoidosis is a systemic granulomatous disease, which often manifests in bihilar lymphadenopathy with pulmonary infiltration. Antineoplastic treatment (for example, IFN, checkpoint inhibitors [CPI]) can trigger or exacerbate sarcoidosis and sarcoid-like reactions.4 It is challenging to differentiate between sarcoidosis and malignant lesions in PET/CT alone due to similar average standardized uptake values.5 Figure 2a exemplifies glucose uptake by GLUT-1 staining in the corresponding lesions. The false positive PET/CT result in the second case report was caused by a foreign body reaction after surgery. As activated inflammatory cells have a higher glucose uptake than resting cells (Figure 2b), acute or chronic inflammatory reactions are difficult to differentiate from tumor progression in PET/CT.6 In line with this, reactive lymph nodes may also lead to false positive results. Viral or bacterial infections induce an adaptive immune response with the activation of B and T cells, causing a higher glucose uptake (Figure 2c).7 Warthin's tumor (Case 4), a benign tumor of the parotid gland,8 may be visible in PET/CT due to the enhanced glucose uptake of the proliferating oncocytes (Figure 2d). Focal FDG uptake in the parotid gland should raise the suspicion of Warthin's tumor as a differential diagnosis. In the era of CPI therapy, pseudoprogression as a cause of false positive PET/CT results in melanoma patients is an important consideration: Tumor invasion by T cells as a response to immunotherapy leads to the impression of disease progression in imaging. Furthermore, the side effects of immunotherapy (for example, thyroiditis, colitis, adrenalitis, hepatitis) can lead to errors in the response assessment.9 New assessment criteria such as PECRIT or PERCIMT were introduced in recent years so as not to misinterpret this particular response pattern.10 In summary, there is a risk of false positive FDG-PET/CT results, especially in oncological patients, as clinicians may have a biased focus on metastatic disease. To ensure correct interpretation, it is necessary to be aware of the possibility of false positive results and the causes behind them. We encourage biopsy sampling when FDG-PET/CT results are unclear or contradict clinical findings.
ACKNOWLEDGEMENT
We thank Mona Mustafa (Technical University of Munich, Department of Nuclear Medicine) and Yize Zhuwu for their contribution to the selection of the PET/CT images and their helpful perspectives.
Open access funding enabled and organized by Projekt DEAL.
CONFLICT OF INTEREST
None.
Multiresolution Forecasting for Industrial Applications
The forecasting of univariate time series poses challenges in industrial applications if the seasonality varies. Typically, a non-varying seasonality of a time series is treated with a model based on Fourier theory or with the aggregation of forecasts from multiple resolution levels. If the seasonality changes with time, various wavelet approaches for univariate forecasting have been proposed with promising potential, but without accessible software or a systematic evaluation of different wavelet models compared to state-of-the-art methods. In contrast, the advantage of the specific multiresolution forecasting proposed here is the convenience of a swiftly accessible implementation in R and Python, combined with coefficient selection through evolutionary optimization, which is evaluated in four different applications: scheduling of a call center, planning electricity demand, and predicting stocks and prices. The systematic benchmarking is based on out-of-sample forecasts resulting from multiple cross-validations with the error measures MASE and SMAPE, for which the error distribution of each method and dataset is estimated and visualized with the mirrored density plot. The multiresolution forecasting performs equal to or better than twelve comparable state-of-the-art methods but does not require users to set parameters, contrary to prior wavelet forecasting frameworks. This makes the method suitable for industrial applications.
Introduction
Seasonal time series forecasting with computers had early success with seasonally adjusted methods as proposed in 1978 [1] or with the X-11 method [2]. Over the years, improved versions, such as X-13 [3], various variants of such models, and new techniques in the area of statistical models were developed [4], and a new field in computational intelligence dealing with seasonal time series forecasting arose [5]. Recently, a method based on Fourier analysis was introduced [6]. Fourier analysis can be used for estimating seasonal components in a time series. The frequency content of a time series over its complete range can be approximated with Fourier analysis. Varying seasonality poses a problem for forecasting methods that assume a non-varying seasonality, because Fourier analysis gives only a representation of the frequency content of a time series and does not relate frequency patterns to specific time intervals [7]. For example, the relationship of high frequency peaks to a specific time interval cannot be captured by Fourier analysis [7]. Different frequencies can be captured with Fourier analysis as part of the whole time series, but without a specific relation to time intervals. The recognition of high frequencies is constrained by the sample rate (Nyquist-Shannon sampling theorem [8]), whereas for low frequencies, it is constrained by the total length of the time series itself.
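To illustrate this limitation, the following minimal sketch (our own, not part of the paper) builds two series containing the same two frequencies in opposite temporal order; their Fourier magnitude spectra coincide, so the spectrum alone cannot reveal when each seasonal pattern was active:

```python
import numpy as np

t = np.arange(0, 10, 0.01)   # 10 s sampled at 100 Hz
half = len(t) // 2

# Signal A: a 5 Hz season in the first half, a 20 Hz season in the second.
a = np.concatenate([np.sin(2 * np.pi * 5 * t[:half]),
                    np.sin(2 * np.pi * 20 * t[half:])])
# Signal B: the same two seasons in reversed temporal order.
b = np.concatenate([np.sin(2 * np.pi * 20 * t[:half]),
                    np.sin(2 * np.pi * 5 * t[half:])])

spec_a = np.abs(np.fft.rfft(a))
spec_b = np.abs(np.fft.rfft(b))
# The magnitude spectra agree up to floating-point error, although the
# seasonal patterns occur at different times.
print(np.max(np.abs(spec_a - spec_b)))
```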
In contrast, wavelet analysis uses short time intervals for capturing high frequencies in order to relate them to specific time intervals (localization in time) more precisely than Fourier analysis. Low frequencies are treated with longer time intervals (compared to the short intervals used for high frequencies). The contributions of this work are as follows:

1. An open-source and application-oriented wavelet framework combined with an automatic model selection through differential evolutionary optimization with standardized access in Python and R.
2. Contrary to prior works, a systematic comparison of state-of-the-art methods and open-source accessible seasonal univariate forecasting methods to our framework.
3. Wavelet forecasting performs equally well on short-term and long-term forecasting.
This work is structured as follows. Section 2 explains the forecasting setting and presents the multiresolution forecasting method for which the software is available on pypi and CRAN [17,18]. The cross validation designed for time series forecasting is outlined, datasets are introduced, the quality measures are defined and justified, and an estimation and visualization technique for the distribution is presented. Section 3 shows the results with a scaled quality measure used for benchmarking. The results with a relative error are in Appendix B. Section 4 discusses the results and Section 5 closes with a conclusion.
Materials and Methods
Section 2.1 outlines the selection of open-source available time series forecasting algorithms. Section 2.2 explains an adaptation of cross validation for out-of-sample forecasts, which is used for the computations whose results are presented in Section 3. Section 2.3 illustrates the datasets which are used for evaluation and outlines their industrial application. Sections 2.4 and 2.5 define the quality measures used here to evaluate the forecast errors computed with the procedure of Section 2.2. Section 2.6 presents an estimation and visualization technique for probability density functions, which is used to analyze the samples obtained with the two quality measures from Sections 2.4 and 2.5. Section 2.7 presents the multiresolution method.
Related Work
The focus of this work is the forecasting of seasonal univariate time series. Therefore, open-access forecasting methods are selected that are specially designed for dealing with seasonality (periodic patterns), independently of the classification into short- and long-term methods. Although, in the general case, different forecast horizons require different methods [41], Fourier decomposition of time series allows methods such as Prophet to be used in both cases [6]. Moreover, short- and long-term forecasting strategies usually depend strongly on the resolution of the time series [42,43], whereas periodic patterns do not necessarily depend on the resolution. Hence, the performance of all methods over all 14 horizons is computed and grouped by horizon one versus horizons above one (multi-step). Implicitly, the performance of the one-step forecasts represents a short-term evaluation, and the multi-step forecasts represent a long-term evaluation. In both cases, the distribution is estimated and visualized separately with the MD plot [44]. The performance of the method is evaluated with rolling forecasting origins [34] on multiple datasets, competing against 10 other forecasting techniques. To the knowledge of the authors, no systematic benchmarking of wavelet forecasting has been performed so far. The 12 forecasting techniques were selected according to their potential for dealing with seasonal time series and their open-source accessibility.
As it is a common issue in data science that published techniques are not implemented (e.g., swarm-based techniques [45]), this work focuses on methods for which open-source implementations are provided through either the Comprehensive R Archive Network (CRAN) or the Python Package Index (pypi). The current state of the art, according to accessible open-source implementations for time series forecasting in Python and R, contains the following techniques: ARIMA, cubic spline extrapolation, decomposition models, exponential smoothing, Croston, MAPA, naïve/random walks, neural networks, Prophet, and the theta method. Two automatized forecasting methods are used to represent the current state of the art for ARIMA models: the first one is RJDemetra, an ARIMA model with seasonal adjustment according to the "ESS Guidelines on Seasonal Adjustment" [46], available from the National Bank of Belgium, using the two leading concepts TRAMO-SEATS+ and X-12ARIMA/X-13ARIMA-SEATS [3], referred to as "SARIMA" (or short "SA") [47]; the second is an automatized ARIMA referred to as "AutoARIMA" (or short "AA") [48,49]. Modeling ARIMA for time series forecasting follows an objective and thus can be completely automatized by optimizing an information criterion, for which AutoARIMA and SARIMA are two different approaches [48]. Cubic spline extrapolation is a special case of the ARIMA (0,2,2) model [50] and is represented by the ARIMA models. Croston is not used since it only provides one-step forecasts [51]. The Multi Aggregation Prediction Algorithm ("MAPA") uses exponential smoothing as a forecasting technique on multiple resolution levels, which are recombined into one forecast on one specified resolution level [9,10,52]. Since the combination of forecasts tends to yield better results [53], MAPA represents the exponential smoothing methodology. The naïve method or random walk is incorporated in the quality measure used in the results section. In order to represent neural networks, a feed-forward neural network ("NN"), a multilayer perceptron ("MLP"), and a long short-term memory ("LSTM"), each with one hidden layer, are used, since neural networks were recommended as robust forecasting techniques if they have at least one hidden layer [54]. "Prophet" is a decomposition model using Fourier theory, specially designed for seasonal time series forecasting [6,55]. Therefore, no other decomposition model is used besides Prophet. Forecasts with the theta method "are equivalent to simple exponential smoothing with drift" [56] and are here represented by MAPA. Furthermore, XGBoost ("XGB") [57] is included since it was recommended as a robust algorithm for general machine learning tasks [58].
There are two related forecasting frameworks using wavelets [28,30], proposed independently of each other and so far not compared to each other. Both methods are based on the redundant Haar wavelet decomposition. In both methods, the wavelet scales and the last smooth approximation level are forecasted separately on each level. Ref. [30] incorporates artificial neural networks for that purpose, whereas [28] incorporates an ARIMA framework, in contrast to our method, which uses regression optimized with least squares [59]. In these two comparable methods, the final forecasts are obtained by exploiting the additive nature of the reconstruction of the redundant Haar wavelet decomposition. Thus, a forecast is created by forecasting each level of this decomposition separately and by reconstructing the time series value (forecast) from them. Their methodology is opposed to the approach we take in this work. However, both methods do not provide a framework for model selection. Instead, the parameters, such as the wavelet levels, have to be specified by the user. Hence, in this study, their parameters remain in the default settings specified by the documentation of the packages, i.e., they require parameters indicating the number of decomposition levels ("Waveletlevels"), a boundary condition ("boundary"), the maximum non-seasonal order ("nonseaslag"), and the maximum seasonal order ("seaslag"). Such parameters are not necessary in our proposed multiresolution framework. In the following sections, the wavelet method using an artificial neural network is denoted as "MRANN", whereas the method using ARIMA is denoted as "MRA". The abbreviation "MR" stands for "multiresolution".
Rolling Forecast
A specific cross-validation approach to compute out-of-sample forecasts is explained as follows. The data are repeatedly split into training and evaluation sets. The training data are used for fitting a model, and the evaluation set is used for model evaluation. A cross validation is performed by dividing the data multiple times. However, time series data cannot be split arbitrarily, because the training set always consists of observations occurring in time prior to the evaluation set. Fitting a forecasting method to the data yields a forecasting model. Suppose k values are required for creating a forecasting model. A cross validation for time series forecasting known as "rolling forecasting origin" with a forecast horizon h is obtained as follows:

1. Split the time series into a training set (the first k + i − 1 observations) and an evaluation set (the remaining observations).
2. Fit the forecasting method on the training dataset and compute the error of the resulting forecasting model for time point k + h + i − 1 on the evaluation dataset.
3. Repeat the above steps for i = 1, ..., t − k − h + 1, with t being the total length of the time series.
4. Compute a quality measure based on the errors obtained from the previous steps.
This procedure is applied for the general case of multi-step forecasts; for one-step forecasts, h = 1 is applied. The model is trained and evaluated multiple times in order to determine its ability to generalize. The large set of out-of-sample forecasts ensures a more accurate total error measurement and sufficient samples for statistical analysis. The forecasts in Section 3 are computed with a rolling forecasting origin for forecast horizons from h = 1 to h = 14 for the last 365 days. Hence, this work assumes that, on the investigated datasets, the longest seasonality is not longer than a year. A sketch of the procedure is given below.
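The following is a minimal Python sketch of the rolling forecasting origin under the notation of the list above; the function name rolling_origin_errors and the naive example forecaster are our own illustration, not part of the published framework.

```python
import numpy as np

def rolling_origin_errors(y, k, h, fit_predict):
    """Out-of-sample errors with a rolling forecasting origin.

    k: initial training length, h: forecast horizon; fit_predict(train, h)
    must return the h-step-ahead forecast for the given training series.
    """
    t = len(y)
    errors = []
    for i in range(1, t - k - h + 2):               # i = 1, ..., t - k - h + 1
        train = y[: k + i - 1]                      # observations 1 .. k + i - 1
        forecast = fit_predict(train, h)            # the model is refitted here
        errors.append(y[k + h + i - 2] - forecast)  # time point k + h + i - 1
    return np.asarray(errors)

# Example with the naive method, which predicts the last observed value.
rng = np.random.default_rng(0)
y = np.sin(np.arange(400) * 2 * np.pi / 7) + rng.normal(0, 0.1, 400)
errors = rolling_origin_errors(y, k=100, h=1,
                               fit_predict=lambda train, h: train[-1])
print(len(errors), np.mean(np.abs(errors)))
```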
The rolling forecast uses a training set to adapt the parameters of a specific forecasting method to create a model and an evaluation set to estimate the model's forecasting performance. In order to obtain a supposedly best model, the choice of the parameters can be estimated with criteria based on the training data. One possibility is to split the training dataset a second time, as explained above for the rolling forecast, and to determine the model with the best out-of-sample forecast on the test dataset, which can be called model selection. The model selection can either be done once in order to determine one global model for the complete rolling forecast on the evaluation dataset, or it can be done in each iteration step of the rolling forecast in order to obtain a dynamic model selection approach. Let the two model selections be named fixed and dynamic parameter selection, respectively. The dynamic parameter selection is used in the automatized forecast techniques AutoARIMA [48] and SARIMA [47]. The fixed parameter selection is used for the four methods: the multiresolution method with regression (MRR) and with a neural network (MRNN), MAPA, and Prophet. The neural networks MLP and LSTM and the gradient boosting machine XGBoost (each with input size 3) are fitted on the training data and tested on the test set without any selection, since they are recommended as robust methods.
Datasets
There are four datasets used in this work. All are common use cases for time series forecasting in recent literature [12,60-62]. The datasets treat electric load demand, stocks, prices, and calls arriving in a call center. The European Network of Transmission System Operators for Electricity time series describes the daily load values in Germany, covering a time range from 2006 to 2015. The time series is in an hourly format per country and was aggregated to a daily basis. It contains 3652 data points. This time series has very strong seasonal components; therefore, a Fourier-based approach is expected to yield positive results. The Stocks time series contains stock values for the corporation key American Airlines from Standard and Poor's (SAP 500). It ranges from the start of 2013 to June 2017, containing 1259 data points. The Scandinavian Electricity Price time series provides hourly prices per Scandinavian country. The system price is chosen here. The data range from the start of 2013 to the start of 2016, containing 2473 data points. The call center data provide the number of issues in a call center per day. The observations reach from 1 April 2013 to 15 December 2018, containing 2082 data points. This time series has very strong seasonal components, comparable to the Electricity dataset. Therefore, a Fourier-based approach is expected to yield positive results. The time series are called, in order of appearance, "Electricity", "Stocks", "Prices", and "Callcenter". The Electricity and Callcenter datasets are seasonal time series. The Prices dataset has varying seasonal components. External influences on electricity prices cause varying "seasonality at the daily, weekly and annual levels and abrupt, short-lived and generally unanticipated price spikes" [43]. The Stocks dataset is based on a stock time series, and it can be assumed that future stock prices depend largely on external information not included in the data. Moreover, according to the Random Walk Theory, stock time series result from a purely stochastic process [63]. The time series all have a daily resolution, and there are no missing values. The datasets were taken from the web, with the exception of the call center data, which were provided by Tino Gehlert. Electricity can be found at [64]. The Stocks dataset is part of the SAP500 and can be found at [65]. Prices (Scandinavian Electricity Prices) can be found at [66].
Mean Absolute Scaled Error
The Mean Absolute Scaled Error (MASE) was proposed by [35]. Each forecast error is scaled by the mean absolute error (MAE) of the in-sample one-step forecasts created with the naïve method, yielding the absolute scaled error

$$q_i = \frac{|y_i - \hat{y}_i|}{\frac{1}{t-1}\sum_{j=2}^{t} |y_j - y_{j-1}|}. \qquad (1)$$

In the case of seasonal data, the seasonal naïve method is used, and the terms in the denominator change from $|y_j - y_{j-1}|$ to $|y_j - y_{j-p}|$, where p denotes the strongest period of the time series. The strongest period is evaluated by computing the MAE for all periods and selecting the argument of the minimum. In this work, the naïve method was used to scale the MASE on the datasets Prices and Stocks, and the seasonal naïve method was used in the case of the datasets Electricity and Callcenter. Positive and negative forecast errors are penalized equally. Values greater than one indicate worse out-of-sample one-step forecasts compared to the average one-step naïve forecast computed in-sample, whereas values smaller than one indicate better forecasts. Hence, MASE divides methods into outperforming and underperforming ones. The computation of the MASE results in a quality measure independent of the data scale and thus enables a forecast error comparison between different methods and across different time series. The only case in which the computation of MASE is critical, i.e., produces infinite or undefined values, is when all time points in the data are identical. For multi-step forecasts, the absolute scaled error from Equation (1) is applied to each forecast within the horizon. The Mean Absolute Scaled Error is the mean of the scaled errors defined in Equation (1):

$$\mathrm{MASE} = \frac{1}{n}\sum_{i=1}^{n} q_i. \qquad (2)$$

In this work, the MASE is used to provide a quality measure to compare different methods across different datasets. The scaling is the main reason for the choice of this measure. Furthermore, the scaling with the naïve method provides a benchmark against a simple method. MASE scales the forecast errors so that the performance of each method can be compared with all other performances of methods across different datasets.
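A minimal sketch of the MASE as described above, assuming the standard definition from [35]; the parameter p selects the (seasonal) naïve scaling, with p = 1 for the non-seasonal case:

```python
import numpy as np

def mase(y_train, y_true, y_pred, p=1):
    """Mean Absolute Scaled Error; p = 1 uses the naive scaling,
    p > 1 the seasonal naive scaling with period p."""
    y_train = np.asarray(y_train, dtype=float)
    # In-sample MAE of the (seasonal) naive method as the scaling factor.
    scale = np.mean(np.abs(y_train[p:] - y_train[:-p]))
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / scale

# Values > 1: worse than the in-sample (seasonal) naive forecast; < 1: better.
```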
Symmetric Mean Absolute Percentage Error
Ref. [67] proposed the Symmetric Mean Absolute Percentage Error (SMAPE). Armstrong's original definition is without absolute values in the denominator; the variant used here is

$$\mathrm{SMAPE} = \frac{100\%}{n}\sum_{i=1}^{n} \frac{|y_i - \hat{y}_i|}{(|y_i| + |\hat{y}_i|)/2}.$$

The difference to the MAE is the division by $(|y_i| + |\hat{y}_i|)/2$ and the multiplication by 100%. Forecasts higher than the actual time series value are penalized less than forecasts that lie below the actual time series value. SMAPE scales the forecast errors so that the error value is relative to the actual data value.
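A minimal sketch of this SMAPE variant with absolute values in the denominator; the function name is our own:

```python
import numpy as np

def smape(y_true, y_pred):
    """SMAPE in percent, with absolute values in the denominator."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)
```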
MD Plot
A large sample of forecasts is obtained with the rolling forecasting origin. The forecasts are analyzed with an appropriate quality measure, which yields a new view of the forecast performance. In particular, the resulting distribution is of interest. The distribution defined by the empirical probability density function (pdf) can be visualized with a special density estimation tool called the mirrored density plot (MD plot), which was proposed by [44]. This empirical distribution provides important information about the underlying process. Conventional methods, such as classic histograms, violin or bean plots with their default parameter settings, were shown to have difficulties visualizing distributions correctly [44]. In the same work, it is shown that MD plots can outperform them without parameter adjustment. For the MD plot, the probability density function is estimated parameter-free with the Pareto density estimation [68]. With some simplification, the visualization of the MD plot is obtained by mirroring the pdf.
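The sketch below illustrates only the mirroring idea; it substitutes a Gaussian KDE from scipy for the Pareto density estimation actually used by the MD plot, so the estimated shapes will differ from the published method.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def mirrored_density_plot(samples_by_method):
    """samples_by_method: dict mapping a method name to a 1D error sample."""
    fig, ax = plt.subplots()
    for pos, (name, x) in enumerate(samples_by_method.items()):
        x = np.asarray(x, dtype=float)
        grid = np.linspace(x.min(), x.max(), 200)
        dens = gaussian_kde(x)(grid)
        half = 0.4 * dens / dens.max()          # scale the half-width
        ax.fill_betweenx(grid, pos - half, pos + half, alpha=0.6)  # mirror pdf
    ax.set_xticks(range(len(samples_by_method)))
    ax.set_xticklabels(list(samples_by_method))
    ax.set_ylabel("quality measure")
    plt.show()
```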
Multiresolution Method
The multiresolution method for a time series $y_t$ with $N$ observations is realized with wavelet theory, following the work of [11–16]. Wavelets are a standard tool for multiresolution analysis and widely used in data mining [69]. The wavelet decomposition yields smooth approximations, which are different resolution levels of the time series, and wavelet scales, which capture the frequency bands of the time series. Usually, a non-redundant discrete wavelet transform is used for computation, but some points need to be considered when adapting wavelets to the task of time series forecasting. First, a redundant scheme is necessary, since at each time point the information of every wavelet scale must be available [14]. Second, the wavelet decomposition should be shift-invariant, since otherwise a shift in the time series would yield different time series forecasting models [14]. Third, asymmetric wavelets need to be used in order to allow only information of the past or present to be processed for creating estimations of the future [14]. Haar wavelets are used for edge detection in data mining [69]. Furthermore, Haar wavelets yield an orthogonal wavelet decomposition [14]. This implies a reconstruction formula (4), which uses only the wavelet coefficients from scale $j = 1$ to $J$ and the coefficient of the last smooth scale $J$ at time point $t$ in order to recover the value of the original time series at time $t$ [69]:

$$ c_{0,t} = c_{J,t} + \sum_{j=1}^{J} w_{j,t}. \qquad (4) $$
$w_{j,t}$ denotes the wavelet coefficient at scale $j$, $c_{0,t}$ is the value of the original time series, and $c_{J,t}$ is the last smooth approximation coefficient, each at time translation $t$. The resolution levels, or smooth approximations, of the redundant Haar wavelet decomposition are computed with a filter $h = (0.5, 0.5)$; see (5), [14]. The first approximation level at $j = 1$ with coefficients $c_{1,t}$ is obtained by filtering the original time series $c_{0,t}$. In general, the following formula is applicable:

$$ c_{j+1,t} = \frac{1}{2}\left(c_{j,t-2^{j}} + c_{j,t}\right). \qquad (5) $$

The computation of the wavelet coefficients follows from (4) and (5); they are the differences between successive smooth approximations, starting with the original time series $c_{0,t}$ and continuing through the approximation levels from $j = 1$ to $J$:

$$ w_{j,t} = c_{j-1,t} - c_{j,t}. \qquad (6) $$

The maximum attainable number of levels for the redundant Haar wavelet decomposition is constrained by two mechanisms. First, an offset is created at the start of the decomposition, which grows with the power of two as $2^j$ in dependence of the scale $j$. In other words, the length of the required support for constructing the wavelet and resolution levels can, at maximum, adopt the largest power of two smaller than the length of the time series: $2^{\max} < N$, with $\max$ being the maximum attainable level. Second, the last smooth approximation is a filter whose support grows with increasing scale. The filter with growing support creates a more and more constant resolution level. Eventually, it must become constant, at the latest when the maximum possible carrier is chosen. This effect can occur before the maximum level is attained. Then, the last smooth approximation level, which can be regarded as a trend estimation, does not carry any information any more.
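A compact sketch of Equations (4)-(6) in Python is shown below; the boundary handling at the start of the series (repeating the first coefficient) is our simplification of the offset discussed above.

```python
import numpy as np

def redundant_haar(y, levels):
    """Causal redundant Haar decomposition (Equations (5) and (6)).

    Returns the wavelet scales w[0..J-1] and the last smooth
    approximation c_J. The growing offset of 2**j at the start of
    the series is handled here by repeating the first coefficient.
    """
    c = np.asarray(y, dtype=float)   # c_{0,t}: the original series
    w = []
    for j in range(1, levels + 1):
        shift = 2 ** (j - 1)         # lag of the filter h = (0.5, 0.5)
        lagged = np.concatenate([np.full(shift, c[0]), c[:-shift]])
        c_next = 0.5 * (lagged + c)  # Equation (5): causal smoothing
        w.append(c - c_next)         # Equation (6): wavelet coefficients
        c = c_next
    return w, c

y = np.sin(np.arange(256) / 5.0)
w, c_last = redundant_haar(y, levels=4)
# Equation (4): the original series is recovered exactly from all
# wavelet scales plus the last smooth approximation
assert np.allclose(y, c_last + sum(w))
```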
The coefficients from all obtained wavelet scales and from the last smooth approximation need to be chosen in order to compute a one-step forecast. The coefficients can be processed with linear and nonlinear methods. Here, a regression optimized with least squares [59] and a multilayer perceptron with one hidden layer (neural network) [70] are chosen, denoted as MRR and MRNN, respectively. The challenge of model selection can be decided with a criterion such as AIC [71], but this is computationally complex; the work of [14] proposed a more straightforward solution. Since the redundant Haar wavelet decomposition is an orthogonal projection, the coefficients can be chosen in a way that forms an orthogonal basis [14]. Selecting an orthogonal basis follows a lagged scheme, where the step size for selecting coefficients is $2^j$ for scale $j$ (lagged coefficient selection) [14]. The selection of coefficients can be reduced to subsets of the basis with a total of $A_j$ coefficients per scale $j$. Thus, the wavelet coefficients for forecasting time point $N + 1$ are $w_{j,\,N-2^{j}(k-1)}$ for $k = 1, \dots, A_j$, and the smooth coefficients are $c_{J,\,N-2^{J}(k-1)}$ for $k = 1, \dots, A_{J+1}$. The choice of each number $A_j$, $j \in \{1, \dots, J + 1\}$, is part of the model selection. From this scheme, a constrained selection of coefficients can be made, which can again be decided with a criterion such as AIC. The Markov property states that the optimal prediction can be obtained from finitely many past values alone [72]. If the Markov property applies, then the lagged coefficient selection is able to return the optimal prediction [14].
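Continuing the sketch above, the lagged coefficient selection for forecasting time point N + 1 can be written as follows; the function and the choice of A are illustrative.

```python
def lagged_features(w, c_last, A):
    """Lagged selection of coefficients for forecasting time point N + 1.

    A holds A_j per wavelet scale plus one final entry A_{J+1} for the
    last smooth approximation; the step size at scale j is 2**j.
    """
    N = len(c_last) - 1              # index of the latest observation
    features = []
    for j, w_j in enumerate(w, start=1):
        features += [w_j[N - 2 ** j * (k - 1)] for k in range(1, A[j - 1] + 1)]
    J = len(w)                       # the smooth level uses step 2**J
    features += [c_last[N - 2 ** J * (k - 1)] for k in range(1, A[-1] + 1)]
    return features

# Two coefficients per scale and two smooth coefficients; these features
# feed the regression (MRR) or the one-hidden-layer perceptron (MRNN)
x = lagged_features(w, c_last, A=[2, 2, 2, 2, 2])
```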
The lagged coefficient selection described above has a complexity of $O\!\left(\prod_{j=1}^{J+1} A_j^{\max}\right)$, where $A_j^{\max}$ denotes the maximum possible number of coefficients at scale $j$. Here, finding the best wavelet model means finding the best combination of the number of decomposition levels and, at the same time, the best number of coefficients per level, which fits the historical data with the goal of forecasting future time points [73]. There is a potentially large set of possible input parameters that define the model of our framework, and the output for each input is obtained by a potentially complex computation (e.g., rolling forecasting origin [73]). This can be viewed as a search problem. A simple but costly solution would be the search through all possible inputs, which we found to be impractical at this complexity. Therefore, a more sophisticated approach is required.
The approach used in this work is a differential evolutionary optimization, in which multiple coefficients of a prediction scheme with varying decomposition depth are optimized using an evolutionary algorithm (EA). In the following, a rough outline of the evolutionary optimization is given. A starting population of candidates is randomly initialized [73]. The candidates stand in competition, forcing a selective reproduction based on a fitness function (survival of the fittest) [73]. The fitness is based on a quality measure (e.g., for measuring the forecast performance). The best candidates are chosen as parents and used to generate the next generation [73]. The two operations for building the next generation are recombination and mutation [73]. The new set of candidates is called children [73]. The next selection is based on a fitness function combining the quality measure and the age of the candidates. This procedure is iterated until a stopping condition is reached, for example, a sufficient quality level or a maximum number of steps.
In our framework, each possible decomposition with $J + 1$ levels is evaluated separately. The vector $x = (A_1, \dots, A_J, A_{J+1})$ carries the number of coefficients associated with the respective wavelet level or the last smooth approximation level. The difference between classical evolutionary optimization and differential optimization is that the candidate solutions are vectors $x \in \mathbb{N}^{J+1}$, and a new mutant $\tilde{x}$ is produced by adding a perturbation vector $p \in \mathbb{N}^{J+1}$ to an existing one:

$$ \tilde{x} = x + p, \qquad (7) $$

where $p$ is a scaled vector difference of two already existing, randomly chosen candidates $x_a$ and $x_b$, rounded to yield an integer vector,

$$ p = \operatorname{round}\left(F \cdot (x_a - x_b)\right), \qquad (8) $$

and $F > 0$ is a real number, which controls the evolution rate.
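A minimal sketch of the integer-valued mutation step, corresponding to Equations (7) and (8) as reconstructed above, might look as follows; the population size, F, and the clipping bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutate(population, i, F=0.8):
    """Differential mutation of candidate i (Equations (7) and (8)).

    A scaled, rounded difference of two randomly chosen candidates is
    added to candidate i; the counts are clipped to a valid range
    (1..15 here, as for the regression variant).
    """
    others = [k for k in range(len(population)) if k != i]
    a, b = rng.choice(others, size=2, replace=False)
    p = np.rint(F * (population[a] - population[b])).astype(int)  # Eq. (8)
    return np.clip(population[i] + p, 1, 15)                      # Eq. (7)

# Candidate vectors x = (A_1, ..., A_J, A_{J+1}) for J = 3 levels
population = rng.integers(1, 16, size=(10, 4))
child = mutate(population, i=0)
```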
Here, two to five decomposition levels are used in the model selection procedure for the framework, allowing one to fifteen coefficients per level for the regression method and one to eight coefficients per level for the neural network. The multiresolution forecasting framework with a neural network is denoted as "MRNN" and with a regression as "MRR". The difference to the multiresolution methods in [28,30] lies in the lagged coefficient selection and in the computation of the forecast based on a prediction scheme that uses the wavelet decomposition as one unit, without a reconstruction scheme. Furthermore, the proposed multiresolution framework does not require the user to set any parameters, since this is completely undertaken by the model selection based on the differential evolutionary optimization.
Typically, variations of evolutionary algorithms (EA) are used in time series forecasting in order to adapt complex models to the training data. For example, [74] employ EA to optimize the time delay and architectural factors of an (adaptive) time-delayed neural network (GA-ATNN and GA-TDNN). EA are also used to optimize the architectural factors of artificial neural networks for time series forecasting [75]. Ref. [76] utilize EA to optimize the parameters of support vector regressions for time series forecasting. Alternatively, a prediction scheme with a matrix system of equations can be constructed that incorporates the time series sequence piecewise by exploiting algebraic techniques, using various control and penalty coefficients that are optimized using EA [77]. An improvement of this approach is proposed in [78].
Results
The forecasting performance of 12 different forecasting methods is compared across four different datasets. Different forecast horizons require different methods [41]. Nevertheless, the performance of all methods over all horizons is computed; the selected horizons and two summarized periods are presented in Tables 1 and 2.

The MD plots show various properties of the quality measures at once. The distribution of the quality measure is visualized, which in most cases is a unimodal distribution, except for AA, MLP, and LSTM in Figure 2, AA and SA in Figure 5, Prophet in Figure 6, LSTM in Figure A2, AA and LSTM in Figure A3, MRR in Figure A4, and Prophet in Figure A7. Fat or thin long tails give a measure of uncertainty. The outliers show the worst observed performance. A high variance of the distribution indicates underfitting.

The Kolmogorov-Smirnov tests indicate an F distribution of the MASE for almost all cases, shown in Tables A1 and A2, with the exception of LSTM, MRA, NNetar and XGBoost on dataset Electricity for horizon 1, LSTM and NNetar on dataset Electricity for overall horizons, XGBoost on dataset Callcenter for horizon 1, and SARIMA and AARIMA on dataset Stocks for horizon 1. All other 87 cases were consistent with an F distribution according to the Kolmogorov-Smirnov test. Assuming an F distribution, the central tendency can be computed and is used to compare the overall performance of the methods. Tables 1 and 2 show the MASE for all methods on all four datasets for various horizons and for the mean over horizons from 1 to 7 and 1 to 14. This could not be done for SMAPE, since no distribution fit most of the samples significantly. In the case of SMAPE and unimodal distributions, the median was used instead of the mean. Tables 1 and 2 provide insight into the forecast performance of each method across various horizons. The progress of the performance can be tracked along the horizon. The overall measurement summarizes the quality measure over all horizons from 1 to 7 (one week) and from 1 to 14 (two weeks) as the mean over all samples. Tables A3 and A4 show the same computation for SMAPE, using the median.
The MASE values of Prophet, MAPA, MRANN, MRNN and MRR on dataset Electricity are the only ones below or equal to 1 and are therefore better than the seasonal naïve method; see Table 1. Hence, in the case of dataset Electricity, the four multiresolution methods outperform the seasonal naïve method.
On dataset Callcenter, the two multiresolution methods using neural networks, MRNN and MRANN, outperform every other method. The only other method performing better than the seasonal naïve method for horizons larger than 1 is Prophet, as can be seen from the explicit value of the central tendency of the MASE distribution that serves as a comparison (see the overview for MASE in Table 1).
However, the QQ plot in Figure A11 shows that the full MASE distributions of the errors of Prophet and MRNN are equal, which indicates similar performance. For the first horizon on datasets Prices and Stocks, no method outperforms the naïve method. Only the LSTM and the MLP perform better than the naïve method with multi-step forecasts on datasets Prices and Stocks.

Table 1. Average MASE of multi-step forecasts from Prophet, MAPA, the two multiresolution forecasting methods MRR using regression and MRNN using a neural network, seasonal adapted ARIMA (SA), automatized ARIMA (AA), multilayer perceptron (MLP), long short-term memory (LSTM) and gradient boosting machine (XGB) for different horizons and for the mean over horizons from 1 to 7 and 1 to 14.
Discussion
This work follows the argumentation in [33,79]: "There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown" [79]. "Forecasters generally agree that forecasting methods should be assessed for accuracy using out-of-sample tests rather than goodness of fit to past data (in-sample tests)" [33]. Here, the quality of the predictive models for practical purposes is evaluated over 365 time steps, under the assumption that the largest seasonality lies within a year. With such a sample size, the MD plot is able to discover fine details of the underlying distribution [44]. It should be noted that restricting the evaluation to specific datasets makes it challenging to provide evidential results that can be generalized, which, in our opinion, remains a great challenge in forecasting. In general, the learnability of machine learning methods for data cannot be proven [80]. Hence, we follow the typical approach of supervised methods by dividing the data and estimating the learnability on test data, as described in Section 2.1. The quality indicators are evaluated on a sufficiently large sample (here, 365 steps) for the distribution analysis [44].
The visualization of the forecasting error distribution with the MD plot gives insights into the statistics of the errors. Long and fat tails of the distributions indicate high uncertainty when estimating forecasts. Thin long tails, on the other hand, indicate isolated outliers. Outliers may occur if external variables influence the time series values at specific time points. Specific errors can be related to time, potential causes, and context. For example, electricity prices may be influenced by extreme weather situations that require higher usage of cooling or heating devices. In practice, outliers can be removed from the overall measurement when judgmental reasoning is applicable [34].
In our evaluation of the test data, a MASE lower than one implies a better mean absolute error than the naïve method. Thus, our results indicate that the multiresolution methods and Prophet are able to forecast the seasonal datasets more successfully than the naïve forecast and the other methods. For recursive multi-step forecasting methods, the performance of one-step forecasts differs from that of multi-step forecasts, because the error of preceding computations propagates with increasing horizon. This is not the case for Prophet, which uses curve fitting. The multi-step forecasts of Prophet increase in performance in comparison to the one-step forecasts on datasets Electricity and Callcenter.
SMAPE yields almost only multimodal distributions according to Hartigan's dip test, for which the Bonferroni correction for multiple testing was applied. Therefore, it is critical to investigate the distribution of SMAPE errors instead of providing only a table of averages. However, in many time series forecasting evaluations, SMAPE is applied without an evaluation of the SMAPE distribution, and therefore the averages are also discussed here [37,38]. The median is statistically more robust than the mean and is therefore applied here.
For datasets Electricity and Callcenter, the multiresolution methods and Prophet outperform the other methods with regard to MASE and SMAPE; see Tables 1, 2, A3 and A4. On dataset Callcenter, Prophet has equal performance to the seasonal naïve method. The multiresolution method has the best one-step forecasts. Prophet performs slightly worse than the multiresolution method for multi-step forecasts on Callcenter and Electricity. However, the performance of the multiresolution method and Prophet is equally good for multi-step forecasts, which is shown by the approximately straight lines in the QQ plots of the MASE distributions of Prophet versus the multiresolution method in Figures A10-A13. Straight lines mean that the error distributions are approximately equal. A MASE lower than one implies a better mean absolute error than the naïve method and thus infers that only the multiresolution method and Prophet are able to forecast the seasonal datasets more successfully than the naïve forecast. The multiresolution method outperforms every other method on the given datasets for one-step forecasts, with the exception of the Stocks dataset. This exception is not surprising, since stock values follow the Random Walk Theory and are stochastic in nature; their progression should not be predictable. The best performance of MLP and LSTM on Stocks is doubtful. Neural network forecasting methods are prone to overfitting, and at least on Stocks, it is questionable whether the performance would be stable over a longer evaluation period. The small SMAPE values do not indicate good performance on the Stocks dataset, since the changes within the Stocks time series are marginal in nature. MASE values lower than one do not necessarily indicate good performance on dataset Stocks either, since MASE relies on the benchmark method, which is the naïve method for the Stocks time series. The naïve method uses the last value, which is neither a good indicator of changes in the Stocks dataset nor a stable prediction for multi-step forecasts in that case. Hence, seemingly good performance can be obtained in this case, which does not hold in the real world.
The MRR and MRNN (the proposed multiresolution framework) as well as the MRANN [30] method perform similarly across horizons and for the overall measurements (see Tables 1, 2, A3 and A4). For the datasets investigated, the multiresolution methods with neural networks (MRNN and MRANN) tend to have larger outliers than the MRR. The MRNN has better one-step forecast performance than the MRR, whereas it is the opposite for multi-step forecasts. Computing the mean average of the MASE, it could be deduced that MRANN performs better than MRR or MRNN, although the differences are small (see Tables 1 and 2).
The forecasting method combining the concepts of multiresolution and ARIMA [28] does not perform comparably well to the other methods investigated here, although [28] based their work on [13], similar to our proposed work. However, they used ARIMA as the underlying forecasting method and achieved worse results (for both seasonal datasets, the MASE of MRA is above 2). In contrast, MRANN [30] forecasts each level (the wavelet levels and the last smooth approximation level) separately with an ANN and then reconstructs the forecast by applying the reconstruction formula to the forecasted wavelet decomposition. Note that their proposition overlaps with [28] in using the Haar wavelet decomposition [30], which is also used here. Yet, in their work, no comparison with a method comparable to the approach proposed by [13,14] is made.
The overall measurements are always quite close on the seasonal datasets Electricity and Callcenter (see Tables 1 and 2). The forecasting method combining the concepts of multiresolution and ARIMA [28] does not yield any positive results and thus does not require any further discussion.
XGBoost was recommended as a robust method for general machine learning tasks [58], and neural networks with at least one hidden layer were recommended for time series forecasting [54]. The computed results were unable to verify these claims: the performance on the seasonal time series Electricity and Callcenter was worse than that of the seasonal naïve method. However, the results of the neural network using wavelet coefficients showed relatively good performance on the seasonal time series, indicating the potential use of neural networks in time series forecasting. It could be argued that preprocessing is necessary prior to the use of neural networks, which is supported by the results of [81]. However, there was no elaborate testing of different parameter settings for the multilayer perceptron and the long short-term memory, such as the input size. The same could apply to XGBoost, but this was not investigated in this study.
The benchmarking performed in this work indicates that statistical approaches, such as seasonal adapted ARIMA, performed quite poorly in comparison to the Fourier- and multiresolution-based methods. The machine learning algorithms also performed better than the ARIMA frameworks used here in at least two cases (comparing the results of the best-performing method from each field, e.g., the MLP on dataset Callcenter; see Tables 1 and 2). In sum, the results showed that reportedly suitable methods, such as the seasonal adapted ARIMA methods as well as machine learning methods, performed insufficiently on the investigated seasonal time series. This serves as an indication for a more careful selection in practice and an adaptation of the algorithm to the task.

The performance evaluation uses a test set covering a whole year. Therefore, models are selected which performed best over the whole year, under the assumption that the largest seasonality is not longer than a year. This forces the model to have an overall best performance without considering temporal differences in performance within the year. Allowing repeated model fitting throughout the evaluation would enable a dynamic approach to incorporate changes in the data and could increase performance. Hence, better models could be obtained by allowing regular updates of the model itself and its parameter settings. The multiresolution method could benefit from this effect, especially regarding the dataset Prices, which has varying seasonal components. Furthermore, the automated forecast methods used here, such as ARIMA or SARIMA, perform model fitting at each forecasting origin, meaning a continuous adaptation to the data and disabling the possibility of a nested cross-validation (only simple cross-validation is possible). Prophet does partially adapt to the time series by updating some parameters automatically, such as the seasonal component, due to potential break point changes. This may be the reason that Prophet performs similarly to the multiresolution methods MRR and MRNN on the datasets Electricity and Callcenter, because for these methods the adaptation was only performed once. Further work is required to remove this disadvantage. Since time series can potentially change their behavior at any time, a dynamic approach to forecasting time series is recommended, despite the possibility of overfitting. Adaptive model fitting could be used to allow a dynamic adaptation to high-frequency temporal changes. This could especially improve the performance of the wavelet method.
Further improvements could be made in the future by integrating multivariate data into the wavelet framework, namely by including the multivariate series in the linear equation system.
As an alternative, coarse-grained time series analysis techniques could be used to model time series [82] and create short-term predictions [83]. Here, the evaluation is restricted to one linear and one nonlinear strategy in order to investigate the potential of wavelets for time series forecasting, although the wavelet coefficients could be processed by many other methods as well [14].
Conclusions
This work brings a wavelet method to the point of automatized application for industrial tasks without the need to set any parameters and provides a comparison of its performance with state-of-the-art methods. The presented multiresolution method is an appropriate method for seasonal forecasting and performs equally well or better in comparison to state-of-the-art methods, such as Prophet or MAPA, for forecasting horizons greater than one. For one-step forecasts, the multiresolution methods MRR, MRNN and MRANN outperform almost every other method on the three seasonal time series datasets and perform as expected from the Random Walk Theory on the Stocks dataset. On the seasonal datasets, the multiresolution methods perform even better than Prophet. Surprisingly, the automatized seasonal adjusted ARIMA (RJDemetra+) and the automatized ARIMA did not perform well on the datasets used in this work. Additionally, our benchmarking could not verify that XGBoost and neural networks are robust methods for time series forecasting. However, combining wavelets with ANN (the MRANN and MRNN methods) improves the forecasting quality considerably. In sum, based on our benchmarking, we conclude to use MRNN for short-term forecasting and MRR for long-term forecasting. In future work, further wavelets (orthogonal and bi-orthogonal) should be evaluated for seasonal time series forecasting.
Data Availability Statement:
Restrictions apply to the availability of the datasets Electricity, Stocks, and Prices. Electricity was obtained from [64]. Stocks (S&P 500, AAL) was obtained from [65]. Prices (Scandinavian Electricity Prices) was obtained from [66]. Callcenter was provided by Tino Gehlert and is not publicly available due to privacy concerns.
Conflicts of Interest:
The authors declare no conflict of interest.

Table A3. Median SMAPE of multi-step forecasts from Prophet, MAPA, the two multiresolution forecasting methods MRR using regression and MRNN using a neural network, seasonal adapted ARIMA (SA), automatized ARIMA (AA), multilayer perceptron (MLP), long short-term memory (LSTM) and gradient boosting machine (XGB) for different horizons and for the median over horizons from 1 to 7 and 1 to 14. The forecasting techniques denoted with a star (*) yielded a non-significant result for Hartigan's dip test for unimodality with Bonferroni correction; for the remaining techniques, the distribution is assumed to be non-unimodal.

Table A4. Median SMAPE of multi-step forecasts from Prophet, MAPA, the two multiresolution forecasting methods MRR using regression and MRNN using a neural network, seasonal adapted ARIMA (SA), automatized ARIMA (AA), multilayer perceptron (MLP), long short-term memory (LSTM) and gradient boosting machine (XGB) for different horizons and for the median over horizons from 1 to 7 and 1 to 14. All Hartigan's dip tests in this table yielded non-significant results.
Predicting Football Team Performance with Explainable AI: Leveraging SHAP to Identify Key Team-Level Performance Metrics
Abstract: Understanding the performance indicators that contribute to the final score of a football match is crucial for directing the training process towards specific goals. This paper presents a pipeline for identifying key team-level performance variables in football using explainable ML techniques. The input data includes various team-specific features such as ball possession and pass behaviors, with the target output being the average scoring performance of each team over a season. The pipeline includes data preprocessing, sequential forward feature selection, model training, prediction, and explainability using SHapley Additive exPlanations (SHAP). Results show that 14 variables have the greatest contribution to the outcome of a match, with 12 having a positive effect and 2 having a negative effect. The study also identified the importance of certain performance indicators, such as shots, chances, passing, and ball possession, to the final score. This pipeline provides valuable insights for coaches and sports analysts to understand which aspects of a team's performance need improvement and enable targeted interventions to improve performance. The use of explainable ML techniques allows for a deeper understanding of the factors contributing to the predicted average team score performance.
Introduction
Artificial intelligence (AI) and specifically machine learning (ML) are quickly becoming popular methods for predicting the average scoring performance of European football teams [1]. This is because the technical data collected during football matches can provide valuable insights into a team's playing style and tactics [2]. By analyzing this data, coaches and analysts can gain a deeper understanding of a team's strengths and weaknesses, and use this information to make more informed decisions about player recruitment and opposition analysis [3].
One of the key challenges in analyzing this data is that it comes in a variety of forms, including match sheets, ball events, and tracking data [4,5]. These data types differ in their granularity and availability, but data collection companies are increasingly annotating more types of events and providing information about each event [6]. To effectively analyze team behavior, it is necessary to summarize its playing style in a way that is both humanly interpretable and suitable for data analysis [7]. This typically involves constructing a "fingerprint" of the team's behavior, capturing characteristics such as the types of actions they tend to perform and the types of gameplay patterns the team's players participate in.

Moreover, one of the major challenges with using AI in sports performance analysis is the lack of transparency and interpretability of the results. Traditional AI models, such as neural networks, can be difficult to understand and interpret, making it hard to explain how and why a particular decision or prediction was made. This is where explainable AI (XAI) comes in. Explainable AI is a subfield of AI that focuses on creating models that can provide clear and interpretable explanations for their predictions and decisions [27]. One of the most popular methods for explainable AI is SHapley Additive exPlanations (SHAP), which is a unified method for interpreting the predictions of any machine learning model [28]. It is based on the concept of Shapley values from cooperative game theory, which provides a way to fairly distribute a value among a group of individuals, such as the players on a football team. Recently, two studies [29,30] have employed SHAP as a post-hoc explainability tool to evaluate the impact of each feature on the final outcome, specifically in the context of match-specific score prediction. Both studies primarily concentrated on individual match predictions and were limited to data from a single league, utilizing a moderately sized feature set.
The objective of this paper is to use XAI to identify the key team-level performance metrics that are most important in predicting a football team's performance. By concentrating on overall performance and identifying critical parameters influencing scoring performance (average goal difference over the season), this paper seeks to offer a broader understanding of the factors that determine success in football. The use of an expansive dataset from multiple European leagues, combined with an extensive set of features, sets this study apart from previous research and enhances its potential to uncover novel insights and patterns in football performance. This approach allows for a more transparent and interpretable understanding of how a machine learning model is making its predictions, which can be particularly useful in high-stakes decision-making scenarios such as predicting a team's performance in a football match. By leveraging explainable AI, coaches and analysts can gain a deeper understanding of team performance and make more informed decisions about player recruitment and opposition analysis.
The structure of the paper is as follows: In Section 2, the proposed methodology and characteristics of the dataset are presented. The results of the study, including the performance of the proposed ML model and the explanations generated at both the global and local levels, are discussed in Section 3. The implications and limitations of the study are discussed in Section 4, and conclusions are presented in Section 5.
Materials and Methods
In this article, we aim to predict the average goal difference of football teams over an entire season using machine learning and subsequently explain the predictions made by the model. Our input data consist of various team-specific features such as ball possession and pass behaviors. The target output is the average scoring performance (goal difference) of each football team over the season. The proposed pipeline (Figure 1) includes three main steps: (1) data preprocessing, (2) model training and prediction, and (3) explainability. We first preprocess and clean the data to ensure that they are suitable for training and testing our model. Next, we use XGBoost, a powerful and widely used machine learning algorithm, to train the model and make predictions. Finally, to ensure the explainability of the predictions, we use SHAP to provide an explanation for each prediction at a team-specific or overall level. This allows us to understand the factors that drive the prediction and the contribution of each feature to the predicted average team score performance.
Dataset
Our dataset includes all matches played during the regular season of the top division in 11 European countries for the 2021-2022 season (Table 1). For each match, data were recorded for both teams, resulting in a total of 5992 observations. However, data were unavailable or incomplete for eight matches, as recorded by InstatScout (https://football.instatscout.com/ (accessed on 20 June 2022)).
Specifically, this study involved collecting 160 variables, either directly through InstatScout or indirectly calculated by the authors using data from this platform. These variables were recorded in a Microsoft Excel spreadsheet (a full description of the variables is given in Appendix A). Prior research has shown that the indicators obtained through InstatScout have high reliability, with K values ranging from 0.90 to 0.98, as per studies by Casal et al. (2019), Castellano and Echeazarra (2019) and Gómez et al. (2018) [31–33].
Data Pre-Processing
To ensure the quality and consistency of our dataset, we performed several preprocessing steps before conducting the analysis: (1) Data cleaning: we first cleaned the raw data by removing any missing values or erroneous records to ensure the accuracy and reliability of the dataset. (2) Averaging variables: all variables, including the output variable, were averaged over the course of the season to provide a general representation of the team's performance throughout the year. For instance, an average team score performance of +2.1 indicates that, on average, the team scored 2.1 more goals than it conceded. (3) Feature scaling: to ensure that all features are on the same scale, feature scaling was implemented using the StandardScaler library [34]. This is important for many ML algorithms, as it can help prevent one feature from dominating the others during the training process. A minimal sketch of these steps is given below.
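The sketch below illustrates the three preprocessing steps; the file and column names are hypothetical placeholders, not the names used in our spreadsheet.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical per-match records: one row per team and match
df = pd.read_csv("matches.csv")

# (1) Data cleaning: drop missing or erroneous records
df = df.dropna()

# (2) Average all variables over the season for each team
season_avg = df.groupby("team").mean(numeric_only=True)
y = season_avg.pop("goal_difference")   # target: average goal difference

# (3) Feature scaling: bring all features to the same scale
X = StandardScaler().fit_transform(season_avg)
```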
Machine Learning
Just prior to the model training process, we applied a feature selection technique, sequential forward selection, to identify the most important features for the task at hand [35]. This is a wrapper-based feature selection method, where we used XGBoost as the model and R-squared as the selection criterion. The algorithm iteratively adds features to the model one by one and evaluates their impact on its performance. This helps us identify the most relevant features that contribute the most to the model's accuracy and avoid overfitting.
Once the relevant features had been selected, we trained and tested an XGBoost regressor on the data using a ten-fold cross-validation strategy with internal hyperparameter tuning in the training phase [36]. XGBoost is a powerful gradient-boosting algorithm that has been shown to perform well on a wide range of tasks. We also used the SHAP method to understand the contribution of each feature to the global and local predictions. The performance of the proposed model was compared to that of three other well-known regression algorithms: Support Vector Regression (SVR) [37], Random Forest (RF) [38], and the k-Nearest Neighbor Regressor (kNN) [39]. A sketch of the selection and training step is given below.
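The following sketch illustrates this step with scikit-learn's forward selector and XGBoost; the hyperparameters shown are placeholders, and our actual pipeline additionally tuned hyperparameters inside the training folds.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

model = XGBRegressor(n_estimators=300, random_state=0)

# Wrapper-based sequential forward selection with R-squared as criterion
sfs = SequentialFeatureSelector(model, n_features_to_select=141,
                                direction="forward", scoring="r2", cv=10)
X_sel = sfs.fit_transform(X, y)

# Ten-fold cross-validated performance on the selected feature subset
scores = cross_val_score(model, X_sel, y, scoring="r2", cv=10)
print(scores.mean())
```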
Explainability
In order to understand and explain the predictions made by our machine learning model, we used the SHAP library [28,40]. SHAP values provide a unified measure of feature importance that can be used for both linear and non-linear models. SHAP is a powerful and unified measure for interpreting the output of machine learning models, offering a consistent approach to understanding the impact of features on model predictions. SHAP values are derived from cooperative game theory and provide an interpretable allocation of each feature's contribution to a prediction, while ensuring that the sum of all feature attributions equals the difference between the predicted outcome and the average baseline prediction. This approach allows for a fair distribution of each feature's influence on the prediction, accounting for potential interactions and dependencies among features. In our study, we employ SHAP as a post-hoc explainability tool to quantify the effects of each feature on the final outcome, helping us identify the key parameters that contribute to a team's overall scoring performance. For a given prediction, SHAP values attribute a contribution value to each feature, with positive values indicating that the feature pushed the prediction higher and negative values indicating that the feature pushed the prediction lower. This allows us to understand how each feature contributed to the final prediction and how they compare to one another. Overall, the use of SHAP values provides a detailed, accurate, and easily interpretable explanation of the inner workings of our regression model.
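In code, the global and local explanations can be obtained roughly as follows, continuing the sketches above; the specific plot calls are one possible choice, not a prescription of our exact scripts.

```python
import shap

model.fit(X_sel, y)

# TreeExplainer computes exact SHAP values for tree ensembles like XGBoost
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_sel)

# Global explanation: feature impact across all teams
shap.summary_plot(shap_values, X_sel)

# Local explanation: force plot for a single team (row 0)
shap.force_plot(explainer.expected_value, shap_values[0], X_sel[0],
                matplotlib=True)
```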
In addition to the SHAP values employed in our study, there are other explainable AI methods, such as Local Interpretable Model-agnostic Explanations (LIME) [41], that can be used to provide insights into the importance of features in complex models. LIME is a popular technique that explains individual predictions by fitting a locally interpretable model around the specific data point. While both SHAP and LIME aim to increase the interpretability of machine learning models, they differ in their approach. SHAP values are grounded in cooperative game theory and provide a unified measure of feature importance that is both locally and globally accurate. In comparison, LIME focuses on local interpretability and may not provide the same level of global accuracy. Additionally, SHAP values maintain consistency, which means that the order of feature importance will remain the same across different models, while LIME does not guarantee this property. For our study, we chose to use SHAP values because they provide a more consistent and accurate measure of feature importance. However, future work could explore the use of LIME or other XAI methods to analyze football team performance and compare the resulting insights with our findings.
Results
This section presents the results of the proposed explainable machine learning pipeline, including the explanations generated by the SHAP algorithm, which provides insight into the factors that influence the model's predictions.
Based on the sequential forward selection method, we identified 141 out of the 159 initial variables as the most relevant features for predicting football team performance. The selected variables are listed in Appendix A, and the importance of the top 15 variables is visualized in Figure 2. Using these 141 features, our model was able to achieve a satisfactory performance in terms of both accuracy and interpretability. Table 2 and Figure 3 present the results of the model's performance in predicting the average team score over a year. The scatter plot in Figure 3 compares the actual values (x-axis) with the predicted values (y-axis). Each point in the scatter plot represents a team, with the x-coordinate denoting the actual averaged team score performance and the y-coordinate denoting the predicted averaged team score performance. The line of best fit is a visual representation of how closely the predictions align with the actual results, with a slope of 1 indicating a perfect fit. The distribution of the points around the line of best fit demonstrates the accuracy and balance of the predictions; the quantitative results are reported in Table 2.

The next step in analyzing the model's performance is to examine its explainability. This analysis aims to understand the factors that influence the model's predictions and how they relate to the actual outcome (the team's average score performance). By understanding the underlying relationships and patterns, we can gain insight into the behavior of the model and identify areas for improvement (modifiable key team-level performance metrics). This can also provide valuable information for stakeholders (e.g., coaches, sports analysts) to understand the decision-making process of the model and the rationale behind its predictions.

Global explanations (all teams): as depicted, variables such as shots per possession percentage, missed chances, entries into the penalty box, conversion percentage of chances, and passes have a positive impact on the team's predicted score performance. Conversely, variables such as lost balls in the team's own half and the ratio of dribbles per minute of possession have a negative effect on the score, indicating that an increase in these variables leads to a decrease in the team's score.
Local explanations (team-specific): Figures 4-7 show SHAP force plots that allow us to see how the different variables contributed to the model's prediction f(x) for specific teams. The higher the score, the more likely the model is to predict a positive outcome (good score performance), and the lower the score, the more likely it is to predict a negative outcome (bad score performance). The variables that were important for a team's prediction are shown in red and blue, with red representing features that pushed the score higher and blue representing features that pushed the score lower. The features that had more of an impact on the score are placed higher, and the size of that impact is represented by the size of the bar.
In the case of Liverpool FC, all the variables pushed the score higher (as indicated by the red bars), indicating that they are important for the model's prediction of a positive outcome. Similar findings were obtained for Manchester City FC, where the team performed well in all key team-level performance variables. On the other hand, using SHAP force plots, it is possible to identify which variables have a negative effect on the team's performance. For example, four key variables (shots per quantity of possession percent, chances percent of conversion, accurate passes, and high pressing percent) were identified as negatively impacting the scoring performance (average goal difference) of West Ham FC. Similarly, lost balls in their own half, offsides, and corners were identified as key performance variables for Lazio FC that have a negative effect on the scoring performance and would require improvement. In summary, SHAP force plots allow stakeholders such as coaches or sports analysts to see which aspects of a specific team's game performance are satisfactory and which need improvement, enabling targeted interventions and adjustments to be made to improve the team's performance.
Discussion
Recognizing the performance indicators that contribute to the final score of a match is important in order to direct the training process toward specific goals. Consequently, the purpose of the current study was to identify and measure the contribution of each performance indicator to the final score of a match. We managed: (i) to predict the goal difference between teams in a match and (ii) to identify the contribution of each performance indicator to the match score both for the teams as a whole and for each team individually. The results showed that for the teams as a whole, fourteen variables had the greatest contribution to the outcome of the match. Of these, twelve (shots per quantity of possession percent, missed chances, entrance to the penalty box, chances percent of conversion, key passes accurate, passes, key passes, accurate passes, ratio passes per lost balls, high pressing percent, positional attacks with shots, sum duration of ball possession) had a positive effect, while two (lost balls in own half, ratio dribbles per minute of ball possession) had a negative effect. When we looked at each team separately, the variables that contribute the most to shaping the scores in their matches differ.
Shots per quantity of possession percent is the variable with the biggest contribution. In addition, among the fourteen most important performance indicators is the variable positional attacks with shots. Both of the above variables show that the ability of teams to make shots has a significant positive contribution to the final score in their favor. This finding is in agreement with other research that showed that the total number of shots made by a team is an important factor in determining the match outcome [42–44], but also with the research of Castellano et al. (2012), which showed that successful teams make more shots [45].
However, besides the shots, there are also other variables that contribute positively to the final score of the match. Firstly, our research showed that the creation of chances, even if they are lost (chances missed), but also the ability to convert the chances into goals (chances percent of conversion), had a significant positive contribution. Although chances are the factor that determines the variable xG [46], only one study, conducted on beach soccer [47], has examined their effect on the match score and found that chance creation is a factor that can distinguish winners from defeated teams. Secondly, four variables related to passing and ball possession (passes, passes accurate, ratio passes per lost balls, sum duration with ball possession) are among the fourteen most important. This finding confirms almost all previous research that has examined the contribution of ball possession to the match outcome [14,42–44,48–50]. On the contrary, the research of Harrop and Nevill (2014) showed that only successful passes help distinguish the games that a team wins [51], while total passes showed the opposite. However, it should be pointed out that this research was carried out with data that concerned only one team. Thirdly, entrance to the penalty box is another variable that we found to significantly contribute to a positive score in a match, and this is in agreement with research that had similar objectives [52]. Finally, key passes and high pressing percent (high pressing success rate) have not been examined by relevant research for their contribution to the match outcome. However, other studies showed the importance of key passes to the playing effectiveness of a team [53–55], but also the usefulness of successful high pressing, because defending near the opponent's goal seems to be associated with success in soccer [56–58].
On the other hand, among the fourteen variables that affect the outcome of the match, there are two variables (ratio dribbles per min of possession, lost balls in own half) that have a negative contribution. Liu et al. (2015) and Harrop and Nevill (2014) had already shown that dribbles had clearly negative effects on the probability of winning [43,51], which agrees with our own finding. The variable lost balls in own half has not been considered in research investigating the contribution of performance indicators to the match outcome. However, both among coaches and in the scientific literature, it is commonly accepted that the closer to the rival goal the start of the offensive action, the greater the probability of success in ball possessions [59–61].
In addition to applying our methods to all teams as a whole, we also applied them to some teams separately. In these cases, there were differences in the fourteen variables that had a greater contribution to shaping the outcome of their matches, depending on the philosophy of their coach and the tactical principles they adopted. For example, Liverpool manager Jurgen Klopp's preference for the "high press" is well known [62–64]. This is reflected in the results of our research, since three of the fourteen variables (high pressing percent, ball recoveries in opponent's half, ratio defensive challenges attacking 3rd plus defensive challenges midfield 3rd per defensive challenges) for Liverpool are related to this particular philosophy, while for the teams as a whole, only one of them appeared.
On the other hand, the style of Guardiola's teams (tiki-taka) is characterized by high percentages of possession with many short passes [65–67]. The results of our research showed that in Pep's team (Manchester City), five of the fourteen variables that have the greatest contribution to the final score of the matches are related to this style of play (passes, accurate passes, sum duration with ball possession, ratio passes per lost balls, ball possession percent). We looked at one more English Premier League team (West Ham). When they attempted to press their opponent high, they usually did so in a 4-2-4 formation, and the front line of four players was often a long distance from the remaining six players. This made them vulnerable in such situations. This particular observation was made after a qualitative analysis of West Ham's games by one of our authors, who is a certified soccer performance analyst. The results of our quantitative research confirm this observation: although for the teams as a whole the "high pressing percentage" variable contributes positively to the result, for West Ham it contributes negatively.
In addition to the three English teams, we also looked at an Italian team (Lazio) whose manager (Maurizio Sarri) has given his name to a style of play called Sarribal [68]. Sarribal is characterized by persistence in building up from the back even if the opponent presses high with many players. That is, he uses many small passes in the defensive half with the aim of drawing the opponent high. When this is done, the players are instructed to make vertical forward passes to the back of the opposing defensive line. The results of our study fully reflect this specific style of play. In particular, (a) many short passes increase the number of passes, accurate passes and ball possession percentage, (b) the persistence in the build up and the big number of passes in the team's half can increase the opponent's recoveries closer to the team's goal (lost balls in own half), (c) the vertical passes are often key passes that increase the number of final actions (shots per quantity of possession percentage), while (d) the movements attempted by players at the back of the opposing defensive line (to receive vertical passes) can also increase offsides.
In this paper, we presented a pipeline for predicting the average team score performance of football teams using machine learning, data preprocessing, and explainability. However, there are certain limitations to the study that should be acknowledged. First, the data used in this study are limited to one season, the 2021-2022 season, which may not fully capture the dynamics and complexities of team performance over time. Additionally, while our prediction is focused on the average team score performance over the year, it is not able to predict individual team score performance per match, as such a prediction would not achieve good performance. This limits the scope of the study and the potential applications of the proposed pipeline. To improve the model's performance and to provide more robust predictions, it would be beneficial to gather data from multiple seasons and also to work on predicting individual match score performance.
In addition to the proposed sequential forward selection technique, there are other robust feature selection techniques, such as BORUTA, which is a wrapper-based method built around the random forest algorithm [69]. BORUTA iteratively compares the importance of features to that of shadow features, which are shuffled copies of the original features, to determine their relevance. While BORUTA is considered more robust and can handle non-linear relationships better than sequential forward selection, it may be computationally more expensive. We acknowledge that comparing different feature selection methods could provide further insights into the best approach for our specific task. Future work could investigate the performance of BORUTA and other feature selection techniques in the context of predicting football team performance.
Finally, while our model's primary objective is to understand the importance of various team-level performance metrics within the current season, we acknowledge that the pipeline does not predict future performance. The input and output are simultaneous in time, which means that the model cannot be used as a predictor for subsequent seasons. Future work could explore the possibility of incorporating lagged variables or historical data to enable predictions for upcoming seasons. However, our current approach still provides valuable insights into the factors that contribute to a team's performance, helping stakeholders make informed decisions based on these insights.
Conclusions
This paper aimed to identify and measure the contribution of various performance indicators to the final score of a football match. Through the use of explainable machine learning techniques, we were able to identify the contribution of each team-level performance indicator to the match score for all teams as a whole and for each team individually. The results provided valuable insights into which performance indicators had the greatest impact on the outcome of a match. This information can be used by coaches and sports analysts to make targeted interventions and adjustments to improve the performance of teams. It is important to note that the results of this study are based on data from one season and are not able to predict individual match scores, which are limitations that should be considered when interpreting the findings. Despite this, the study provides a useful framework for understanding the key factors that contribute to a team's performance and can be applied to future research using data from multiple seasons.

Table A1. Full list of variables used in our analysis.
Sum_long_passes: Passes with a length of at least 40 m, regardless of the area from which they were made
Pass_long_def_3rd: Passes made in the defensive third that were at least 40 m long
Pass_long_mid_3rd: Passes made in the midfield third that were at least 40 m long
Pass_long_att_3rd: Passes made in the attacking third that were at least 40 m long
RATIO_long_passes_PER_passes: Passes with a length of at least 40 m / total number of passes
Defensive_challenges: Duels involving the players of the defending team
Def_challenges_def_3rd: Duels involving the players of the defending team and taking place in the defensive third of that team
Def_challenges_mid_3rd: Duels involving the players of the defending team and taking place in the midfield third of that team
Def_challenges_att_3rd: Duels involving the players of the defending team and taking place in the attacking third of that team
Air_challenges: Duels in which the ball is above shoulder height and players try to play it with their heads
Air_challenges_won: Successful air challenges
Air_challenges_missed: Unsuccessful air challenges
Air_challenges_won__percent: Air challenges won / air challenges (%)
Air_challenges_def_3rd: Air challenges in the team's defensive third
Air_challenges_mid_3rd: Air challenges in the team's midfield third
Air_challenges_att_3rd: Air challenges in the team's attacking third
Challenges: Total number of duels
RATIO_def_challenges_def_3rd_PER_defensive_challenges: Duels involving the players of the defending team and taking place in the defensive third of that team / total duels involving the players of the defending team
RATIO_def_challenges_mid_3rd_PER_defensive_challenges: Duels involving the players of the defending team and taking place in the midfield third of that team / total duels involving the players of the defending team
RATIO_def_challenges_att_3rd_PER_defensive_challenges: Duels involving the players of the defending team and taking place in the attacking third of that team / total duels involving the players of the defending team
RATIO_def_challenges_att_3rd__def_chall_mid_3rd_PER_defensive_challenges: Duels involving the players of the defending team and taking place in the midfield and attacking third of that team / total duels involving the players of the defending team
DIFFERENCE_air_challenges_att_3rd_MINUS_air_challenges_def_3rd: Air challenges in the team's attacking third minus air challenges in the team's defensive third
RATIO_air_challenges_att_3rd___air_challenges_def_3rd_PER_air_challenges: (Air challenges in the team's attacking third minus air challenges in the team's defensive third) / total air challenges
Chances: A goal-scoring opportunity
Missed_chances: A goal-scoring opportunity which did not result in a goal
Fouls: An action that is not compatible with the rules of the game and is used to stop the progress of the opponent's attack
Yellow_cards: An illegal action punishable by a yellow card from the referee
(variable name missing): Average passes per minute of possession
AVERAGE_passes_PER_ball_possession: Average passes per possession
Ball_possessions__quantity: The number of ball possessions
Average_duration_of_ball_possession_sec: The average duration of each ball possession
Sum_duration_with_ball_possession: The total duration of possession for a team
Ball_possession__percent: The percentage of ball possession for a team
Opponent_s_ball_possession_percent: The percentage of ball possession for the opposing team
Edge spin transport in the disordered two-dimensional topological insulator WTe$_2$
The spin conductance of two-dimensional topological insulators (2D TIs) is not expected to be quantized in the presence of perturbations that break the spin-rotational symmetry. However, the deviation from the pristine-limit quantization has yet to be studied in detail. In this paper, we define the spin current operator for the helical edge modes of a 2D TI and introduce a four-terminal setup to measure spin conductances. Using the developed formalism, we consider the effects of disorder terms that break spin-rotational symmetry or give rise to edge-to-edge coupling. We identify a key role played by spin torque in an out-of-equilibrium edge. We then utilize a tight-binding model of topological monolayer WTe$_2$ and scattering matrix formalism to numerically study spin transport in a four-terminal 2D TI device. In particular, we calculate the spin conductances and characteristic spin decay length in the presence of magnetic disorder. In addition, we study the effects of inter-edge scattering in a quantum point contact geometry. We find that the spin Hall conductance is surprisingly robust to spin symmetry-breaking perturbations, as long as time-reversal symmetry is preserved and inter-edge scattering is weak.
Electrical control of spins is one of the central objectives in the field of spintronics [1]. Topological insulators (TIs) are materials with strong spin-orbit coupling and host spin-momentum locked gapless modes confined to the boundary of an insulating bulk [2,3]. These helical boundary modes offer new possibilities to generate spin polarization and spin currents with electrical means [4][5][6]. So far, most studies of topological insulators from a spintronics point of view have focused on 3D TIs [7][8][9][10], whose 2D surface hosts a massless helical Dirac fermion. (This surface is somewhat similar to graphene, which hosts two Dirac cones and has also been subject to extensive spintronics research [11,12].) However, impurity scattering limits the potential of using the 3D TI surface states for spintronics. Even though direct backscattering k → −k of the Dirac electrons is forbidden by time-reversal symmetry (since k and −k are oppositely spin-polarized), scattering by any other angle is allowed, which leads to the loss of momentum and spin conservation at a scale set by the elastic mean free path [4]. By the same token, current-induced spin accumulation is similarly limited by the mean free path [13].
Impurity scattering is much more restricted in 2D TIs whose boundary modes are confined to 1D. These helical modes have only two momentum directions, left and right, and time-reversal symmetry (TRS) forbids elastic backscattering between the two. The modes therefore remain ballistic (and retain their spin) at distances below the inelastic mean free path [14-21]. Likewise, current-induced out-of-equilibrium spin polarization of a 2D TI edge is not limited by elastic non-magnetic impurity scattering. Indeed, a bias voltage $V$ (or charge current $e^2 V/h$) leads to a spin accumulation per density $S_z/n = \hbar\, eV/(4E_F)$ on a 2D TI edge, independent of scalar disorder (the opposite edge would have the opposite spin polarization). Here we denote $z$ the spin quantization axis at the Fermi level, assuming it does not vary on the scale $eV$.
Spin transport on the one-dimensional edge states of a 2D TI was first considered in Refs. [22,23] where the spin Hall conductance was calculated in the ideal case with the conservation of spin-z projection. In this case, the spin Hall conductance is found to be quantized to e/(4π). Upon breaking the spin conservation, the spin Hall conductance is generally finite but not expected to be quantized [24][25][26].
In this paper we formulate the low-energy scattering theory of spin transport in 2D TI edge states and use numerical simulations to go beyond the effective model. Focusing on the recently discovered monolayer WTe$_2$ topological insulator [36,37,46-50] as an example, we carry out an extensive numerical study of disorder effects on spin transport. We consider both spin-conserving and explicitly spin-symmetry-breaking terms such as random scalar on-site disorder, spin-non-conserving disorder in the spin-orbit coupling strength, TRS-breaking magnetic impurities, as well as inter-edge scattering in a quantum point contact geometry.
Our analytical theory clarifies how the spin conductance quantization gets broken by spin non-conserving perturbations. We identify a crucial role played by local equilibrium or non-equilibrium on the TI edge. Namely, the non-conservation of edge spin current (and a resulting non-quantized spin conductance) arises from a spin torque generated by the spin non-conserving disorder. As we will show, the spin torque vanishes if the edge is in local equilibrium, and is generally non-zero when the edge is out of equilibrium (and can have a non-zero S z ). As a result, when using a 4-terminal measurement of the spin conductances, the bias configuration is of key importance: when the edge has no voltage drop, it can carry a conserved spin current, see Figs. 1-2 and Table I.
The outline of our paper is as follows. We first introduce an effective 1D model for the helical edge modes (Sec. I). We derive the spin current operator and discuss how intra-and inter-edge backscattering perturbations modify the average spin current. In Sec. II, we introduce the spin-resolved Landauer-Büttiker formula to define the spin conductances for a multiterminal setup. In Sec. III, we present our numerical simulations for spin transport in disordered multiterminal systems and in Sec. IV we draw our conclusions.
I. EFFECTIVE DESCRIPTION OF EDGE SPIN TRANSPORT
In this section we develop a low-energy effective Hamiltonian which describes the propagation of the helical edge states in a 2D TI. We then utilize this model to study the effects of localized magnetic disorder and inter-edge scattering on the spin transport properties of the material.
The characteristic feature of a 2D TI is the presence of a pair of helical edge modes and a gapped bulk. On a given edge and at a fixed energy, the helical modes have opposite spin polarizations and velocities. At low energies, we can approximate the edge spectrum by a linear dispersion and ignore any momentum-space spin rotation [31]. Denoting $z$ the spin quantization axis of the TI, we obtain the 1D effective Hamiltonian of a single edge,

$H_0 = \int dx\, \Psi^\dagger(x)\left(-i\hbar v\,\sigma_z \partial_x - \mu\right)\Psi(x)$, (1)

where $v$ is the velocity of the edge modes, $\mu$ is the chemical potential, $\sigma_i$ denotes the spin Pauli matrices, and $\Psi(x) = (\psi_\uparrow, \psi_\downarrow)^T$ is the electron field operator. While the effective Hamiltonian (1) does not have full spin-rotational symmetry, it does have a U(1) spin-rotational symmetry about the $z$-axis; we can therefore define a conserved spin current along this axis. Starting from the spin density $S_z(x) = \frac{\hbar}{2}\Psi^\dagger(x)\sigma_z\Psi(x)$, we obtain the spin-$z$ current operator by using the continuity equation [51] [52,53]:

$\partial_t S_z(x) + \partial_x I_{s_z}(x) = 0$. (2)

The time derivative in Eq. (2) can be evaluated using the Heisenberg equation of motion, $\partial_t S_z(x) = \frac{i}{\hbar}[H_0, S_z(x)]$. The commutator can then be expressed in terms of the gradient of the density operator $\rho(x) = \Psi^\dagger(x)\Psi(x)$. Remarkably, the spin current along the conserved axis is thus tied to the local density:

$I_{s_z}(x) = \frac{\hbar v}{2}\,\rho(x)$. (3)

This simple result is a direct consequence of spin-momentum locking: left- and right-moving electrons carry equal spin currents since they have opposite velocities and spin projections [54]. This is in contrast to conduction by spin-degenerate states that are not spin-momentum locked and carry no net spin current. Importantly, we note that any local perturbation which does not break the U(1) spin symmetry of Eq. (1) will not modify the spin current. We will see below that the spin current is indeed robust against such perturbations. One might expect even greater robustness of the spin current since $I_{s_z}$, Eq. (3), commutes with any particle-number-conserving operator. This robustness is manifest in the quantization of the spin Hall conductance of a two-edge system, as long as inter-edge scattering (which breaks the conservation of particle number on a given edge) is absent and each edge is at a local equilibrium, see Fig. 1a. However, random spin-orbit coupling or magnetic disorder terms $\delta H$ in the Hamiltonian can break the $S_z$ conservation, leading to a spin-torque term on the right-hand side of Eq. (2),

$\partial_t S_z(x) + \partial_x I_{s_z}(x) = \mathcal{T}(x), \qquad \mathcal{T}(x) = \frac{i}{\hbar}[\delta H, S_z(x)]$. (4)

In general, this spin torque breaks the conservation of the spin current defined by Eq. (3) [55]. We will see that in an out-of-equilibrium situation the spin torque can be on average non-zero and lead to a deviation of the spin conductance from the quantized value, see Fig. 1b.
To study the effect of $S_z$-non-conserving magnetic perturbations, we begin by adding a spatially-dependent disorder term to Eq. (1):

$H = H_0 + \int dx\, m(x)\,\Psi^\dagger(x)\sigma_x\Psi(x)$. (5)

The $\sigma_x$ operator in Eq. (5) breaks time-reversal (TR) symmetry and the U(1) spin symmetry, coupling right- and left-movers and resulting in spin-flipping reflections. We will assume that $m(x)$ is non-zero only in the region between $0$ and $x_0$ so that we may treat the system as a scattering problem.
In the presence of the magnetic disorder, the spin torque term, Eq. (4), is non-zero. Thus, the spin current as defined in Eq. (3) is no longer conserved in the disordered region. This leads to a discontinuity in the current due to the perturbation:

$I_{s_z}(x_0) - I_{s_z}(0) = \int_0^{x_0} dx\, \mathcal{T}(x)$. (6)

This discontinuity can be evaluated explicitly by using the scattering matrix to calculate the spin current in the left and right regions due to, say, an incident right-mover with unit amplitude. The transmission and reflection coefficients $t$ and $r$ corresponding to Eq. (5) are given by (see Appendix A)

$t = \frac{1}{\cosh\eta_m}$, (7)

$r = -i\tanh\eta_m$, (8)

where $\eta_m = \int_0^{x_0} m(x)\,dx/(\hbar v)$ and we neglect the energy dependence of the scattering amplitudes (assuming scattering states near the Dirac point). We can then use the scattering matrix $S$ to relate the coefficients of the incoming modes $\Psi_{\rm in}$ to the outgoing modes $\Psi_{\rm out}$ by $\Psi_{\rm out} = S\Psi_{\rm in}$, where

$S = \begin{pmatrix} r & t \\ t & r \end{pmatrix}$. (9)
For our incident right-mover of unit amplitude, the spin currents in the left ($x < 0$) and right ($x > x_0$) regions are related to the transmission and reflection coefficients by

$I_{s_z}(0) = \frac{\hbar v}{2}\left(1 + |r|^2\right)$, (10)

$I_{s_z}(x_0) = \frac{\hbar v}{2}\,|t|^2$. (11)

We see that the jump, or loss, in the spin current is then $I_{s_z}(0) - I_{s_z}(x_0) = \hbar v\,|r|^2$. We note that for large $\eta_m$, the "transmitted" spin current $I_{s_z}(x_0)$ becomes exponentially small, i.e.

$I_{s_z}(x_0) \simeq 2\hbar v\, e^{-x_0/l_0}$, (12)

where $l_0 = x_0/(2\eta_m)$ is a characteristic spin decay length. The transmitted spin current therefore decreases in the same way that transmitted charge current (and conductance) would. The analysis leading to Eqs. (10)-(11) applied to an incident left-mover from the right shows spin currents with the values of $I_{s_z}(0)$ and $I_{s_z}(x_0)$ interchanged, i.e., a spin current $I_{s_z}(0)$, Eq. (10), on the right of the barrier. Hence, in general spin-flipping reflections lead to an increase in the spin current on the incident side and a decrease of equal magnitude on the transmitted side. In particular, when edge modes are incident with the same amplitude from both sides, the spin current per unit momentum is equal on both sides of the barrier, $I_{s_z}(0) = I_{s_z}(x_0) = \hbar v$, independent of the strength of spin-flip scattering. In this case the spin torque, Eq. (6), vanishes; the magnetic impurities experience no spin torque in equilibrium [56]. This is a key observation that leads to the robustness of the spin Hall conductance in a four-terminal system when the edge is in local equilibrium, as will be discussed below.
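As a quick numerical sanity check of the scattering forms reconstructed above, the short script below evaluates Eqs. (7)-(11) over a range of barrier strengths; it is a sketch under the assumption $t = 1/\cosh\eta_m$, $r = -i\tanh\eta_m$, with spin currents quoted in units of $\hbar v/2$.

```python
# Sanity check of the magnetic-barrier scattering relations (sketch):
# unitarity |t|^2 + |r|^2 = 1, the spin-current jump 2|r|^2, and the
# large-eta exponential decay of the transmitted spin current.
import numpy as np

eta = np.linspace(0.0, 4.0, 401)       # dimensionless barrier strength eta_m
T = 1.0 / np.cosh(eta) ** 2            # |t|^2
R = np.tanh(eta) ** 2                  # |r|^2

I_in = 1.0 + R                         # I_sz(0), units of hbar*v/2, Eq. (10)
I_out = T                              # I_sz(x0), Eq. (11)

assert np.allclose(T + R, 1.0)                     # unitarity of the S-matrix
assert np.allclose(I_in - I_out, 2.0 * R)          # jump = hbar*v*|r|^2

# For large eta_m: I_out ~ 4 exp(-2 eta_m) = 4 exp(-x0/l0), l0 = x0/(2 eta_m)
print(I_out[-1], 4.0 * np.exp(-2.0 * eta[-1]))     # nearly identical values
```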
Above, we evaluated the spin current carried by a single scattering state on a helical edge. The thermally averaged spin current for a single edge [obtained by averaging Eq. (3)] is not mathematically well-defined (without a UV cutoff) nor physical. In an actual two-terminal device, there are two edges carrying opposite spin currents, which ensures that the total spin current vanishes at equilibrium. The single-edge Hamiltonian of Eq. (1) can be extended to include both edges of a 2D TI ribbon by introducing another set of Pauli matrices $\tau_i$ that act on the edge degree of freedom. The effective Hamiltonian of two uncoupled edges at the same chemical potential $\mu$ is given by

$H_2 = \int dx\, \tilde\Psi^\dagger(x)\left(-i\hbar v\,\tau_z\sigma_z\partial_x - \mu\right)\tilde\Psi(x)$, (13)

where $\tilde\Psi = (\Psi_1, \Psi_2)^T$ denotes the two-edge field operator and $\Psi_i = (\psi_{i,\uparrow}, \psi_{i,\downarrow})$. The matrix $\tau_z$ in the kinetic energy term ensures that the two edges carry edge modes with opposite helicities. Generalizing Eq. (3) to the two-edge system, we obtain the spin current operator

$I_{s_z}(x) = \frac{\hbar v}{2}\left[\rho_1(x) - \rho_2(x)\right]$, (14)

which consists of counter-propagating spin currents on the two edges 1 and 2.

A spin Hall current can be driven if the two edges of the ribbon are held at different, constant chemical potentials. This can be modeled by setting $\mu \to \mu + \tau_z eV/2$ in Eq. (13). Such an inter-edge bias can be achieved, for example, by using four terminals (see Fig. 1a and Sec. II). Since each edge is at a constant potential, each edge carries a spin current $\pm\hbar v$ per momentum, as detailed above. Taking the thermal average of the total spin current in the low-temperature limit gives
A spin Hall current can be driven if the two edges of the ribbon are held at different, constant chemical potentials. This can be modeled by setting µ → µ + τ z eV /2 in Eq. (13). Such an inter-edge bias can be achieved, for example, by using four terminals (see Fig. 1a and Sec. II). Since each edge is at a constant potential, each edge carries a spin current ± v per momentum, as detailed above. Taking the thermal average of the total spin current in the low-temperature limit gives Only perturbations which cause bulk conduction or couple the top and bottom edges will cause a deviation from the quantized conductance value. In the absence of such perturbations, the spin current is conserved since each edge is at a local equilibrium and spin torque vanishes. b) Voltage setup producing non-quantized spin conductances when Sz non-conserving disorder is present. Due to the non-equilibrium distribution on each edge, there is a non-zero spin torque which breaks the conservation of spin current, I s z,L = I s z,R . The lack of spin current conservation requires the definition of separate incident and transmitted spin conductances given by G s Here, voltage distribution of each edge is the same, resulting in no net horizontal spin current.
$\langle I_{s_z}\rangle = \frac{\hbar v}{2}\,\nu_0 \int dE\,\left[f(E - eV/2) - f(E + eV/2)\right] = \frac{e}{2\pi}V$, (15)

where $f$ is the Fermi function and $\nu_0 = 1/(\pi\hbar v)$ is the edge density of states per length. In this setup with a transverse voltage, we define the spin Hall conductance as $G^s_H = \langle I_{s_z}\rangle/V$. Since each edge is at a constant potential (Fig. 1a), the spin Hall conductance is quantized, $G^s_H = e/(2\pi)$, even in the presence of spin-non-conserving perturbations. This quantization can be traced back to the fact that the spin current operator is determined by the local electron density, which does not change upon intra-edge backscattering at equilibrium.
While the spin Hall conductance is robust against intra-edge backscattering, perturbations that couple modes on separate edges (inter-edge scattering) may result in reflections without a corresponding spin flip. The transfer of charge between the two edges changes the spin current, Eq. (14). Hence, such perturbations will lead to a decrease in the spin Hall conductance. To demonstrate this, we add an inter-edge scattering term to the two-edge Hamiltonian,

$H' = H_2 + \int dx\, \gamma(x)\,\tilde\Psi^\dagger(x)\tau_x\tilde\Psi(x)$. (16)

This perturbation conserves $S_z$ and therefore does not give rise to spin torque. Nevertheless, since it does not conserve the number of particles on a given edge, it will lead to a non-quantized spin conductance. As before, in order to define a scattering problem, we will assume that $\gamma(x)$ is non-zero only in the interval $0 < x < x_0$. Since there are four edge modes in the two-edge system, we can promote $r$ and $t$ in the scattering matrix $S$ in Eq. (9) to $2\times 2$ matrices. In this case, $r_{ij}$ ($t_{ij}$) denote the amplitude of an incoming state from edge $j$ reflecting (transmitting) into an outgoing state on edge $i$. The non-zero components of $r$ and $t$ are

$t_{11} = t_{22} = \frac{1}{\cosh\eta_\gamma}, \qquad r_{12} = r_{21} = -i\tanh\eta_\gamma$,

where $\eta_\gamma = \int_0^{x_0}\gamma(x)\,dx/(\hbar v)$. The other components, meanwhile, vanish due to the lack of a term coupling states of opposite spin. Noting that the reflected edge modes now carry an opposing spin current to the incident and transmitted modes, we find that

$I_{s_z}(0) = I_{s_z}(x_0) = \frac{\hbar v}{2}\left(1 - |r_{21}|^2\right) = \frac{\hbar v}{2}\,\frac{1}{\cosh^2\eta_\gamma}$.

Hence, unlike intra-edge spin-flip perturbations, inter-edge tunneling without a spin flip conserves the spin current but results in a decrease of its value. As a result, in the spin Hall setup, Fig. 1a, the spin Hall conductance $G^s_H$ is not robust against inter-edge scattering. As was mentioned above, this result could be expected from the fact that the spin current couples to the difference of the density operators between the two edges, Eq. (14), and the inter-edge scattering does not conserve this difference.
When an edge is not at constant potential but has a potential drop $V$ along it (left-right bias), the spin current can have a jump in the presence of spin-flip perturbations, as is illustrated by Eqs. (10)-(11). This jump can be thought of as resulting from a non-zero spin torque, Eq. (6), in the non-equilibrium setup. Due to this jump, one must define separate spin conductances, which we call incident ($G^s_I = \langle I_{s_z}(0)\rangle/V$) and transmitted ($G^s_T = \langle I_{s_z}(x_0)\rangle/V$), for current flowing on either side of the disordered region (see Fig. 1b). Even without inter-edge scattering, these conductances are not quantized in the presence of magnetic disorder (unlike $G^s_H$); their sum, however, is robust since $G^s_I + G^s_T = G^s_H$, see Eq. (30) below. Finally, we note that when there is a voltage drop $V$ across both edges and no top-bottom voltage, we expect no net spin current (see Fig. 1c). This case is the conventional two-terminal charge transport setup, and we define the corresponding two-terminal charge conductance $G^c_{2T}$ as a reference.
The above results that were derived for the simple models of Eq. (5) and Eq. (16) illustrate the generic behavior of the spin conductances. We corroborate the findings by our numerical transport simulation discussed in Sec. III, where we simulate magnetic disorder as well as a quantum point contact (QPC) system to couple the edges (see Fig. 9). Before that, we introduce spin conductances defined in a four-terminal setup, Sec. II.
II. MULTITERMINAL TRANSPORT
We now move from the two-terminal case to a multiterminal system. While a two-terminal TI system requires the use of a proximitizing ferromagnetic heterostructure to drive a net spin current [57], a spin Hall current can be driven purely electrically in a multiterminal setup. In this section we therefore give the relevant expressions for the currents and conductances necessary to study multiterminal charge and spin transport.
Consider a general n-terminal system with metallic leads attached. The full scattering matrix S of such a system relates the coefficients of the incoming modes Ψ in to the outgoing modes Ψ out by Ψ out = SΨ in . In particular, the ij-th block S ij is the scattering matrix for modes scattering from terminal j to i. Furthermore, in the case that the leads share a spin-rotational symmetry along a given axis, we may choose a new eigenbasis which conserves this symmetry. In this basis, the scattering matrix takes the form S iσ,jσ , where the σ indices denote the spins of the incoming and outgoing modes.
The Landauer-Büttiker formula provides the charge current passing through a lead in the low-temperature limit in terms of the voltages applied to the leads and the transmission coefficients $T_{ij}$ (from terminal $i$ to $j$):

$I_i = \frac{e^2}{h}\sum_j \left(T_{ij}V_i - T_{ji}V_j\right)$. (21)

[Table I caption (fragment): ... see Eqs. (25)-(26) for the matrix elements. The additional negative sign for the incident conductance ensures that positive current is defined to move to the right. Fig. 1 depicts the biasing setups (except for $G^s_D$).]
In the case of spin-rotational-symmetric leads, Eq. (21) may easily be generalized to give the spin-resolved current in a lead by considering each lead spin channel as a separate terminal:

$I^r_{i\sigma} = \frac{e^2}{h}\sum_{j\sigma'}\left(T_{i\sigma,j\sigma'}V_i - T_{j\sigma',i\sigma}V_j\right)$, (22)

where the spin-resolved current $I^r_{i\sigma}$ is the outgoing current in lead $i$ due to electrons of spin $\sigma$. The charge and spin currents in each lead can then be related to these spin-resolved currents by

$I^c_i = I^r_{i\uparrow} + I^r_{i\downarrow}$, (23)

$I^s_i = \frac{\hbar}{2e}\left(I^r_{i\uparrow} - I^r_{i\downarrow}\right)$. (24)

The above equations also suggest that spin current can be measured by using two ferromagnetic terminals fully polarized along the $z$ and $-z$ axes. The net current into each terminal will be effectively spin-resolved and their difference gives the net spin current. In Fig. 2b, we envision using this technique to measure the spin current into each terminal [58].
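The bookkeeping in Eqs. (23)-(26) is mechanical, so a small helper makes the conventions explicit. The sketch below assumes a spin-resolved conductance matrix with rows ordered as (lead, spin); the row ordering and the example numbers are ours, chosen for illustration.

```python
# Sketch of Eqs. (23)-(26): collapse a spin-resolved conductance matrix
# G_r (shape 2n x n, row 2*i + s for lead i and spin s) into the n x n
# charge and spin conductance matrices. Units hbar = e = 1 are assumed.
import numpy as np

def charge_and_spin_conductance(G_r, hbar=1.0, e=1.0):
    G_up, G_down = G_r[0::2, :], G_r[1::2, :]
    G_c = G_up + G_down                        # charge response, Eq. (25)
    G_s = (hbar / (2 * e)) * (G_up - G_down)   # spin response, Eq. (26)
    return G_c, G_s

# Hypothetical two-terminal example: one fully up-polarized channel per lead.
G_r = np.array([[ 1.0, -1.0],
                [ 0.0,  0.0],
                [ 0.0,  0.0],
                [ 1.0, -1.0]])
G_c, G_s = charge_and_spin_conductance(G_r)
print(G_c)
print(G_s)
```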
In the scattering formalism, the conductance $G$ of an $n$-terminal system is the $n \times n$ matrix relating the currents in the leads to the applied voltages. Assuming the leads share the same spin-rotational symmetry as the TI in the pristine limit, we define the $2n \times n$ spin-resolved conductance matrix $G^r$ by the spin-resolved current response $I^r_{i\sigma}$ to a small voltage $V_j$ (setting all other voltages to zero): $G^r_{i\sigma,j} = I^r_{i\sigma}/V_j$. From this we then define the $n \times n$ charge and spin conductance matrices $G^{c/s}$ by

$G^c_{ij} = G^r_{i\uparrow,j} + G^r_{i\downarrow,j}$, (25)

$G^s_{ij} = \frac{\hbar}{2e}\left(G^r_{i\uparrow,j} - G^r_{i\downarrow,j}\right)$. (26)

By inverting the conductance matrices, one could also quantify the inverse Hall effect and the inverse spin Hall effect, where a voltage is generated by a charge or spin current, respectively. While the conductance matrices in Eqs. (25)-(26) provide the current response resulting from any voltage configuration, it is more illuminating to define conductance values for specific voltage setups such as those depicted in Fig. 1. In Table I we define several such conductance values for the four-terminal device depicted in Fig. 2a: the standard two-terminal charge conductance $G^c_{2T}$ due to a horizontal potential bias, the incident and transmitted spin conductances $G^s_{I/T}$ due to a vertical bias on a single side, the spin Hall conductance $G^s_H$ due to a vertical bias on both sides, and the diagonal spin Hall conductance $G^s_D$ due to a diagonal bias (this was considered in Ref. [22]). We note that in the case of $G^s_D$ there is a potential drop on every edge. This leads to $G^s_D$ being less robustly quantized than $G^s_H$, see Sec. III C. It is important to recognize that the spin conductances defined in Table I are defined with regard to the spin currents passing through the leads. In a multiterminal system with spin-non-conserving disorder this is not the same as the spin currents passing through a cross-section of the TI sample. In Fig. 2c we demonstrate this difference in the case of the spin Hall current and conductance. The net spin current into leads 3 and 4 on the right has two components: the spin Hall current from the left leads, $I^s_H$, and the extra spin current between leads 3 and 4, $\delta I^s_H$, generated by the spin torque from spin-non-conserving disorder, see Eq. (6). In terms of these, the spin Hall conductance is $G^s_H = (I^s_H + \delta I^s_H)/V$. In general, $G^s_H$ is not equal to the conductance corresponding to just the spin Hall current passing through the sample, $\tilde G^s_H = I^s_H/V$, especially when the connection between leads 3 and 4 is disordered (see Sec. III C). Importantly, only $\tilde G^s_H$ is quantized as predicted in Sec. I when the entire sample is disordered; $G^s_H$ is only quantized when the connection between leads 3 and 4 has no spin-symmetry-breaking disorder [59]. This picture is confirmed by our numerical study where we compare clean and disordered connections between leads 3 and 4, see Fig. 7. Using the definitions provided by Table I, we can derive several relations between the four-terminal conductances. In particular, we consider two special cases which will be relevant to the results in Secs. III A and III C. When the disorder does not break the spin-rotational symmetry of the TI, transmission between opposite spins is impossible: $T_{i\sigma,j\sigma'} \propto \delta_{\sigma\sigma'}$. This restriction results in the relations between the conductances given by Eqs. (27)-(28), which are valid so long as every conducting state is spin-polarized and the spin-rotational symmetry remains unbroken.
Meanwhile, if there is no inter-edge scattering, then only spin-preserving transmission and spin-flipping reflections are allowed: $T_{i\sigma,j\sigma'} \propto |\delta_{ij} - \delta_{\sigma\sigma'}|$. The resulting conductance relations are given by Eqs. (29)-(30).
III. NUMERICAL STUDIES OF DISORDERED MULTITERMINAL SYSTEMS
To numerically study the transport properties of WTe$_2$, we utilized the Kwant package [60] for Python to implement the tight-binding model introduced in Ref. [50]. Four-terminal systems were created to study the conductances in Table I. Each system is comprised of a sample in the topological phase with four leads of width $W_{\rm lead} = 12$ nm attached at the corners, as depicted in Fig. 2. We model the leads with the same WTe$_2$ tight-binding model as the sample, except with spin-orbit coupling set to zero. The Fermi level of the leads is placed within the valence band ($\mu = -400$ meV) to allow for an abundance of conducting bulk modes; the sample Fermi level, meanwhile, is placed near the center of the 56 meV wide bulk gap ($E = 0$ in Fig. 3) to ensure only edge modes are relevant in the pristine, zero-temperature limit. All plots shown utilize a horizontal straight-edge termination [61] that has a Dirac point buried within the valence band (see Fig. 3); however, we find similar results for the zigzag termination which has a Dirac point in the bulk gap. We then use Kwant to construct the scattering matrix for the system, which is used with Eqs. (22)-(26) to determine the charge and spin conductances in the zero-temperature limit [62].
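For readers unfamiliar with this workflow, the sketch below shows the skeleton of such a Kwant calculation: build a tight-binding scattering region, attach translationally invariant leads carrying a spin conservation law, and read spin-resolved transmissions off the scattering matrix. The toy model here (a spinful square lattice with a generic spin-dependent hopping) is an assumption for illustration and is not the WTe$_2$ model of Ref. [50].

```python
# Minimal Kwant workflow sketch (toy model, not the WTe2 Hamiltonian):
# scattering region + two leads with a spin conservation law, then
# spin-resolved transmissions from the scattering matrix.
import kwant
import numpy as np

s0 = np.identity(2)
sz = np.diag([1.0, -1.0])

lat = kwant.lattice.square(a=1, norbs=2)
syst = kwant.Builder()
rng = np.random.default_rng(0)
for x in range(30):
    for y in range(10):
        syst[lat(x, y)] = 0.1 * rng.normal() * s0      # weak scalar disorder
syst[lat.neighbors()] = -1.0 * s0 + 0.3j * sz          # spin-dependent hopping

lead = kwant.Builder(kwant.TranslationalSymmetry((-1, 0)),
                     conservation_law=-sz)             # defines spin blocks
for y in range(10):
    lead[lat(0, y)] = 0.0 * s0
lead[lat.neighbors()] = -1.0 * s0
syst.attach_lead(lead)
syst.attach_lead(lead.reversed())

smat = kwant.smatrix(syst.finalized(), energy=0.5)
# transmission((lead_out, block), (lead_in, block)); blocks 0 and 1 are
# the two spin projections singled out by the conservation law.
print("T_up   =", smat.transmission((1, 0), (0, 0)))
print("T_down =", smat.transmission((1, 1), (0, 1)))
```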
Unless otherwise stated, each plot represents the average of $N = 300$ disordered samples, which we find to be enough to limit most fluctuations (see Appendix C). We also attach the standard error bars for each plot (i.e., $\pm\sigma_G/\sqrt{N}$). For each plot we measure the conductances in terms of the charge and spin conductance quanta, $G^c_0 = e^2/h$ and $G^s_0 = e/(4\pi)$, respectively. In the pristine limit we find the standard [22] quantized values for the two-terminal charge conductance ($G^c_{2T} = 2e^2/h$) and spin Hall conductance ($G^s_H = e/(2\pi)$). We also find that $G^s_I = G^s_T = e/(4\pi)$ and $G^s_D = e/(4\pi)$ [22] in the pristine limit. In the following subsections we discuss the effects of on-site scalar and magnetic disorder on these results, in addition to disorder in the spin-orbit coupling parameters. We also study inter-edge scattering using a QPC system and calculate the characteristic spin decay length in the presence of magnetic disorder.
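The disorder averaging itself is a plain Monte Carlo loop; the sketch below shows the estimator and the quoted error bars $\pm\sigma_G/\sqrt{N}$, with conductance_for_sample standing in (hypothetically) for one full transport calculation at disorder strength $w$.

```python
# Disorder-averaging sketch: mean conductance over N realizations with
# the standard error sigma_G / sqrt(N) used for the error bars.
import numpy as np

def conductance_for_sample(w, rng):
    # Hypothetical stand-in for one Kwant transport calculation.
    return 2.0 + w * rng.normal(scale=0.1)

N = 300
rng = np.random.default_rng(42)
G = np.array([conductance_for_sample(0.15, rng) for _ in range(N)])
print(f"G = {G.mean():.3f} +/- {G.std(ddof=1) / np.sqrt(N):.3f}  (e^2/h)")
```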
A. Sz conserving disorder
Due to the spin-momentum locking of the edge states in a 2D TI, it is expected that any perturbation which neither breaks the spin-symmetry nor couples the edges will not affect current propagation, as long as the perturbation strength is smaller than the gap to bulk excitations. Previous studies [50,63] have demonstrated this in the context of scalar disorder and charge conductance. Here, we demonstrate that weak spin-symmetric disorder does not affect the charge and spin conductance values of our four-terminal system. We study the effects of both on-site scalar disorder as well as disorder in the SOC strength.
In Fig. 4a we add a spatially-dependent on-site potential $u(x)$ drawn from a Gaussian of mean 0 and standard deviation $w$; we then plot the dependence of the conductances defined in Table I on $w$. For small enough values of $w$ (< 200 meV), we find that the charge and spin conductances remain quantized at their expected values. This is due to the fact that scalar on-site disorder does not break the TR and spin-rotational symmetries of the TI, nor does it couple the two edges; the transmission amplitudes thus remain unaffected when the disorder is weak. At larger $w$, however, we see a decrease in the spin conductances and an increase in the charge conductance. The increasing charge conductance is attributable to the onset of bulk conduction within the disordered sample, whose size is smaller than the Anderson localization length. For weak disorder, the Fermi level of the sample remains within the bulk gap, ensuring that only the spin-momentum-locked edge states affect the low-temperature conductances. Stronger disorder, meanwhile, can shift the bands sufficiently so that they cross the Fermi level, leading to bulk conduction.
The effect of disorder in the SOC strength is similar to spin-symmetric on-site disorder. In Fig. 4b we multiply the SOC strength by a spatially dependent factor $\lambda(x)$ drawn from a Gaussian of mean 1 and standard deviation $\delta\lambda$; we then plot the conductances versus $w = \lambda_{\rm SOC}\,\delta\lambda$, where $\lambda_{\rm SOC} = 225$ meV is the sum of the SOC parameter magnitudes in the WTe$_2$ tight-binding model [50] (see Appendix B for details on the WTe$_2$ tight-binding model). Importantly, this "isotropic" modification of the SOC strength does not change the spin quantization axis; this is unlike with anisotropic SOC disorder, see Sec. III B below. Just as with spin-symmetric on-site disorder, the conductances are robust against weak spin-symmetric SOC disorder; however, this regime appears to be smaller for SOC disorder, with the conductances deviating from their quantized values for $w > 60$ meV.
The conductances are remarkably robust against weak spin-symmetric disorder. In Fig. 5 we plot the transmitted spin conductance $G^s_T$ versus sample length for $w = 150$ meV and $w = 300$ meV on-site scalar disorder. In the weak-disorder regime, the conductance remains quantized and does not appear to depend on the length up to $L = 100$ nm (not shown). Weak length dependence appears in the very strong disorder regime ($w > 200$ meV for on-site scalar disorder). These findings are to be contrasted with a diffusive conductor, where the conductance is inversely proportional to the length.

[Figure 4 caption (fragment): ... Fig. 2). Each data point represents the average of 500 samples. Conductances are measured in units of the charge and spin conductance quanta. Note that the $G^s_T$ and $G^s_D$ curves overlap over almost the full range of $w$.]
B. Time-reversal symmetric, Sz non-conserving disorder
In Sec. III A we saw that the charge and spin conductances remained quantized in the presence of weak on-site and SOC perturbations that do not break the spin-rotational symmetry of the TI. Here, we demonstrate that the conductances are not protected against SOC perturbations that break the spin-rotational symmetry, even when TR symmetry remains intact. In particular, we implement a TR-symmetric, $S_z$-non-conserving disorder term by adding a spatially-dependent $i\lambda_{0,x}(x)\sigma_x$ term to the $\lambda_0$ hopping amplitude, where $\lambda_{0,x}(x)$ is drawn from a Gaussian of mean 0 and standard deviation $w$ (see Appendix B). We demonstrate the effects of this term on the conductances in Fig. 6. As expected, SOC disorder that breaks $S_z$ conservation (Fig. 6) leads to a stronger suppression of edge spin conductances than $S_z$-conserving SOC disorder (Fig. 4b).
For disorder strengths $w < 300$ meV, the conductances slowly deviate from their quantized values. This result suggests that TR symmetry alone is not enough to ensure quantization of the spin conductances when disorder is added to the SOC hopping amplitudes; rather, it is the combination of TR symmetry and spin-rotational symmetry that leads to this quantization. Of course, this distinction is not relevant when one only considers on-site disorder terms, as in that case spin-rotational symmetry is implied by TR symmetry. At larger $w$ we see a qualitatively different dependence of conductance on disorder strength, corresponding to the onset of bulk conduction in the disordered sample. While the conductances do not remain quantized in the presence of TR-symmetric, spin-non-conserving disorder, their deviations from their quantized values appear to be much weaker than for disorder that breaks TR symmetry, see Fig. 7b in Sec. III C.
C. Magnetic disorder breaking time-reversal symmetry and Sz conservation
Unlike spin-symmetric on-site disorder and SOC disorder, unaligned magnetic disorder breaks both the TR symmetry and the spin-rotational symmetry of the TI, leading to a large deviation of the conductance from the pristine-limit quantization even before the onset of bulk conduction. To demonstrate this, we add an $m(x)\sigma_x$ on-site disorder term, where $m(x)$ is once again drawn from a Gaussian of mean 0 and standard deviation $w$. We also show how the conductances defined in the leads depend drastically on whether or not there is disorder along the left and right edges.
In Fig. 7a we demonstrate the case of magnetic disorder localized such that there is no disorder between leads of the same side ($L_{\rm trans} = 2.5$ nm in Fig. 2). We see that the spin Hall conductance $G^s_H$ maintains its quantized value until the onset of bulk conduction at about $w = 200$ meV, demonstrating the robustness predicted by Eq. (15). Meanwhile, the charge conductance $G^c_{2T}$ and the transmitted spin conductance $G^s_T$ immediately begin to decrease with $w$, while the incident spin conductance $G^s_I$ increases. These deviations are in qualitative agreement with Eqs. (10)-(11) if we make the identification $\eta_m = w^2 x_0 r_0/(\hbar^2 v^2)$, where $x_0 = L - 2L_{\rm trans}$ is the length of the disordered region and $r_0$ is the correlation length of the disorder.

[Figure 7 caption (fragment): ... Fig. 2). Conductances are measured in units of the charge and spin conductance quanta. a) A $L_{\rm trans} = 2.5$ nm wide clean transition region is added to the ends of the TI to ensure no disorder at the lead-TI interfaces. In this case the spin current entering the terminals is approximately conserved and $G^s_H$ stays quantized up to large $w \approx 200$ meV. b) No such transition region is added, $L_{\rm trans} = 0$. In this case there is a spin torque that prevents the quantization of $G^s_H$, see Fig. 2c.]

The conductances also obey the relations predicted by Eqs. (29)-(30). Similarly, the diagonal spin Hall conductance $G^s_D$ deviates from its quantized value at a much lower strength of disorder than $G^s_H$. We attribute this difference to the different biasing configurations: in measuring $G^s_D$, every edge has a voltage drop, which allows for large spin-torque contributions (see Sec. I). We also note that $G^s_D$ appears to decrease to half of its zero-disorder quantized value. This is due to the fact that in Table I for very strong disorder $G^s_{31} \to 0$ but $G^s_{34} = -1$ due to the clean connection between leads 3 and 4. Meanwhile, in Fig. 7b, we demonstrate the case of a fully disordered sample with magnetic disorder added along the edges connecting leads of the same side ($L_{\rm trans} = 0$ in Fig. 2). We see that the removal of the clean connection results in a different dependence on the disorder strength. The relations given by Eqs. (29)-(30), which only relied on the lack of bulk conduction and edge-to-edge coupling, still hold for $w < 200$ meV. However, the spin Hall conductance $G^s_H$ is apparently no longer quantized, and the deviations of $G^s_I$ and $G^s_T$ no longer agree with what is predicted by Eqs. (10)-(11). As mentioned in Sec. II, this discrepancy is due to the fact that we define the conductances in the leads, not in the sample. We expect the spin Hall conductance corresponding to the current in the sample to remain quantized even when the sample is strongly disordered.
In addition to studying how the disorder strength affects the conductances, we also study how the transmitted conductance $G^s_T$ varies with the sample length $L$. We plot the dependence of $G^s_T$ on the disorder strength and sample length, as well as a constant $w = 150$ meV slice, in Fig. 8. We find that, for constant $w$, the transmitted spin conductance decays exponentially with the sample length, i.e. $G^s_T \propto e^{-L/l_0}$, where $l_0$ is a characteristic spin decay length. For $w = 150$ meV, our fit gives $l_0 \approx 9.7$ nm, see inset of Fig. 8. This roughly agrees with an estimate of $l_0 = \hbar^2 v^2/(w^2 r_0) \approx 3.2$ nm if we use the average distance between neighboring lattice sites, $r_0 \approx 0.2$ nm, as the disorder correlation radius and $v \approx 120$ meV·nm estimated from Fig. 3.
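The decay-length extraction is a standard two-parameter exponential fit; a sketch of the procedure is shown below with synthetic placeholder data (the simulated $G^s_T(L)$ values are not reproduced here).

```python
# Sketch of the exponential fit G_sT(L) = G0 * exp(-L / l0) used to
# extract the spin decay length l0; the data points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
L = np.array([20.0, 40.0, 60.0, 80.0, 100.0])               # lengths in nm
G_T = 0.9 * np.exp(-L / 9.7) * (1.0 + 0.02 * rng.normal(size=L.size))

def decay(L, G0, l0):
    return G0 * np.exp(-L / l0)

(G0_fit, l0_fit), _ = curve_fit(decay, L, G_T, p0=(1.0, 10.0))
print(f"fitted spin decay length l0 = {l0_fit:.1f} nm")
```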
D. Quantum point contact system
As mentioned in Sec. I, inter-edge tunneling through the bulk of the TI is another mechanism by which the conductances can deviate from their quantized values. For each conductance $G$ we define the deviation $\delta G$ from the quantized value $G(w=0)$ by $\delta G = G(w=0) - G$. In a QPC system of minimum width $W_{\rm QPC}$, we expect $\delta G \propto e^{-W_{\rm QPC}/W_0}$ for $W_{\rm QPC} \gg W_0$, where $W_0$ is the effective decay length of the edge modes (not to be confused with the characteristic spin decay length $l_0$ studied in Sec. III C). To test this relation, we create a four-terminal QPC system where a rectangular sample is smoothly transitioned into a narrowed region of width $W_{\rm QPC}$ and length $L_{\rm QPC}$ (see inset of Fig. 9). We then add a scalar disorder term to extend the effective decay length $W_0$.
In Fig. 9 we plot the resulting conductance deviations against $W_{\rm QPC}$ on a logarithmic scale, along with their linear fits. Using the inverse slopes of the best-fit lines, we find that the decay length of each conductance component is roughly 13 nm. The various spin conductance deviations, including the incident and diagonal conductance deviations which we omit for clarity, have similar decay lengths. Physically, the decay length serves as an indicator of the edge-state width in the QPC geometry. We note that each conductance component decays at the same rate, as is expected from Eqs. (27)-(28), valid for a system with spin conservation [64].
IV. CONCLUSIONS
We studied the effects of disorder on spin transport in 2D TIs and established estimates for the disorder strength at which spin transport starts to be hindered. One of our main findings is that the spin current operator on the 2D TI edge is given by the local density, Eq. (14). For this reason, the spin Hall current generated by a transverse voltage is remarkably robust to even spin-non-conserving perturbations, see Eq. (15), as long as the two edges of the 2D TI are not coupled. However, measuring the spin Hall current in a four-terminal geometry is difficult due to additional spin currents that flow between the terminals at different potentials, see Fig. 2c. These spin currents are not in general conserved and hinder the measurement of a quantized spin Hall conductance. These findings are confirmed by our numerical simulations, e.g. Fig. 7. Overall, we find that the spin conductance is most sensitive to spin-non-conserving disorder such as random spin-orbit coupling (Fig. 6) or magnetic impurities (Figs. 7-8). In the former, time-reversal-symmetric case, the spin Hall conductance is nevertheless nearly quantized even at relatively large disorder strengths of the order of the bulk band gap.
In WTe$_2$, recent measurements of the spin quantization axis indicate that spin-orbit disorder is relatively weak. The canting of the edge-state spin has been measured in experiments [65,66] in agreement with theoretical models [38,45,50,67,68]. These findings indicate that the spin quantization axis, although canted, does not vary strongly in position or momentum space. This gives hope that the spin of the edge carriers can be conserved over long distances.
We focused on low temperatures, at which scattering is dominated by elastic processes. At the same time, we found that time-reversal-symmetric disorder has a weak effect on spin transport, see Secs. III A-III B. Therefore, at higher temperatures, inelastic scattering is expected to become the dominant scattering mechanism, leading to temperature-dependent corrections to the spin conductances. Finite-temperature and interaction effects on spin transport constitute an interesting future direction (see also Refs. [69-71] for quantum point contacts). Other intriguing future directions would be to study the details of the tunnel coupling between a TI edge and a ferromagnetic contact [72-75] or the effects of electric fields in relatively clean systems and investigate the potential to control spin polarization electrically [76].

ACKNOWLEDGMENTS

We thank Yuli Lyanda-Geller, Pramey Upadhyaya, and Igor Žutić for valuable discussions. J.C. would like to thankfully acknowledge the Office of Undergraduate Research at Purdue University for financial support. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center.

Appendix B: Tight-binding model and disorder terms (fragment)

[...] standard deviation $w$. We also study spin-conserving disorder in the SOC strength by having $\lambda(\mathbf{r})$ be drawn from a Gaussian of mean 1 and standard deviation $\delta\lambda$. In Sec. III B we break spin-rotational symmetry (while preserving TR symmetry) by adding a disorder term $\delta H(\mathbf{r}) = i\lambda^x_0(\mathbf{r})\, c^\dagger_{\mathbf{r}}\, s_x \left(\Gamma^+_4 c_{\mathbf{r}-\mathbf{b}+\Delta_4} - \Gamma^-_4 c_{\mathbf{r}+\mathbf{b}-\Delta_4}\right) + {\rm H.c.}$, with $\lambda^x_0(\mathbf{r})$ drawn from a Gaussian of mean 0 and standard deviation $w$. Finally, in Sec. III C we break both TR and spin-rotational symmetry by including an on-site perturbation $\delta H(\mathbf{r}) = m(\mathbf{r})\, c^\dagger_{\mathbf{r}}\, s_x \Gamma_0 c_{\mathbf{r}}$, with $m(\mathbf{r})$ once again drawn from a Gaussian of mean 0 and standard deviation $w$.
Finally, we comment on the edge-state spin quantization axis of pristine WTe$_2$ obtained from Eqs. (B9)-(B10); let us denote this axis $z'$ in this section. Noting the lack of an $s_x$ term in Eq. (B10), it is clear that the spin quantization axis $z'$ lies in the $yz$-plane. Numerically, we find $z' \approx z\cos\theta + y\sin\theta$ with $\theta \approx 76.7°$, and we measure the spin current using Eq. (21) along this axis. Furthermore, as detailed in the preceding paragraph, the spin-symmetry-breaking disorder terms we consider are $x$-polarized and thus perpendicular to both $z$ and $z'$, ensuring that these perturbations fully break the spin-rotational symmetry. In the main text, including Eq. (1), we drop the prime from $z'$ and simply denote the spin quantization axis $z$.

[Figure 11 caption: Disorder-averaged conductance components versus the number of samples used at fixed $w$. The disorder term is a localized magnetic perturbation as discussed in Sec. III C and shown in Fig. 7b.]
Appendix C: Convergence of disorder-averaged conductance

Here we confirm the convergence of the disorder-averaged conductance components in the presence of magnetic disorder. To do this, we have extended our calculations for Fig. 7b to include 1000 samples (in comparison to the 300 samples used in the plot). We display the results of these calculations in Figs. 10-11. In Fig. 10 we plot the difference in the conductance values averaged over 300 and 1000 samples, normalized by their corresponding conductance quanta $G_0$ ($e^2/h$ for charge conductance and $e/(4\pi)$ for spin conductance). The difference between these averages is less than $\pm 0.03\, G_0$ for each component, which is small enough for our purposes. Meanwhile, in Fig. 11, we plot the average conductance values versus the number of samples for fixed values of the disorder strength $w$. We note that the averages appear to converge to their long-run values after a few hundred samples, with most of the fluctuations occurring well before 300 samples (marked by a dashed line).
HOW DOES INTERNATIONAL TRADE REGULATION ADDRESS EXCHANGE RATE MEASURES?
This study seeks to analyze the international trade regulatory framework regarding exchange rate measures that bear an impact on trade. The present article will explore how the exchange rate issue relates to the WTO and affects its instruments and principles and, in the following, will look for provisions under the WTO agreements that could address the exchange rate issue and rebalance the impacts caused by misaligned currencies.
INTRODUCTION
The issue of exchange rates has been historically considered as a matter of the International Monetary Fund (IMF). When the Bretton Woods system was created, it was decided that the IMF would be responsible for the supervision of exchange rates and balances of payments and the General Agreement on Tariffs and Trade (GATT) would regulate international trade. At that time, since the IMF maintained strict control over countries' currencies, based on a dollar/gold standard, the contracting parties to the GATT did not worry about incorporating the issue into the Agreement, drafting only a few provisions on the subject, such as GATT Article XV.
Even when a flexible exchange rate system was adopted in the 1970s, the issue remained neglected by the GATT and, in the following years, by the World Trade Organization (WTO). Nevertheless, exchange rate misalignments can cause significant impacts on international trade instruments, creating incentives for exports and representing barriers to imports (THORSTENSEN et al., 2012).
Since the 1970s, the problem has been addressed through negotiations between the major economic leaders, the US and Europe, and the countries whose currencies were affecting trade. The Plaza Agreement, for instance, was negotiated in 1985 by the US, UK, Germany, France and Japan and aimed to address the overvaluation of the Dollar and the devaluation of the Yen. With the increasing participation of developing countries in the international arena, though, this kind of "agreement amongst a few" became increasingly harder to achieve.
The accession of China to the WTO in 2001, its rise as the leading world exporter and its policy of pegging the renminbi to the dollar brought the effects of exchange rate misalignments on trade to the attention of WTO members once again. After the financial crisis of 2008, some countries, such as the US, the EU and Japan, enhanced the use of expansionist monetary policies in order to stimulate their economies, causing an escalation of exchange rate misalignments, with serious impacts on international trade. This issue led academics and public agents to consider whether those misalignments could be questioned under WTO rules.
The concern that persistent exchange rate misalignments could be creating trade distortions was finally raised by Brazil at the WTO in April 2011, when it presented a submission to the Working Group on Trade, Debt and Finance (WGTDF) suggesting academic research on the relationship between exchange rates and international trade (WT/WGTDF/W/53). On September 20th, 2011, Brazil presented to the same Working Group a second proposal on the theme, suggesting the examination of available tools and trade remedies in the Multilateral System that might allow countries to redress the effects of exchange rate misalignments (WT/WGTDF/W/56). In March 2012, a seminar on exchange rates took place at the WTO. The conclusions of this seminar were that exchange rate misalignments can affect trade and that the discussion should continue among WTO and IMF members. Finally, a third document was submitted by Brazil in November 2012 (WT/WGTDF/W/68), bringing the discussion of the effect of exchange rate misalignments on trade instruments, as well as the possibility of exploring existing WTO rules to address such effects.
These were important steps in the discussions, but deeper debates are still required in order to determine which WTO rules are affected by misaligned currencies and which mechanisms can be used to address their effects.
The present article is part of these debates, exploring the relationship between the exchange rate issue and the regulatory framework of international trade. In this sense, the first section of this article will discuss the current perception that the subject of exchange rates should be dealt with solely at the International Monetary Fund (IMF) and will then analyze how the exchange rate issue relates to the WTO and affects its instruments and principles. The second section will then analyze the provisions of the WTO agreements and look for possible regulatory linkages between the Multilateral Trading System and the exchange rate issue. Both sections look at the history of negotiations and the development of some trade provisions to argue that, although only a few specific mechanisms deal directly with the matter, the exchange rate issue has been an integral part of the Multilateral Trading System since its creation.
The article concentrates on the effects of sustained exchange rate misalignments on the Multilateral Trading System's legal framework, as well as the existing rules that could be applicable to the subject and the need to address such effects. It is not the aim of this article to propose the use of exchange measures to regulate exchange rates. Rather, it focuses on alternatives that allow countries to compensate for the negative effects that misaligned currencies have on international trade.
THE RELATION BETWEEN EXCHANGE RATES AND THE MULTILATERAL TRADING SYSTEM

1. ARE EXCHANGE RATES AN EXCLUSIVE MATTER OF THE IMF?
Exchange rate and currency issues were historically considered as matters of IMF competence. Accordingly, at the time of the creation of the Bretton Woods System in 1944, maintaining an international fixed exchange rate system based on par values to the dollar was the main objective of the IMF.
The economic impacts of exchange rate manipulations on international trade flows were very much in the minds of the negotiating parties during the Bretton Woods agreements (BOUGHTON, 2004, p. 6). The chaotic consequences for trade of the practice of competitive currency devaluation in the years before World War II were still very present, and an international effort was sought to restrain these mutually damaging, "beggar thy neighbor" practices.
The fixed exchange rate system was anchored in Article IV of the IMF's Articles of Agreement, which determined that every country should maintain its exchange rate within a 1% band of a par value to the dollar established by the IMF. This meant that the Multilateral Trading System, created a few years later, would not have to focus on the issue of exchange rate manipulations, nor elaborate mechanisms to counter them.
This could offer evidence to explain why exchange rate manipulation and other currency issues are present in only a few Articles of the GATT. The problem seemed to have been overcome with the creation of the IMF, and the GATT could remain largely silent about it.
Concerns about the impacts of exchange rate misalignments on international trade, however, would resurface after the end of the par value system in the 1970s. The fall of the gold standard required major modifications of IMF practice in the coming decades, a process that came to be known as the organization's "silent revolution" (BOUGHTON, 2001, p. 582).
The extent and depth of such modifications had a direct impact on the IMF rules, particularly on its Article IV, which was transformed from a rigid control into a flexible surveillance mechanism. Andreas Lowenfeld argues that, after the fall of the par value system and the creation of the new surveillance mechanism, "[i]f the Fund Agreement no longer described, let alone controlled, the international monetary system, then it seemed reasonable that the Articles should be rewritten" (LOWENFELD, 2010, p. 582). No such rewriting, however, was done and "[t]he institution was preserved - that is, the skeleton; but the fundamental rule was replaced by a non-rule, and the mission gradually changed" (LOWENFELD, 2010, p. 582).
The new mission of the IMF was then focused on guaranteeing the balance of payments of endangered countries in a world of floating exchange rates. The objective of restricting exchange rate manipulations was allocated to the new surveillance mechanism. However, due to a change in the economic policies of the major international partners that relaxed concerns over exchange rate manipulations, and the expansion of the IMF's surveillance mechanism to include several currency mechanisms and other economic factors, the objective was never completely attained.
On this point, Lowenfeld further states that:

Article IV did not accomplish the objectives that the drafters had in mind.
Governments were reluctant to answer inquiries put by the Fund, and had no real incentive to do so. [...] The idea that the IMF, or the international community through the IMF, could prescribe conduct under amended Article IV comparable with what the Fund prescribed under Article V did not prove viable, if indeed it was ever seriously considered (LOWENFELD, 2010, p. 585).
The consequence was that the IMF no longer had the mechanisms, or even the purpose, to exert rigid control over exchange rate manipulations. Although this reality fitted well with the new role of the Fund in international economic governance, it stripped the Multilateral Trading System of its safety net against competitive currency devaluations.
In this sense, current arguments that exchange rate issues should be dealt with solely at the IMF are outdated. As IMF history has demonstrated, the IMF can no longer be the main forum where the impacts of currency issues on trade are discussed, especially not when the control of exchange rate manipulations is the main focus of discussion. It lacks the mandate and the mechanisms to do so (THORSTENSEN et al., 2013).
EXCHANGE RATE IMPACTS ON THE MULTILATERAL TRADING SYSTEM
Although left unguarded since the mid-1970s, the Multilateral Trading System would not feel the consequences of exchange rate deviations until the 2008 financial crisis. After the crisis, and the political choice of some of the biggest economies to devalue their currencies in order to stimulate economic recovery and growth (notably the US, the EU, China and some other Asian countries), the problem has again arisen and the Multilateral System has found itself unprepared to offer solutions. In the present political landscape, moreover, coordinated solutions are harder to reach.
Although the economic effects of exchange rate misalignments have been discussed at length in the academic literature (LASTRA, 2005, chapter 2), less has been said about their impact on the regulatory framework of the Multilateral Trading System. This section will discuss what these impacts are and how they can be measured.
THE CASE OF TARIFFS AND TRADE REMEDIES
The possible impacts that exchange rates could have on the Multilateral Trading System were perceived already at the beginning of the negotiations of the Havana Charter, when guidelines for the negotiation of tariffs were decided. Section E of Annexure 10 of the Report of the First Session of the Preparatory Committee of the United Nations Conference on Trade and Employment (E/PC/T/33) states:
Avoidance of New Tariff or other Restrictive Measures.
It is important that members do not effect new tariff measures prior to the negotiations which would tend to prejudice the success of the negotiations in achieving progress toward the objectives set forth in Article 24, and they should not seek to improve their bargaining position, by tariff or other restrictive measures in preparation for the negotiations. Changes in the form of tariffs, or changes in tariffs owing to the depreciation or devaluation of the currency of the country maintaining the tariffs, which do not result in an increase of the protective incidence of the tariff, should not be considered as new tariff increases under this paragraph. (emphasis added)

The logic of the provision was that devaluation of exchange rates could have direct impacts on the tariffs being negotiated. Based on this provision, Brazil promoted an adjustment of its tariffs before entering the negotiations, in 1947:

Owing to the depreciation of the Brazilian currency (18.67 cruzeiros - US$1), the Brazilian import duties are reduced by 47 per cent. In order to correct this maladjustment the Brazilian Government decided to readjust the duties of its tariffs taking into account only part of the currency depreciation, i.e. 40 per cent. Otherwise, the Brazilian Government would initiate the multilateral negotiations at Geneva by making a gratuitous reduction of 47 per cent of the duties of the Brazilian tariff. Furthermore, it must be pointed out quite clearly that the wording of Annexure 10 is "owing to the depreciation of the currency". This means that those provisions take into account not only the devaluation on the par value made by law or by agreement with the International Monetary Fund, but also the actual currency depreciation at the time of the multilateral negotiations at Geneva. (ECOSOC, Note by the Brazilian Delegation on the adjustment of the Brazilian custom tariff, Second Session of the Preparatory Committee of the United Nations Conference on Trade and Employment, August 5th, 1947, E/PC/T/151, p. 2)
In this sense, the effects of exchange rate deviations on tariffs were discussed by the GATT contracting parties from the system's very beginning. An exchange rate can be defined as the rate at which one currency can be traded for another; in other words, it is the price of a country's currency expressed in a second currency. It allows for the price comparison of goods produced in different countries, using different currencies. The exchange rate is thus directly associated with the price of a good in a given market.
Whenever an exchange rate is devalued, the import price of a product expressed in another currency will be lower than it would be if the exchange rate were at its equilibrium (assuming the export price of the product is not too responsive to exchange rate misalignments). This affects not only the competitiveness of this product in a given market, but also the market access negotiated under the WTO and the effectiveness of the import tariffs applied: since ad valorem tariffs are levied as percentages of a product's price, a devaluation of this price will impair their effectiveness, while specific tariffs will be enhanced by the devaluation of the exporting country's currency.
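A stylized numerical illustration may help; the figures below are hypothetical and assume the export price in the exporter's currency remains unchanged. Suppose a product is imported at US$ 100 at the equilibrium exchange rate, and the exporter's currency then devalues by 20%, lowering the import price to US$ 80. An ad valorem duty of 10% now collects US$ 8 instead of US$ 10, and the landed price falls from US$ 110 to US$ 88, eroding the duty's protective effect. A specific duty of US$ 10 per unit, by contrast, sees its ad valorem equivalent rise:

10/100 = 10% before the devaluation; 10/80 = 12.5% after it,

enhancing its protective effect, exactly as described above.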
Besides the price effect, another possible way of examining these impacts is to use the concept of "tariffication", whereby ad valorem equivalent rates are calculated by merging tariffs and exchange rate misalignments. Just like tariffs, the effect of the exchange rate can be transferred to the prices of imported and exported goods. Persistent exchange rate misalignments have significant distortion effects on the ad valorem applied and bound tariffs negotiated at the WTO, acting at the same time as an incentive to exports from a country maintaining an undervalued currency and as an additional barrier to the entry of imports into its market (THORSTENSEN et al., 2012). Import tariffs, however, are not the only trade instruments affected.
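One stylized way to express this "tariffication", in the spirit of the exercises cited above (the exact formula is an assumption made here for clarity, not a quotation from those studies), is to treat the misalignment as an additional ad valorem layer on top of the applied tariff. Denoting by t the ad valorem tariff and by m the undervaluation of the importing country's currency, the misalignment-adjusted barrier would be:

t_e = (1 + t)(1 + m) - 1

For example, t = 10% and m = 20% yield t_e = (1.10)(1.20) - 1 = 32%. With an overvalued currency (m < 0), the effective barrier falls below the applied tariff, and a symmetric reasoning turns an undervaluation into an implicit subsidy for the undervaluing country's exports.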
Since the creation of the Multilateral Trading System with the GATT, the contracting parties have sought to restrict protectionist measures to border tariffs, providing the system with better transparency and predictability. The notion of "tariffication" of trade instruments is of great importance to the Multilateral Trading System and has been a key notion since its beginning. Many trade diversion studies make an effort to "tariffy" trade-restrictive measures in order to assess their impact on trade flows. The distortion effects of persistent and large exchange rate misalignments are not, in this sense, to be taken lightly.
Trade remedies such as antidumping and countervailing duties are themselves tariffs, applied in addition to import tariffs at the border. Exchange rate misalignments can either strengthen or weaken the effect of the trade remedy measures applied whenever there is a considerable variation between the currency used to levy the duty and the currency of the international price of the product after the investigation. This can defeat the very objectives of trade remedies.
If the aim of trade remedies is to address the effects of a determined unfair trade practice, their effectiveness depends on accurately assessing and countering such effects. Antidumping and countervailing measures, which are both generally levied as specific tariffs, will be distorted by exchange rate variations and will not accurately address the practice identified as unfair during the investigations, to the detriment of either the exporters or the affected sector involved. The measure may be recalibrated after its review; in the meantime, however, the exchange rate issue brings instability and unpredictability to yet another instrument.
In the same manner, the investigations prior to the application of trade remedies are themselves affected by deep exchange rate misalignments. A country facing particularly deep devaluations can suffer additional antidumping duties if no proper consideration is given during the investigations. Both the construction of normal value and price comparability can be affected.
The issue was raised in a GATT panel brought by Brazil against the EC's application of antidumping measures on cotton yarn from Brazilian producers (EC-Cotton-Yarn, 1995). Brazil argued that the EC had violated its obligations under the Tokyo Round Antidumping Code (the predecessor of the Uruguay Round Antidumping Agreement - ADA) by not taking into consideration the particularly volatile situation of the Brazilian exchange rate, concurrent with high inflation.
In early 1989, facing very high inflation, the Brazilian Government froze the exchange rate at one Cr$ ("Cruzeiro") to one US$ in an attempt to decrease money supply and control inflation. The exchange rate freeze continued for a period of three months. During this period domestic inflation continued to grow. Receipts from export sales (which were paid in US$), when converted into Cr$, remained stable. Following the unfreezing of the exchange rate, the Cr$ depreciated. Brazil argued that this combination of a fixed exchange rate and domestic inflation led to a gross distortion in the comparison between domestic prices (when used as the basis of normal value) and export prices, and this resulted in an inflated dumping margin.
Brazil argued that the phrase "particular market situation" in Article 2:4 (presently Article 2:2 of the ADA) included relevant situations external to the domestic market, such as exchange rates, which affect price comparability. Furthermore, Brazil argued that Article 2:6 (presently Article 2:4 of the ADA) required the EC to consider the particular exchange rate freeze situation in Brazil at the moment of export in order to respect the "fair comparison" requirement of the Article (EC-Cotton-Yarn, 1995, p. 77-81).
Although relating to a fairly specific situation involving exchange rate issues, the case was an opportunity in which both parties had to set out particularly different views of the impact of exchange rate variations on antidumping investigations.
Brazil was of the view that the EC's refusal to adjust the exchange rates used in its investigation violated a fundamental principle of the Agreement, by treating as "price dumping" what was, in fact, the well-known phenomenon of "exchange dumping". Furthermore, Brazil argued that in doing so the EC had relied on certain principles, such as "monetary neutrality", that were not valid in the context of antidumping proceedings (EC-Cotton-Yarn, 1995, p. 118).
The EC, on the other hand, argued that:

The calculation of dumping margins had to be made on the basis of objective and verifiable information, and not on the basis of arbitrary and subjective aspects. Accepting Brazil's arguments in this regard would amount to introducing considerable amount of subjectivity and uncertainty into the system. It would go far beyond the scope of the Agreement, the possibilities and the competence of the investigating authorities, and the interests of the signatories to have security and predictability in international trade. (EC-Cotton-Yarn, 1995, p. 119)

Brazil disagreed that the clear intent of the negotiators was to leave "monetary aspects of dumping" outside the scope of the Agreement. Rather, the intent of the negotiators would have been to exclude the depreciation of the exchange rate (the so-called exchange rate dumping) from the antidumping mechanism. In other words, since exchange rate dumping was not part of the Agreement, due consideration should be given to exchange rate situations in order to respect normal value and price comparability, as well as to avoid treating such a phenomenon as "regular dumping" (EC-Cotton-Yarn, 1995, p. 280).
Both countries agreed that the rules then governing antidumping investigations excluded the possibility of considering exchange rate dumping in the "price dumping margin". Brazil, however, argued that completely excluding monetary aspects of dumping would do exactly the opposite, since it would open up the possibility of counting exchange rate dumping as regular dumping, while the EC feared the possible consequence of bringing unpredictability to the system if it considered such arguments.
The panel concluded that Brazil had not proved that the particular exchange rate situation at the time of the sales had directly affected the prices practiced in its local market so as to render them inadequate as a basis for determining normal value. On that, the Panel further stated that:

Even assuming arguendo that an exchange rate was relevant under Article 2.4, it would be necessary, in the Panel's view, to establish that it affects the domestic sales themselves in such a way that they would not permit a proper comparison. Brazil had asserted that exchange rates were capable of affecting domestic sales and prices because, for example, the cost of raw materials could be affected by fluctuations in the exchange rate. In particular, domestic sales and prices could be affected if imported raw materials were used in domestic production. However, Brazil had not argued that the costs of raw materials used in manufacture of cotton yarn were in fact so affected. For the Panel to engage in such an exercise, it would have to exceed its scope of review. (EC-Cotton-Yarn, 1995, p. 479)

In this sense, the panel did not completely exclude the influence exchange rate misalignments could have on antidumping investigations. Imported inputs were in fact considerable costs in cotton yarn production, and had Brazil presented this argument, the exchange rate could have been a relevant aspect of the investigation. In the Panel's view, there is no a priori exclusion of exchange rate considerations in the application of the antidumping rules.
Considering price comparability and Article 2.6 of the Antidumping Code, the Panel reached a narrower interpretation. It considered that:

The exchange rate in itself is not a difference affecting price comparability. It is a mere instrument for translating into a common currency prices that have previously been rendered comparable in accordance with the second sentence of Article 2.6. In the view of the Panel, an exchange rate's function is to make it possible to subsequently effect an actual comparison on a common basis as provided under the other relevant provisions of the Agreement. (EC-Cotton-Yarn, 1995, p. 494)

The EC-Cotton-Yarn case exposed the lack of specific provisions in the Antidumping Code to deal with the issue. This absence remained in the subsequent Uruguay Round ADA. The same overall statement can be made in relation to the Subsidies and Countervailing Measures Agreement (SCM).
Other aspects of the Multilateral Trading System are potentially affected as well, since they rely on tariffs in their functioning. The Dispute Settlement Mechanism itself can be deeply affected in its efficiency, since its most praised characteristic, the possibility of enforcing decisions through the authorization of retaliatory measures, would be softened were a country to maintain its currency persistently undervalued, weakening measures aimed at curbing its violating conduct. Naturally, measures of a different nature, such as the suspension of TRIPs rules, would be immune to such effects, but a considerable part of the system could be jeopardized.
Furthermore, rules of origin can be distorted as well, since many rules depend on accurately assessing the value added to a product at different stages of the production chain. When dealing with severely distorted exchange rates, it becomes hard to determine the exact value added by a particular production stage. The assessment can be compromised, making it hard to guarantee the fulfillment of the objectives sought with rules of origin. This is particularly important in the context of Preferential Trade Agreements (PTAs), where it can potentially cause trade diversion and circumvention of the rules.
The Director-General of the WTO, Pascal Lamy, has already drawn attention to the difficulties that modern production chains present to the traditional view of rules of origin and to international trade negotiations in general (WTO News, Lamy: Global supply chains underline the importance of trade facilitation, 18 October 2011). The "made in the world" initiative launched by the WTO to support measuring and analyzing international trade in terms of value added can be severely impacted by exchange rate misalignments.
THE OVERARCHING PRINCIPLES
A more disturbing picture emerges if one considers the impact of persistent exchange rate misalignments on some of the pillars of the Multilateral Trading System. One of the system's principles hit hardest by exchange rate fluctuations is the Most-Favored-Nation (MFN) principle of Article I of the GATT.
Under the MFN principle, each contracting party is broadly obliged to accord the same tariff treatment to every other contracting party. Furthermore, any advantage or privilege granted by one contracting party in relation to imports from or exports to any other country must be "immediately and unconditionally" extended to all other contracting parties. This principle aims at bringing two main benefits to the system.
Firstly, it guarantees that no particular country will have a commercial advantage in its trade with another contracting party, which could otherwise raise tensions and divert trade. This is a broad guarantee, encompassing any kind of benefit a particular country could have in its trade with another country party to the system. The aim here is to avoid the arbitrary allocation of trade flows between contracting parties, which could undermine the benefits brought by international trade competitiveness.
The other benefit is the stability of the system. Since a producer knows it will face the same tariff barrier when exporting to a particular country no matter where it exports from, it will be able to decide where to produce without taking applied tariffs into consideration. This also brings predictability and provides a better environment for production to move to whichever country presents better comparative advantages.
Misalignments and possible manipulations of exchange rates, however, bring another variable into the equation, one with no direct connection to the fair competition principle. The particular exchange rate of a country, and its variation from a level considered to be the medium-term equilibrium, could represent an "advantage or privilege" in bilateral commercial relations between a set of countries when compared with other exchange rates displaying different levels of variation from their equilibrium. This is due to the effects exchange rate misalignments have on the tariffs applied by each country.
After the fall of the fixed exchange rate system under the auspices of the IMF during the 1970s and its substitution by a floating exchange rate system, the contracting parties to the GATT manifested their concern with its consequences for the multilateral trade system. In particular, the impact on the market access actually faced by exporters under a floating exchange rate system was highlighted:

1. The CONTRACTING PARTIES, while not questioning the floating exchange rate system and the contributions it has made, acknowledge that in certain circumstances exchange market instability contributes to market uncertainty for traders and investors and may lead to pressures to increased protection; these problems cannot be remedied by protective trade action. (Exchange Rate Fluctuations and their Effect on Trade - Fortieth Session of the CONTRACTING PARTIES, Action taken on 30 November 1984 - L/5761)

When exchange rate misalignments are "tariffied" and applied to a country's tariffs, a better picture emerges of the uncertainties brought to the system by exchange market instability and of the level of tariff barriers actually faced by exporters from a particular country: each particular exporter, depending on where it exports from, will face different tariff treatments and privileges, meaning different market access levels, contrary to what the MFN principle states. The greater the length and persistence of such exchange rate misalignments, the greater the consequences for most-favored-nation treatment.
The absence of specific consideration of the broad and persistent exchange rate misalignments of WTO members means potentially innumerable different tariff treatments between any set of analyzed countries. This situation is the direct opposite of what the Multilateral Trading System sought with the establishment of the MFN principle.
Not only is the MFN principle affected but, with it, the principles of transparency and predictability. After the end of the fixed exchange rate system, the GATT contracting parties, concerned with the negative effects of exchange rate fluctuations on international trade flows, made a statement urging the IMF to improve its system in order to take into account "the relationship between exchange market instability and international trade" (Exchange Rate Fluctuations and their Effect on Trade - Fortieth Session of the CONTRACTING PARTIES, action taken on 30 November 1984 - L/5761).
In response, the IMF published in 1984 a study describing the ways in which such exchange rate instability could affect international trade flows (IMF, 1984). The academic and empirical evidence was inconclusive, and no systemic adjustments were made by the contracting parties to the GATT to address the uncertainty and potential negative effects of exchange rate fluctuations. No particular study was commissioned by the GATT, however, to analyze the impacts of exchange rate manipulations on the instruments of the Multilateral Trade System.
AID FOR TRADE AND QUOTA-FREE-DUTY-FREE INITIATIVES
Finally, a crucial aspect of the WTO is also potentially affected by persistent exchange rate misalignments. Considered by many the "social" aspect of the multilateral trade system, the Aid for Trade initiative seeks to help least-developed countries fight poverty and attain economic development through the insertion of their economies into the international market.
The underlying idea is that, through insertion into the international market, small economies could obtain better prices for their products and better prospects for local producers. The volatility of exchange rates and, especially, the possibility of competitive devaluation of big economies' currencies bring deep instability and insecurity to such "export-led", open-economy strategies.
In the 1984 Declaration, the contracting parties had already identified the particularly fragile position that small trading countries would face in a floating exchange rate environment. Paragraph 2 of the Declaration states:

The CONTRACTING PARTIES also recognize that adjustment to uncertainty over exchange market instability could be more difficult for small traders when hedging opportunities are limited, and for small trading countries and developing countries, inter alia when the geographical distribution of their trade cannot be easily diversified. (Exchange Rate Fluctuations and their Effect on Trade - Fortieth Session of the CONTRACTING PARTIES, action taken on 30 November 1984 - L/5761)

The subsequent IMF study also acknowledged that small trading economies would be more vulnerable to intense exchange market instability, since their traders would have fewer hedging options (IMF, 1984).
In this sense, Marc Auboin states that:

Of particular concern to LDCs is the dilemma created by regular periods of losses in the terms of trade and at the same time the need to keep the nominal exchange rate relatively stable for domestic monetary policy reasons. In periods of terms of trade losses, this dilemma results in a constant real appreciation of domestic currencies, and hence an inducement to import, with adverse effects on the current account balance and debt to Bretton-Woods institutions. (AUBOIN, 2007, p. 26)

A recent OECD study examined the impact of sharp exchange rate misalignments in two small open economies, Chile and New Zealand. It showed that small economies tend to be more affected by exchange rate misalignments than larger economies such as the EU, the US or China (OECD, 2011).
The authors simulated misalignments, either upwards or downwards, of 10 per cent in these countries' exchange rates and analyzed the impact on their bilateral trade with bigger economies. Small trading countries have to bear the "full adjustment of exchange rate changes", as they have a less diversified production and export base and are less able to move into economic sectors less affected by international trade. Bigger economies, when facing the appreciation of their currencies, are able to limit the damage done to their export position by moving their production up into sectors where price elasticity is wider. The study demonstrates, then, that smaller economies will be hit harder by exchange rate misalignments, whether of their own currencies or of those of their bigger trading partners. Price elasticity and the structure of their production chains are also important, helping (or failing) to mitigate the negative effects.
The WTO has already acknowledged this problem. In a recent publication entitled "Aid for Trade at a Glance 2011: Showing Results", published in conjunction with the OECD, the authors recognize that:

If a currency is overvalued, trade liberalization can trigger rising imports and declining exports - because of the damage to cost competitiveness - with the excess demand for foreign exchange resulting in balance-of-payments problems. In addition, domestic economic activity usually declines and unemployment rises because the contraction in import competing sectors is not offset by an expansion of the export sector. Governments then face the choice of either adjusting the exchange rate or reversing trade reform. [...] the impact of supportive macroeconomic policies is often larger than the impact of reducing binding export constraints through aid for trade. (WTO, OECD, 2011, p. 99)

This brings enormous challenges to the WTO's objective of promoting the economic development of LDCs through their insertion into the international market. These countries depend on a very narrow band of export products, without any real space to diversify their production or climb up the production chain. With whole economies dependent on such a small economic structure, exchange rate variations of any particular trading partner could have serious impacts on such programs.
In the same manner, the Quota-Free-Duty-Free initiative, which the WTO Secretariat and some WTO members are struggling to rescue from the stalled Doha Round, could be offset by sharp variations in the exchange rates of either the conceding countries or the beneficiaries.
The Quota-Free-Duty-Free initiative consists of an agreement between WTO members, reached in 2005, to make it mandatory for developed countries, and optional for developing countries, to give duty- and quota-free market access to all exports from least-developed countries. Members were allowed, nonetheless, to exclude up to 3% of tariff lines from the initiative, in order to protect sensitive sectors (LABORDE, 2008, p. 14).
When facing markets with undervalued currencies, LDC exports will be hurt by tariffs that are not zero but effectively positive.
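Under the stylized "tariffication" sketch presented earlier (again, an illustrative assumption rather than a figure drawn from the cited studies), a duty-free tariff line (t = 0) into a market whose currency is undervalued by, say, m = 15% yields an effective barrier of:

t_e = (1 + 0)(1 + 0.15) - 1 = 15%

so the nominally duty-free concession is offset entirely by the misalignment.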
WTO AGREEMENTS AND THE EXCHANGE RATE ISSUE
Considering the significant impacts that misaligned exchange rates have on trade, one might ask whether the existing provisions of the WTO Agreements can address the matter, at least partially.
The issue is in fact already present at the WTO: even though it was not a major concern when the GATT was drafted, since, as explained above, the matter was considered to fall under the competence of the IMF, two GATT Articles deal specifically with the issue of exchange rates. Furthermore, one might consider whether other WTO rules, which were not initially intended to address the issue, can be applied in order to rebalance the impacts caused by misaligned currencies. This section will first present the two GATT provisions that were drafted to deal with exchange rates, presenting their interpretation and the difficulties arising from their application in the current context of the multilateral trading system. Subsequently, other WTO provisions will be interpreted in order to verify whether they can properly address the issue of exchange rates.
WTO PROVISIONS ON EXCHANGE RATES
The provisions described below deal specifically with the impacts of exchange rates on trade. Nevertheless, it is necessary to verify whether they are able to properly address the current challenges posed by misaligned exchange rates to the multilateral trading system.
ARTICLE XV
The main Article of the GATT dealing with exchange rates and their impacts on trade is Article XV. It establishes cooperation between the GATT/WTO and the IMF on matters such as monetary reserves, balances of payments and exchange arrangements, denying the idea of a complete separation between WTO and IMF subjects.
The Article explicitly encourages WTO members to seek policy coordination on questions within the jurisdiction of the IMF that affect trade measures, recognizing the intrinsic relation between trade and finance and assuring the coherence of the international economic system as a whole, as designed at Bretton Woods.
Concerning the specific issue of exchange rates and their impact on trade, Article XV:4 states that:

Contracting parties shall not, by exchange action, frustrate the intent of the provisions of this Agreement, nor, by trade action, the intent of the provisions of the Articles of Agreement of the International Monetary Fund.
This provision puts in evidence the negative impacts that trade and exchange rate measures can have on one another, recognizing that exchange rate issues can affect international trade. The Article stresses the need for WTO members to take into account the relationship between the international trade and monetary systems and to avoid trade or exchange rate measures that could harm the purposes of either agreement.
Thus, an important question raised by scholars concerns the relationship between GATT Article XV and Article IV of the IMF's Articles of Agreement (see SIEGEL, 2002) and whether a violation of Article IV would be required in order to determine a violation of Article XV:4 (MIRANDA, 2010, p. 115-26). IMF Article IV establishes the obligations of the Fund's members regarding exchange arrangements. Its paragraph (iii) reads as follows:

Article IV: Obligations Regarding Exchange Arrangements. Section 1. General obligations of members [...] In particular, each member shall: (iii) avoid manipulating exchange rates or the international monetary system in order to prevent effective balance of payments adjustment or to gain an unfair competitive advantage over other members;

Much debate has taken place regarding the "intent" of "gaining an unfair competitive advantage over other members" and the feasibility of the Fund identifying a member as a currency manipulator (ZIMMERMANN, 2011; FUDGE, 2011; IRWIN, 2011). It is fair to argue that, due to the deep transformation of the Fund's role in the international economic governance structure after the end of the dollar-gold standard in the 1970s and 1980s (BOUGHTON, 2001), especially regarding the possibility of members choosing the "exchange arrangements of their choice" and the surveillance system in place, it is very unlikely that the IMF would recognize a country as being in violation of its Article IV due to currency manipulation.
If that holds true, what are the consequences for the applicability of GATT Article XV:4?
The answer to this question lies in the wording used in Article XV. Three different exchange terms are used throughout the Article, each bearing a specific meaning: exchange arrangements (Article XV title); exchange action (Article XV:4); and exchange controls or restrictions (Article XV:9). The link between GATT Article XV and IMF Article IV is made obvious by their titles, each citing exchange arrangements as its subject of regulation. The use of the term exchange arrangements in the title of both Articles seems to indicate it as a general expression, encompassing the different actions and mechanisms provided for in these Articles.
While the link between the two mechanisms is established, the limits and structure of the relationship, as well as each organization's prerogatives in the subject, are somewhat less clear. Article XV:1 establishes a broad obligation for the WTO to cooperate and coordinate its actions regarding "exchange questions" with the IMF. Article XV:2, in turn, establishes more specific obligations regarding consultations and prerogatives:

In all cases in which the CONTRACTING PARTIES are called upon to consider or deal with problems concerning monetary reserves, balances of payments or foreign exchange arrangements, they shall consult fully with the International Monetary Fund. In such consultations, the CONTRACTING PARTIES shall accept all findings of statistical and other facts presented by the Fund relating to foreign exchange, monetary reserves and balances of payments, and shall accept the determination of the Fund as to whether action by a contracting party in exchange matters is in accordance with the Articles of Agreement of the International Monetary Fund, or with the terms of a special exchange agreement between that contracting party and the CONTRACTING PARTIES.
In this sense, in all cases in which Article XV is analyzed, and thus exchange arrangements are involved, the statistical findings (e.g., whether an exchange rate is misaligned) are presented by the Fund and must be accepted as part of the facts at the disposal of the panel/AB for an objective assessment (India - Quantitative Restrictions, paras. 5.11-13). This interpretation is in line with the position held by the WTO Appellate Body in Argentina - Textiles and Apparel:

The only provision of the WTO Agreement that requires consultations with the IMF is Article XV:2 of the GATT 1994. This provision requires the WTO to consult with the IMF when dealing with 'problems concerning monetary reserves, balances of payments or foreign exchange arrangements'. (Argentina - Textiles and Apparel, AB Report, paras. 84-85)

This is the case of findings concerning the GATT Article XV:4 obligation. In order to analyze the "exchange action" taken by a member that is allegedly frustrating the intent of the provisions of the GATT, the contracting parties must consult with the Fund and accept its statistical findings.
It is important to emphasize, however, that this passage does not establish that the IMF will have the final word on whether a WTO member is in violation of GATT Article XV:4. The only judicial prerogative the IMF has is to determine whether the exchange action is in accordance with the Fund's own obligations. One must bear in mind that the WTO and the IMF are two whole legal systems, each with its own peculiarities and legal reasoning, occupying two different worlds when it comes to legal and juridical interpretation. While the rules governed by the IMF are to be interpreted and applied by the Board of Governors of the IMF, the WTO is construed as a system where the legal obligations and principles are supposed to be given meaning by its dispute settlement mechanism. This is important for the application of the exception present in Article XV:9, which reads as follows:

9. Nothing in this Agreement shall preclude: (a) the use by a contracting party of exchange controls or exchange restrictions in accordance with the Articles of Agreement of the International Monetary Fund or with that contracting party's special exchange agreement with the CONTRACTING PARTIES, or (b) the use by a contracting party of restrictions or controls in imports or exports, the sole effect of which, additional to the effects permitted under Articles XI, XII, XIII and XIV, is to make effective such exchange controls or exchange restrictions.
The rationale of this paragraph is to prevent the GATT mechanism from standing in the way of the proper functioning of the IMF. One of the main goals of the IMF is to ensure the stability of balances of payments (Article I:iv) and the financial health of its members (Article I:i). In this sense, and in critical situations, the IMF allows for the exceptional use of capital controls and exchange restrictions throughout its Articles of Agreement.
Paragraph 3 of the Agreement between the IMF and the WTO further clarifies the issue of IMF decisions authorizing exchange restrictions, discriminatory currency arrangements or multiple currency practices pursuant to the IMF Articles of Agreement:

The Fund shall inform the WTO of any decisions approving restrictions on the making of payments or transfers for current international transactions, decisions approving discriminatory currency arrangements or multiple currency practices, and decisions requesting a Fund member to exercise controls to prevent a large or sustained outflow of capital. (Annex I, p. 3)

The Agreed Commentary on this provision provides as follows:

Comment: This information on Fund decisions is relevant to the implementation of GATT and GATS because of certain consequences under these Agreements when a measure is consistent with the Fund's Articles (Article XV of GATT 1994 and Article XI of the GATS). Additionally, under the GATS, members are allowed to impose controls on capital transactions related to their scheduled commitments under certain circumstances, including if such controls are imposed at the request of the Fund. In practice, the Fund's authority to request capital controls (Article VI, Section 1(a) of the Fund's Articles) has never been used. (WTO, Agreed commentary, Annex III to document WT/L/195, 18 November 1996, p. 13-14)

Other GATT Articles contain similar exceptions that take into account the functioning of the IMF, e.g., Article VII:c on the conversion of multiple currencies; the Ad Note to GATT Article VIII regarding exchange fees for balance of payments reasons; Article XIV:1, 3 and 5(a) on exceptions to the rule of non-discrimination; and the Ad Note to Section B of GATT Article XVI on multiple exchange rates.
A member will thus be allowed to depart from a GATT rule in order to duly apply an IMF provision. In such cases, as determined by GATT Article XV:2, the final word on whether the member is correctly applying the IMF provision, and thus not violating the GATT obligation, falls within the IMF's prerogative. Article XV:9 is an example of such a case.
GATT Article XV:9, however, makes clear reference to exchange controls or restrictions, while other GATT Articles make direct reference to multiple exchange rates. These are the only exchange actions comprised in the exception. As noted above, these terms are found throughout the IMF's Articles of Agreement and indicate specific exchange actions. It is then plausible to argue that exchange actions other than exchange restrictions or controls and multiple exchange rates, even when operated in accordance with the Articles of Agreement of the IMF, can still frustrate the intent of the GATT provisions and thus contravene GATT Article XV:4 (MIRANDA, 2010, p. 120).
The WTO Dispute Settlement Body (DSB) has had the opportunity to analyze whether a measure was an exchange restriction in the sense of Article XV:9, and thus part of the exception rule and under the auspices of the IMF, or whether it was an exchange action in the sense of Article XV:4. In Dominican Republic - Import and Sale of Cigarettes, Honduras argued that the foreign exchange fee charged on foreign exchange transactions by the Dominican Republic was computed on the value of imports at the selling rate of foreign exchange and applied upon the "importation" of a product, and was thus nothing more than an import charge, a trade measure within the jurisdiction of the WTO, although in the form of an exchange action (First oral statement of Honduras, para. 22).
On the other hand, the Dominican Republic argued that the foreign exchange fee was an "exchange restriction" because it was a direct governmental limitation on the availability or use of exchange as such, and that the meaning of exchange restrictions should be interpreted by the IMF, citing the Article XV:9(a) exception (First written submission of the Dominican Republic, paras. 91-94).
Although paragraph 4 of Article XV was not directly cited by the parties, the underlying issue regarding the foreign exchange fee imposed by the Dominican Republic was whether it should be considered an exchange restriction falling under the exception of Article XV:9 and under the jurisdiction of the IMF (as argued by the Dominican Republic), or a broader exchange action violating a trade obligation in the sense of Article XV:4 and thus a matter under the jurisdiction of the WTO (as argued by Honduras).
The Panel stated that Article XV:9 was an exception, or an affirmative defense, and that it was therefore the Dominican Republic's burden to prove that the foreign exchange fee should be considered an exchange restriction. The Panel noted that the foreign exchange fee applied only to the importation of goods, but not to foreign exchange payments for non-import-related services, to foreign currency payments made by Dominican Republic residents, or to remittances of dividends from companies located in the Dominican Republic. The Panel thus considered that the measure could not be considered an exchange restriction in the sense of Article XV:9 and stated that:

The Panel considers that the ordinary meaning of the "direct limitation on availability or use of exchange ... as such" means a limitation directly on the use of exchange itself, which means the use of exchange for all purposes. It cannot be interpreted in a way so as to permit the restriction on the use of exchanges that only affects importation. To conclude otherwise would logically lead to the situation whereby any WTO Member could easily circumvent obligations under Article II:1(b) by imposing a foreign currency fee or charge on imports at the customs and then conveniently characterize it as an "exchange restriction". Such types of measures would seriously discriminate against imports while not necessarily being effective in achieving the legitimate goals under the Articles of Agreement of the IMF. Therefore, the Panel finds that because the fee as currently applied is imposed only on foreign exchange transactions that relate to the importation of goods, and not on other types of transactions, it is not "a direct limitation on the availability or use of exchange as such". (Dominican Republic - Import and Sale of Cigarettes, Panel Report, para. 7.138)

The Dominican Republic further stated that the foreign exchange fee had been approved by the IMF as part of its stand-by arrangement with the Fund and that it would therefore be in accordance with the Articles of Agreement of the IMF (First written submission of the Dominican Republic, paras. 199-201). The Panel then decided to consult with the IMF, even while acknowledging that it was not obligated to do so, and asked the Fund: (i) how the measure was being applied by the Dominican Republic; and (ii) whether the measure constituted an exchange restriction in the sense of the Articles of Agreement of the IMF. The IMF replied that the measure was not payable on sales of foreign exchange; rather, it was payable as a condition for the importation of goods (Dominican Republic - Import and Sale of Cigarettes, Panel Report, para. 7.141).
It becomes clear from this case that the IMF has no jurisdictional say in exchange actions that may violate trade obligations in the WTO, other than in the specific cases of multiple currency practices or exchange restrictions or controls. The WTO must consult with the IMF in cases concerning exchange arrangements, but only to obtain statistical inputs.
This rationale is relevant in order to correctly differentiate between manipulators of exchange rates (IMF Article IV, Section 1(iii)) and "frustrators" of trade objectives (GATT Article XV:4). Although related, these notions bear important differences. As indicated, proving currency manipulation, and securing the IMF's political will to recognize it, is complicated (ZIMMERMANN, 2011, p. 427-37). One could still argue that any exchange rate manipulation "to gain an unfair competitive advantage" over other countries will, if effective, frustrate trade objectives. The opposite is not necessarily true, however: there may be exchange actions that do not involve the specific act of exchange rate manipulation but that can nonetheless frustrate the intent of the GATT provisions.
In this and other similar cases, the legal discussion at the WTO could avoid the politically sensitive exercise of identifying currency manipulators and focus instead on the analysis of the effects of exchange actions on trade. The IMF would be consulted to provide statistical inputs on exchange rate misalignments, while the WTO experts would determine whether exchange actions were frustrating the objectives of GATT Articles.
In order to declare a violation of Article XV:4, the WTO Dispute Settlement Body would need to determine that a member is taking an exchange action which is having consequences on trade and, furthermore, is frustrating the intent of the WTO provisions.
The notion of exchange action is unclear and has never been tested by the WTO Dispute Settlement Mechanism against a policy of currency devaluation. Although the expression has a wide meaning, one would still have to argue that policies of currency devaluation can be classified as exchange actions. It would be necessary to prove the existence of specific measures taken by a government that directly impact the misalignment of its exchange rate, either by provoking or by sustaining the misalignment. The simple devaluation of a currency, without any identifiable "action" by the government, would not seem to be in violation of Article XV:4.
The second step in applying Article XV lies in proving that the exchange action is frustrating the intent of the provisions of the GATT. The meaning of the word "frustrate" is given in the Notes and Supplementary Provisions (Annex I) to Article XV, which explain that its intention is:

[...] to indicate, for example, that infringements of the letter of any Article of this Agreement by exchange action shall not be regarded as a violation of that Article if, in practice, there is no appreciable departure from the intent of the Article. [...]

In order to violate Article XV, the exchange action must create a situation which departs "appreciably" from the economic situation provided for by another GATT Article. In other words, the "frustration" of a specific GATT mechanism other than Article XV itself must be identified. Although not expressly invoked by the parties, this seems to be what happened in Dominican Republic - Import and Sale of Cigarettes regarding the violation of Article II:1. Concerning currency devaluations, Article II of the GATT, which deals with market access, could be invoked. This interpretation is developed below.
It is important to stress, though, that even in the absence of an independent violation of a specific Article of the GATT - as was the case in the Dominican Republic dispute - an argument based on the GATT Article XV:4 obligation would only require demonstrating that the exchange action frustrated the "intent" of the provision in question. In other words, an exchange action can violate Article XV:4 even without completely violating the GATT mechanism whose intent has been frustrated - a rationale similar to the one present in GATT Article XXIII concerning non-violation.
ARTICLE II:6
The idea that exchange rate misalignments can affect the negotiated level of market access is evident in Article II:6. The Article allows the adjustment of tariffs in order to reestablish the negotiated market access affected by misaligned exchange rates in one specific situation:

The specific duties and charges included in the Schedules relating to contracting parties members of the International Monetary Fund, and margins of preference in specific duties and charges maintained by such contracting parties, are expressed in the appropriate currency at the par value accepted or provisionally recognized by the Fund at the date of this Agreement. Accordingly, in case this par value is reduced consistently with the Articles of Agreement of the International Monetary Fund by more than twenty per centum, such specific duties and charges and margins of preference may be adjusted to take account of such reduction; provided that the CONTRACTING PARTIES (i.e., the contracting parties acting jointly as provided for in Article XXV) concur that such adjustments will not impair the value of the concessions provided for in the appropriate Schedule or elsewhere in this Agreement, due account being taken of all factors which may influence the need for, or urgency of, such adjustment. (Emphasis added)

A devalued currency has the effect of lowering the relative value of specific duties, enlarging the negotiated market access. It has the exact opposite effect on ad valorem tariffs, whose relative value is raised by a devalued currency, diminishing market access. The Article thus allows countries to reestablish negotiated market access that was unduly enlarged by the effects of a devalued currency, by negotiating a raise in their specific duties. This negotiation occurred nine times during the GATT era, between 1950 and 1975, allowing increases in the bound specific tariffs of Benelux, Finland (three times), Israel, Uruguay (twice), Greece and Turkey.
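A hypothetical illustration of the mechanism (the figures are assumed for the sake of the example): suppose a member applies a specific duty of 20 units of its currency per unit of a product whose import price is US$ 100. At a par value of one unit per dollar, the duty's ad valorem equivalent is 20/100 = 20%. If the par value is then reduced by 25%, the same import costs roughly 133 units of domestic currency, and the unchanged duty of 20 units corresponds to an ad valorem equivalent of only about 20/133, i.e. 15%. Since the reduction exceeds the twenty per cent threshold, Article II:6 would allow the member, with the concurrence of the CONTRACTING PARTIES, to raise the specific duty toward roughly 26.7 units in order to restore its original incidence.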
Nevertheless, the provision encompasses only one of four possible effects of exchange rates on tariffs, the other three being: (i) overvalued currencies raise the relative value of specific duties, restricting market access; (ii) devalued currencies raise the relative value of ad valorem duties, restricting market access; and (iii) overvalued currencies diminish the relative value of ad valorem duties, enlarging the negotiated market access. If the Article recognizes the need that countries may have to adjust their tariffs in order to address the impacts of currency misalignments, why not allow this adjustment in all four cases, instead of just one of them?
A second interesting issue raised by Article II:6 is the change in the international monetary system from a par value to a floating exchange rate system. Initially, any devaluation that could give rise to the application of Article II:6 would be defined by the IMF, in accordance with the par value system managed by the Fund. With the end of the gold standard, it became necessary to adapt the Article so that misalignments could still be calculated despite the lack of a par value.
The GATT contracting parties created a Working Group whose objective was to adapt the existing mechanism of Article II:6 to the new reality of floating exchange rates. From 1978 to 1980, the Working Group met and adopted, on January 29th, 1980, the Guidelines for Decisions under Article II:6(a) of the General Agreement (L/4938, 27S/28-29). This document reaffirmed the importance of maintaining the mechanism in order to neutralize the effect of exchange rate devaluation on the specific tariffs of contracting parties, and created a methodology for the calculation of currency depreciation, to be performed by the IMF. The calculation takes into consideration the import-weighted average exchange rate during the previous six months, and the depreciation shall be based on the currencies of trading partners supplying at least 80% of the imports of the country concerned. These Guidelines have been incorporated into GATT 1994, as established by its Article 1(b)(iv), and can be rightfully invoked by any WTO member.
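The Guidelines, as summarized above, do not reduce the methodology to a single formula; a plausible formalization (an assumption made here for clarity, not the official text) would be a weighted average of bilateral depreciations:

D = Σ w_i · d_i

where d_i is the depreciation of the member's currency against the currency of trading partner i over the previous six months, and the weights w_i are the import shares of the partners that together supply at least 80% of the member's imports, renormalized to sum to one. An adjustment of specific duties would then be available when D exceeds the 20% threshold discussed below.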
It is also worth noting that, unlike Article XV, the provision of Article II:6 is focused on the misalignment itself and not on a governmental action that results in a currency misalignment.
Another relevant element of Article II:6 is the threshold of a 20% devaluation for enabling countries to adjust their tariffs. This is important because it shows that only large misalignments can have a significant impact on the level of market access, justifying an adjustment of tariffs. Under a floating exchange rate system, this threshold is even more necessary, since small variations and peaks in exchange rates occur often but are not sufficiently grave to affect market access. Only long-standing misalignments, persisting over the previous six months, for instance, as established by the Guidelines, should be taken into consideration when evaluating the levels of market access. The Guidelines kept the threshold of a 20% exchange rate misalignment as a basis for renegotiation, but it should be noted that this threshold was considered reasonable based on the level of tariff rates at that time. Given the decrease in tariff levels since then, a new exchange rate misalignment threshold could be negotiated in order to allow tariff renegotiation under the current systems of floating or administered exchange rates.
Finally, it must be stressed that the negotiation mentioned in Article II:6 has the objective of ensuring that there has been an enlargement of the country's market access beyond the negotiated level, and that tariffs are not adjusted in such a way as to restrict access below the originally negotiated level. It differs, thus, from GATT Article XXVIII, which allows members to withdraw concessions on the condition that they provide other compensations to maintain the general level of concessions. In the case of Article II:6, no compensation is required, since the adjustment already aims to reestablish the level of concessions.
Of all the impacts of exchange rates on trade instruments, the impact on market access is the most relevant one, since it may impair the main aim of the WTO: to liberalize international trade, based on a balanced negotiation of concessions on market access. Article II already provides some mechanisms to address the issue, but in an incomplete manner. Negotiations are essential to adapt the Article so that it can fully prevent the distortions created by misaligned exchange rates on market access.
THE APPLICABILITY OF OTHER WTO PROVISIONS ON THE EXCHANGE RATE ISSUE
The following provisions were not drafted with the intention of dealing with the impacts of exchange rates on trade; they were designed to address other issues faced by the multilateral trading system. Nevertheless, one can discuss whether they can be applied to the exchange rate issue.
ARTICLE II:1
The basic rules for market access in the context of the GATT/WTO are found in GATT Article II. Article II:1(a) establishes that:

Each contracting party shall accord to the commerce of the other contracting parties treatment no less favorable than that provided for in the appropriate Part of the appropriate Schedule annexed to this Agreement.
The paragraph states that the level of market access, determined by tariffs and other barriers, shall not be less than the level negotiated under each country's schedule of concessions.
The provision aims, thus, to ensure that the negotiations made through GATT and WTO rounds are not impaired by any treatment imposed by a member that might increase or impose new barriers to international trade, reducing market access. The concern is to guarantee respect for concessions in order to allow increasing access to countries' markets. Under this trade liberalization logic, the expression "less favorable treatment" should have a wide meaning, including every measure put in place by a member that reduces the negotiated market access. Such is the understanding in the EC - IT Products case, which stated that less favorable treatment should be understood as a measure that adversely affects the conditions of competition for a specific product (Panel Report, para. 7.757).
One could argue that, since the provision makes reference to the treatment provided in a member's Schedule, it could only comprise tariffs. This interpretation is incorrect: Article II:1(b) contains specific provisions on customs and duties. If paragraph (a) were restricted to tariffs, both provisions would have the exact same purpose. The difference in the wording of the two provisions should be taken into consideration. In Argentina - Textiles and Apparel, the Appellate Body found that "Paragraph (b) prohibits a specific kind of practice that will always be inconsistent with paragraph (a): that is, the application of ordinary customs duties in excess of those provided for in the Schedule." (Appellate Body Report, para. 45). This leads to the conclusion that paragraph (a) has a broader meaning, including other practices besides the application of ordinary customs duties.
Another interpretation that leads to the same conclusion requires to "tariffy" exchange rate misalignment's effects, by calculating the percentage by which prices of products are increased or decreased due to the misalignment, as previously presented.
When dealing with converting currencies, the "tariffication" process can indicate distortions caused by misaligned exchange rates on prices.When this effect of misalignments is applied onto tariffs charged at the frontier, one can verify the real barriers imposed by a country to imported products and compare it to its WTO commitments.
In the case of devalued currencies, the final barrier imposed to imported products (tariffs adjusted to exchange rate misalignment) may be greater than the bound tariffs under the country s Schedule, reducing the market access and resulting on a less favorable treatment.This effect clearly impairs the aim of Article II, which is the maintenance of the negotiated market access, in a perspective of trade liberalization.
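Purely to illustrate the arithmetic of this "tariffication" exercise, the comparison can be sketched as follows (all figures are hypothetical and the stacking formula is a simplified assumption, not a methodology endorsed by WTO practice):

```python
# Illustrative sketch: "tariffying" an exchange rate misalignment and comparing
# the resulting effective barrier with the bound tariff. All numbers hypothetical.

def effective_barrier(applied_tariff: float, devaluation: float) -> float:
    # A devaluation of the importing country's currency makes imports more
    # expensive, acting like an extra ad valorem charge stacked on the tariff.
    # `devaluation` is the fractional gap below the equilibrium rate (0.20 = 20%).
    return (1 + applied_tariff) * (1 + devaluation) - 1

applied, bound, devaluation = 0.10, 0.15, 0.20
barrier = effective_barrier(applied, devaluation)
print(f"effective barrier: {barrier:.1%} vs bound tariff: {bound:.1%}")
# effective barrier: 32.0% vs bound tariff: 15.0% (the binding is exceeded de facto)
```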
The provision requires, nevertheless, that such less favorable treatment be accorded by the country that is importing the products. In other words, the treatment must be attributable to the government; it cannot be the result of external circumstances. If one considers that the misalignment is the result of a country's policy, regardless of whether the devaluation was the aim of that policy or just a side effect, it is possible to argue that the country is according a treatment less favorable than the one negotiated under the WTO, in violation of Article II:1(a).
Article II:1(b), in turn, states that products described in the Schedules shall: "[...] be exempt from ordinary customs duties in excess of those set forth and provided therein. Such products shall also be exempt from all other duties or charges of any kind imposed on or in connection with the importation in excess of those imposed on the date of this Agreement or those directly and mandatorily required to be imposed thereafter by legislation in force in the importing territory on that date."
In other words, countries have to keep their applied tariffs at a level equal to or lower than their bound tariffs and shall not impose any other kind of duty connected with importation that exceeds the duties negotiated under the WTO. It should be noted that paragraph 1(b) is more specific than paragraph 1(a), and its violation automatically entails a violation of paragraph 1(a).
Tariffs adjusted for the exchange rate misalignment may be considered as charged in excess of the ordinary customs duties set forth in the country's Schedule. The exchange rate misalignment distorts the applied tariff and may raise it above the bound tariffs, especially in developed countries and in countries that have recently acceded to the WTO, which have a very narrow margin between applied and bound tariffs. This tariff in excess of the bound tariff would constitute a de facto violation of Article II:1(b), and consequently of Article II:1(a), reducing market access and disregarding the negotiations of tariff reductions through the GATT and WTO rounds (HUDSON et al., 2011).
The impacts of exchange rate misalignments on GATT Article II:1 are of great concern, since they directly affect the guarantees the WTO system offers regarding the respect of market access commitments made by its members. The distortions caused by those misalignments on market access are undeniable and must be considered under the WTO, in light of the principles laid out in Article II.
Even if one considers that there is no violation of the letter of Article II:1, it is still possible to argue that its intent has been frustrated, since misaligned exchange rates can reduce the negotiated market access. In this case, the frustration of that aim may give rise to a challenge under GATT Article XV:4, combined with Article II, as explained above.
TRADE DEFENSE REMEDIES
Other instruments that might be considered when dealing with the exchange rate issue are the trade defense remedies. Antidumping rights and countervailing measures are the two WTO instruments that can be used unilaterally against unfair trade. Those instruments allow countries importing products that enjoy an unfair competitive advantage, due to the practice of dumping or the granting of subsidies, to impose duties up to the margin of that advantage, rebalancing competition in their markets. Considering the competitive advantages accorded by devalued currencies, which distort many of the WTO instruments and principles, one can ask whether there is an applicable trade defense remedy.
ANTIDUMPING
The Antidumping Agreement has very few provisions on exchange rates. The Uruguay Round Agreement was based on the concept of dumping and on the practices developed during the GATT period, which, in turn, was created under a par value system of currencies, where exchange rate misalignments were not a concern.
Dumping is the practice of charging an export price below the comparable price of the product, in the ordinary course of trade, in the internal market of the exporting country (the normal value). The concept of dumping, as presented above, is based on the difference between these two prices of a product (ADA, Article 2.1), and not on a comparison between the export price actually practiced and the export price the product would have if the currency were not misaligned. The competitive advantage that products from countries with devalued currencies may enjoy is therefore not addressed under the Antidumping Agreement.
Under this "price dumping" concept, the issue of exchange rates appears only during the price comparison, when the normal value, expressed in a local currency, must be converted into the same currency used for the export price. Article 2.4.1 states that the conversion shall be made at the rate of the date of sale. During the negotiations, this provision was proposed due to the fact that "[...] the amount of the dumping margin may differ significantly, depending on the exchange rate to be used on specific case" (GATT, Submission of Japan on the Amendments to the Antidumping Code, Multilateral Trade Negotiations - The Uruguay Round, MTN.GNG/NG8/W/48, 1989, p. 5). An exception is made for fluctuations in exchange rates, which shall be ignored. Finally, exporters shall have at least 60 days to adjust their export prices to reflect sustained movements in exchange rates during the investigation.
The Article reflects some of the impacts of exchange rate fluctuations on trade, providing mechanisms to adjust the calculation of dumping margins to sudden variations of exchange rates during the period of investigation, which could lead to an inadequate comparison between the export price and the normal value. Nevertheless, it does not consider these variations after the implementation of antidumping rights, nor does it consider the effects of exchange rate misalignments on the determination of injury.
Exchange rate misalignments can affect antidumping rights at two different moments: during the investigation, when determining the injury caused by the dumped products, and during the application of antidumping rights, when exchange rates vary.
When the conversion of the normal value is made in accordance with Article 2.4, there should be no impact of exchange rates on the determination of the dumping margin, since the same rate is used both in the establishment of the export price by the producer (who considers its production costs in local currency) and in the conversion of the normal value into the export currency on the date of sale. Since fluctuations shall be ignored, both rates should be the same, regardless of any misalignment.
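This point can be verified with simple arithmetic. The following sketch (hypothetical prices and rates) shows that, when the date-of-sale rate is used on both sides of the comparison, the computed margin does not depend on the level of the exchange rate, however misaligned:

```python
# Minimal sketch: with an Article 2.4-style conversion at the date-of-sale rate,
# the dumping margin is insensitive to the level of the exchange rate.
# All prices and rates below are hypothetical.

def dumping_margin(normal_value_local, export_price_fx, rate_on_sale_date):
    # the rate converts local currency into the export currency
    normal_value_fx = normal_value_local * rate_on_sale_date
    return (normal_value_fx - export_price_fx) / export_price_fx

home_price_local = 100.0
for rate in (0.50, 0.40):                             # equilibrium vs. 20%-devalued rate
    export_price_fx = home_price_local * rate * 0.9   # exporter undercuts home price by 10%
    print(rate, round(dumping_margin(home_price_local, export_price_fx, rate), 3))
# Both rates yield the same margin (0.111): the misalignment cancels out.
```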
The problem arises when determining the injury caused by the dumped products in the importing country's market. When a dumped product comes from a country with a devalued currency, besides the unfair competitive advantage arising from the price dumping itself, it also enjoys an advantage from its lower price due to the currency conversion at favorable rates. This second factor can significantly increase the injury caused by the imports of such products to the domestic industry of the importing country, which cannot compete with these artificially low prices. The amount of injury, used for the determination of the antidumping rights (Article 9.1), will be much greater than the injury that would be caused by the dumping alone. This is relevant because injury is essential to the application of antidumping rights. If no injury is caused by the dumping, but an injury is found due to the currency devaluation, antidumping rights may be implemented in contradiction with their original aim of countering harmful price dumping.
In the case of antidumping, a devalued currency may thus facilitate the application of antidumping rights. This demonstrates the importance of addressing the issue of exchange rates under the WTO, since it distorts many trade instruments, in different manners. The correction of such distortions is in the interest of all countries, since countries with both overvalued and devalued currencies are affected by exchange rate misalignments.
The second unaddressed impact on antidumping rights occurs after the investigation. Exchange rate fluctuations affect antidumping duties, which are frequently charged as specific duties denominated in foreign currencies. This distortion will only be adjusted at the sunset review; until then, producers may be charged a much higher or lower antidumping duty, depending on whether the currency has devalued or appreciated.
To address the competitive advantages of a product arising from currency devaluation, another instrument would have to be created, since the Antidumping Agreement has no provisions on the issue, except for the few mentioned above. This new instrument would be based on a comparison between the export price actually practiced and the export price that would be practiced if the currency were at its medium-term equilibrium. This is the concept of "currency dumping".
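The measurement implied by this concept can be sketched as follows (a hypothetical illustration; the equilibrium rate would in practice have to come from an agreed methodology, which is precisely what is missing today):

```python
# Hedged sketch of a "currency dumping" margin: compare the observed export
# price with the price the same shipment would carry at the equilibrium rate,
# assuming the exporter's local-currency price stays unchanged. Hypothetical data.

def currency_dumping_margin(export_price_fx, rate_actual, rate_equilibrium):
    counterfactual_price = export_price_fx * rate_equilibrium / rate_actual
    return (counterfactual_price - export_price_fx) / export_price_fx

# A currency 20% below equilibrium (0.40 actual vs 0.50 FX units per local unit):
print(round(currency_dumping_margin(36.0, 0.40, 0.50), 3))   # -> 0.25
```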
The concept is not new, and some countries have had domestic provisions on it. The national legislation of South Africa, for instance, stated that: "84. The dumping duties which may be imposed in terms of section eighty-three, shall be the following, namely - [...] (e) 'exchange dumping duty', which shall be the amount by which the actual cost of the goods as defined in section eighty-five is less than such cost expressed in the currency of the territory of origin or export of the goods and converted into Union currency at a rate which the Minister is hereby authorized to determine and notify in the Gazette." (Anti-dumping and Countervailing Duties - Secretariat Analysis of Legislation, Contracting Parties Twelfth Session, 1957, L/712, p. 94-95).
Furthermore, the negotiations of the Havana Charter considered this possibility, within a proposal to create four kinds of dumping that could be the object of antidumping measures: price, service (freight), currency and social dumping (UN, ECOSOC, Report of the Drafting Committee of the Preparatory Committee of the United Nations Conference on Trade and Employment, E/PC/T/34, 1947, p. 13).
The proposal was rejected, possibly because under a par value system the idea of currency dumping seemed a remote risk, since the IMF would already provide enough guarantees to prevent countries from manipulating their currencies to obtain competitive advantages. Nevertheless, in a floating exchange rate system, the concept of a currency dumping remedy seems very plausible, creating a trade defense mechanism that would allow countries to offset the advantages acquired by imported products due to exchange rate misalignments.
Despite the absence of a currency dumping provision, GATT states, in its Second Ad Note to paragraphs 2 and 3 of Article VI, that: "Multiple currency practices can in certain circumstances constitute a subsidy to exports which may be met by countervailing duties under paragraph 3 or can constitute a form of dumping by means of a partial depreciation of a country's currency which may be met by action under paragraph 2. By 'multiple currency practices' is meant practices by governments or sanctioned by governments." (Emphasis added).
This provision was included at the request of South Africa, which stated: "Mr. Chairman, the South African delegation raised this matter of multiple currency rates in relation to what we term 'exchange dumping duties'. We had these expressions of opinion and we withdrew our endeavours to get the proposed new paragraph 7 [1] written into this particular Article, by virtue of the fact that this commentary was to be included in the notes of this meeting." (ECOSOC, Verbatim Report of the Twentieth Meeting of Commission A to the Second Session of the Preparatory Committee of the United Nations Conference on Trade and Employment, E/PC/T/A/PV/20, 1947, p. 34) [2]
The provision shows that, although no specific instrument to counter currency dumping was created, and even though the IMF would exert control over exchange rates, some currency practices were still considered by the members as a form of countervailable subsidy or dumping. Even under the supervision of the IMF over exchange rates, a form of currency dumping was thus addressed by the contracting parties.
SUBSIDIES
Another trade defense instrument that should be analyzed when dealing with exchange rates is the countervailing measure. The SCM Agreement regulates the granting of subsidies and allows members to charge countervailing duties in order to offset the effects of subsidies on imports that are found to be injuring domestic producers.
Economically, the devaluation of a country's exchange rate can be considered a subsidy, since it is a governmental policy, which includes the buying of foreign currency in order to keep the country's own currency at artificially low rates, and it has the effect of lowering the prices of exported products, granting them a competitive advantage in other markets. One can ask whether this economic concept can be framed within the concept of subsidy under the WTO, allowing the use of countervailing measures.
The SCM has a much more restricted concept of subsidy: "1. For the purpose of this Agreement, a subsidy shall be deemed to exist if: (a)(1) there is a financial contribution by a government or any public body within the territory of a Member [...]; or (a)(2) there is any form of income or price support in the sense of Article XVI of GATT 1994; and (b) a benefit is thereby conferred."
In the case of a devalued exchange rate, the conferred benefit is evident, as a devalued currency allows a product to have a lower price in the external market than it would have under an exchange rate at its equilibrium. The benefit can be recognized by the fact that the beneficiary is placed in a better position than it would occupy in the absence of the subsidy (Canada - Aircraft, AB Report, para. 161).
A greater difficulty arises from the identification of a financial contribution. This requirement was intended to ensure that not all governmental measures that confer benefits could be deemed subsidies (US - Export Restraints, Panel Report, para. 8.85). This indicates that the expression "financial contribution" cannot be understood in a wide sense, encompassing all governmental measures that confer a benefit. A financial contribution is an act or an omission involving the transfer of money or the provision of certain goods or services (US - Softwood Lumber III, Panel Report, para. 7.24).
In order to classify a devalued currency as a subsidy, it is first necessary to prove that the government is taking action to keep its currency artificially low. The devaluation cannot be the result of an external economic context; there must be a governmental act or omission. Nevertheless, even if a government is deliberately manipulating its currency, one still has to identify the transfer of money between the government and the beneficiary. It could be argued that this transfer lies in the act of converting the currency, by buying foreign exchange at a price lower than the regular market price, or than what should be the market price, since the official exchange rate is the misaligned one.
Once again, the Second Ad Note to paragraphs 2 and 3 of GATT Article VI states that multiple currency practices can in certain cases constitute a form of subsidy. As the Articles of GATT shall be read together with the SCM, this implies that, in some circumstances, one can identify a financial contribution, and therefore a subsidy, in certain currency practices. The provision thus foresees that some currency practices might be deemed subsidies by the contracting parties and could be the object of countervailing measures.
To be actionable under the WTO, a subsidy must also be specific. A specific subsidy is either a prohibited subsidy (contingent upon export performance or upon the use of domestic over imported goods) or a subsidy specific to an enterprise or industry, or a group of enterprises or industries (Article 2). In the case of multiple currency practices, the specificity was evident, since a lower exchange rate would be accorded only to some sectors; an Ad Note was therefore included under GATT Article XVI in order to allow multiple rates in accordance with the IMF.
In the case of a single devalued exchange rate, it seems difficult to consider as specific a rate that is, in principle, available to all exporters, investors, and so on. Nevertheless, if one can prove that the exchange rate is directly linked to the volume of a country's exports, varying as this volume rises or falls, it is possible to argue that the subsidy arising from a devalued exchange rate is contingent upon exports and is thus a prohibited subsidy under SCM Article 3, which could be the object of a fast-track panel.
Furthermore, by the provision of SCM Article 2.3, such a prohibited subsidy shall also be deemed specific, actionable by countervailing duties if it is proved to cause injury to the domestic industry of another member (LIMA CAMPOS; GIL, 2012).
A proposal dealing with trade defense remedies as a mechanism to address the effects caused by misaligned exchange rates is the "Currency Exchange Rate Oversight Reform Act of 2011". The American bill, still pending approval, seeks to address the effects caused by fundamentally misaligned exchange rates through negotiations or through the implementation of trade defense remedies.
The Secretary of the Treasury shall submit to Congress an annual report on monetary policy and currency exchange rates, which will include a list of currencies designated as fundamentally misaligned. Some of those currencies will be designated for priority action, according to factors such as: protracted interventions in the currency exchange market; accumulation of foreign reserves; and restrictions on the inflow or outflow of capital for balance of payments purposes.
The Secretary of the Treasury shall seek bilateral consultations with the countries with misaligned currencies, so that these countries can adopt measures to address the issue, and shall also seek advice from the IMF in the case of currencies designated for priority action. In this last case, if no policy is adopted by the respective country, antidumping initiations taken by the US shall take the exchange rate effect into consideration, by adjusting the price used to establish the export price so as to reflect the misalignment of the currency of the exporting country. The US government shall also: prohibit procurement by the Federal Government of products and services from the country, if it is not a party to the Agreement on Government Procurement; request that the IMF consult with the country with the misaligned currency under Article IV of the IMF Articles of Agreement; and not approve any financing of projects located in the country.
One year after the designation of a currency for priority action, if no measure has been adopted, the government of the US shall: request consultations at the WTO with such country; and consider undertaking remedial intervention in the international currency markets.
The draft bill also proposes an amendment to the Tariff Act of 1930, in order to allow the initiation of investigations to determine whether currency undervaluation by the government of a country provides a countervailable subsidy. The initiation of the investigation shall be mandatory for currencies designated for priority action. The proposal also contains a provision stating that "the fact that a subsidy may also be provided in circumstances that do not involve export shall not, for that reason alone, mean that the subsidy cannot be considered contingent upon export performance".
The Act thus proposes both the creation of a currency dumping remedy, to be implemented together with the regular price dumping remedy, and the use of countervailing duties to address exchange rate issues.
ARTICLE XXIII
One of the most unique rules of the WTO system is Article XXIII, which deals with non-violation issues: measures that affect a member's benefits, gained through negotiations, although they do not violate any of the WTO rules.
GATT Article XXIII:1(b) states that, if a member considers that a benefit accruing to it is being nullified or impaired as a result of the application by another member of any measure, whether or not it conflicts with the provisions of GATT, the member may take the issue to the DSB. The disputes arising from Article XXIII can therefore be either violation or non-violation complaints.
The logic of this provision is that competitive opportunities, legitimately expected from tariff concessions, can be frustrated both by measures inconsistent with GATT and by measures consistent with it (EEC - Payments and subsidies paid to processors and producers of oilseeds and related animal-feed proteins, para. 144; EC - Asbestos, AB Report, para. 185). It is therefore necessary to protect the balance of concessions under the WTO, by providing means to redress any action that impairs members' legitimate expectations arising from tariff negotiations. In this sense, the Panel in Japan - Film considered that "[...] the safeguarding of the process and the results of negotiating reciprocal tariff concessions under Article II is fundamental to the balance of rights and obligations to which all WTO members subscribe" (Panel Report, para. 10.35). The possibility of pursuing a complaint under Article XXIII based on misaligned currencies was affirmed in a Working Party of the GATT. In 1979, during the discussions on the application of Article II:6(a), which allows the adjustment of specific duties by members with devalued currencies, under the new context of floating exchange rates, a question was raised about the possibility of applying that Article in the opposite situation: whether contracting parties with appreciated currencies should be required to reduce their specific duties, in order to maintain the negotiated level of market access. The Working Party agreed not to pursue the matter, noting that contracting parties could resort to Articles XXII and XXIII of GATT if they considered that the currency appreciation impaired, in a particular case, the value of specific duty concessions (GATT, Report of the Working Party on Specific Duties, L/4858, 2 November 1979, p. 6).
A devalued currency that reduces or nullifies other members' tariffs could thus affect the negotiated level of market access and give rise to a complaint, regardless of the violation of any WTO rule, if three elements are met: the application of a measure by a WTO member; a benefit accruing under the GATT; and the nullification or impairment of this benefit as a result of the application of the measure (Japan - Film, Panel Report, para. 10.41).
The word "measure" has a broad definition which encompasses binding government action as well as measures that have an effect similar to a binding one (Japan - Film, Panel Report, paras. 10.47-50). Regarding currency misalignments, one can argue that governmental policies that aim to keep exchange rates at a certain level below or above their fundamental equilibrium could be considered a measure within the meaning of Article XXIII. It is important to note that the Article requires an action by a government that results in the misalignment. The misalignment itself cannot be the object of a complaint under Article XXIII. Therefore, misalignments deriving from instability in the global economy, which express only the floating character of some exchange rates, could not be challenged under a non-violation complaint.
Regarding the benefit, it is usually understood as the legitimate expectation of improved market access. In a market with a devalued currency, other members will face a more restricted market access, with higher effective tariffs once the effects of exchange rates are considered, which can easily be classified as an impairment of the legitimate expectation of improved market access. The opposite reasoning can also be made: when members with devalued exchange rates export their products, they are, by giving an incentive to their exports through devalued exchange rates, making other members concede a larger market access than the one that was negotiated, impairing expectations of a balanced level of concessions between these members.
An important aspect of those expectations of a benefit is that, in order to be legitimate, the measures must not have been reasonably anticipated at the time of the tariff concessions. This can be a problem when dealing with longstanding policies of currency devaluation. It could be argued that this specific situation could have been foreseen during the negotiations of tariff concessions and that, therefore, there is no imbalance to be redressed.
Lastly, one is required to prove the existence of a nullification or impairment of that benefit. Such nullification or impairment should be understood as an upsetting of the competitive relationship between domestic and imported products, clearly caused by the measure at issue. When dealing with policies of currency devaluation, such nullification is evident, since these policies can give a significant competitive advantage to products, nullifying the protection performed by tariffs and further causing trade diversion.
Article XXIII can thus be a useful remedy in some cases of distortions caused by exchange rate misalignments. Nevertheless, it does not directly address the issue, leaving in place the systemic distortions caused by misaligned exchange rates.
CONCLUSIONS
Although the GATT and, later, the WTO have a few provisions on exchange rates, proving the direct relation between exchange rates and international trade, a more consistent regulation of the issue was never a primary concern. The exchange rate misalignments that caused similar trade imbalance concerns after the end of the dollar/gold standard were all met with political negotiations among a few interested parties. In the present political landscape, with multiple relevant actors, however, such accords are harder to reach, as can be noted with the G-20.
After the 2008 financial crisis and the political choice of some of the biggest economies to devalue their currencies in order to stimulate economic recovery and growth (notably the US, the EU, China and some other Asian countries), the problem has arisen again, and the multilateral system has found itself unfit to offer solutions. The issue has proved hard to solve, bringing unpredictability and tension to international trade, with accusations of trade protectionism from all sides.
Three alternatives can be foreseen to avoid the detrimental impact of persistent exchange rate misalignments on international trade. First, although unlikely in the present international scenario, a political agreement along the lines of the Plaza and Louvre Accords could be reached among the principal interested parties, namely China, the US and a few other countries, in order to achieve a compromise over the misalignment of their exchange rates.
Secondly, a multilateral negotiation should take place in the WTO to adapt the existing rules to the new international trade landscape, taking into consideration the new roles and focus of the IMF and the WTO. Exchange rates are no longer controlled efficiently by the IMF, being one of several macroeconomic tools that countries can resort to in order to equilibrate their balance of payments. Presently, IMF Article IV reviews take into consideration the renewed focus of the organization and no longer demand that countries avoid manipulating their exchange rates. This broad review and the place of exchange rates within it are perfectly adapted to the IMF's objectives, but they bring deep consequences for international trade and for the proper functioning of WTO law.
The members of the WTO should address this new reality and work on reforms to the multilateral trading rules in order to offer the necessary mechanisms to neutralize the effects of persistent exchange rate misalignments on members' trade. This negotiation has to be conducted alongside representatives of the WTO and the IMF, so as to guarantee the coherence between the organizations required by the Marrakesh mandate (Marrakesh Agreement Establishing the World Trade Organization, Article III:5).
Finally, there are a few provisions under the WTO Agreements that could be applicable to the exchange rate issue. A case study of a specific devalued currency could establish the possibility of challenging a WTO member under the Dispute Settlement System, based on violation (and non-violation) of WTO provisions due to its currency misalignment. Article XXIII and, especially, Article XV:4 combined with Article II:1 could be argued before a panel in order to address specific governmental measures thought to be causing exchange rate misalignments.
In principle, countries that maintain devalued currencies violate Article XV:4 (frustration), through the frustration of the intent of Article II:1 (applied ad valorem tariffs above bound tariffs).
The evolutions undergone by the Bretton Woods System were not properly incorporated by the IMF and the WTO, and more rules could be negotiated in the multilateral trading system in order to create stronger mechanisms to deal with the impacts of exchange rates on trade. Nevertheless, in the absence of these new rules, the main provision on the relationship between exchange rates and trade, GATT Article XV:4, can be a useful tool to prevent, through the WTO Dispute Settlement Mechanism, further damaging effects of misaligned exchange rates on the multilateral trading system.
NOTES
[1] Paragraph 7 of Article 17 of the Havana Charter Draft would have established the concept of currency dumping.
[2] UN, ECOSOC, Verbatim Report of the Twentieth Meeting of Commission A to the Second Session of the Preparatory Committee of the United Nations Conference on Trade and Employment, E/PC/T/A/PV/20, 28 June 1947, p. 34.
Model reduction for networks of coupled oscillators
We present a collective coordinate approach to describe coupled phase oscillators. We apply the method to study synchronisation in a Kuramoto model. In our approach an N-dimensional Kuramoto model is reduced to an n-dimensional ordinary differential equation with n<<N, constituting an immense reduction in complexity. The onset of both local and global synchronisation is reproduced to good numerical accuracy, and we are able to describe both soft and hard transitions. By introducing 2 collective coordinates the approach is able to describe the interaction of two partially synchronised clusters in the case of bimodally distributed native frequencies. Furthermore, our approach allows us to accurately describe finite size scalings of the critical coupling strength. We corroborate our analytical results by comparing with numerical simulations of the Kuramoto model with all-to-all coupling networks for several distributions of the native frequencies.
Introduction
The collective behaviour of interacting oscillators in complex networks is ubiquitous in nature and has occupied scientists from as disparate areas as biology, engineering, mathematics, physics and sociology for many years now [10,24,1,18,2]. These systems often exhibit collective synchronisation whereby some or all oscillatory agents assume the same phase. Synchronisation behaviour is strongly dependent, amongst other factors, on the nature of the distribution of the native frequencies. In the case where all oscillators are connected with each other and where their native frequencies are unimodally distributed, for example, the onset of synchronisation as a function of the coupling strength is a soft transition, where the order parameter increases smoothly from zero as in a second-order phase transition. On the other hand, in the case of uniformly distributed frequencies, the onset of synchronisation is a hard transition, where at the critical coupling strength the order parameter attains a non-zero value as in first-order phase transitions, with possible hysteresis [10,20,6,11]. Capturing all these different dynamic behaviours is a challenging task.
The collective behaviour of coupled oscillators, such as synchronisation, suggests that the dynamics of complex systems may (at least in certain cases) be described by a low dimensional dynamical system. To find these dimension-reduced descriptions is a formidable challenge, with some remarkable results in recent years [19,22,13,12,23]. In this work we propose a new approach to describe coupled phase oscillators and their non-trivial dynamics. Our approach is not restricted to the thermodynamic limit of infinitely many oscillators and allows for the study of finite size effects [20,8,9,27], apparent in any real world network.
The particular approach proposed in this work seeks to find an approximate parametrisation of the synchronisation manifold by means of appropriately chosen collective coordinates [7,14,15,16,4]. The underlying premise is that the actual solution of the dynamical system assumes a specific functional form, the parameters of which are coined collective coordinates. The temporal evolution of the actual solution is then described by the temporal evolution of those parameters, constituting an immense reduction in dimensionality. The functional form of the actual solution and the associated collective coordinates have to be specified upon inspection of numerical simulations of the underlying system. For the Kuramoto model we will establish that the phases are linearly correlated with the native frequencies, and we define the collective coordinate to be the parameter relating the two. The method deals directly with the dynamical system rather than with its associated macroscopic (infinite-dimensional) description for the distribution or moments thereof [19,22,13,12,23]. It is nonperturbative in the sense that the solution is not written as an expansion in some small parameter. The paper is organized as follows. In Section 2 we introduce the Kuramoto model, which constitutes a paradigm for studying coupled phase oscillators.
Our approach to achieve effective model reduction of the dynamics is introduced in Section 3. In Section 4 the method is applied to the Kuramoto model with all-to-all coupling with three different distributions for the native frequencies, and we compare the results of direct numerical simulations of the full Kuramoto model with those of the proposed 1- (or 2-)dimensional reduced model. We consider here a uniform native frequency distribution, where a hard onset of synchronisation is experienced, a unimodal normal frequency distribution, where a soft onset of synchronisation is experienced, and thirdly a bimodal frequency distribution, where global synchronisation is preceded by partial synchronisation of weakly coupled synchronised communities. We conclude with a summary and discussion in Section 5.
Kuramoto model
Weakly coupled limit cycle oscillators can be described in terms of their phases as an autonomous dynamical system. A widely used model which governs the dynamics of the phases ϕ_i of N oscillators with native frequencies ω_i is the celebrated Kuramoto model [10,26,1]

dϕ_i/dt = ω_i + (K/N) Σ_j a_ij sin(ϕ_j − ϕ_i).   (2.1)

The adjacency matrix A = {a_ij} determines the topology of the network and describes which oscillators are connected. We restrict our analysis to unweighted, undirected networks for which the adjacency matrix A = {a_ij} is symmetric with a_ij = a_ji = 1 if there is an edge between oscillators i and j, and a_ij = 0 otherwise. The degree of a node d_i, i.e. the number of edges emanating from node i, is then given by d_i = Σ_j a_ij. For interacting oscillators, generically there exists a critical coupling strength K_c such that for sufficiently large coupling strength K > K_c the oscillators synchronise, in the sense that they become locked to their mutual mean frequency and their phases become localized about their mean phase [10,18,26]. This type of synchronous behaviour, known as global synchronisation, occurs if the dynamics settles on a globally attracting manifold [5]. The level of synchronisation is often characterised by the order parameter [10]

r(t) e^{iψ(t)} = (1/N) Σ_j e^{iϕ_j(t)},

with 0 ≤ r ≤ 1. In practice, the asymptotic limit of this order parameter is estimated as the time average

r̄ = (1/T) ∫_{T_0}^{T_0+T} r(t) dt,

where T_0 is chosen sufficiently large to eliminate transient behaviour of the oscillators and T is a sufficiently long averaging window.
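As a concrete illustration of (2.1) and of the time-averaged order parameter r̄, a minimal simulation sketch may be helpful (NumPy/SciPy assumed; the network size, coupling strength and integration horizon are arbitrary illustrative choices, not the values used in the figures of this paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N, K = 100, 1.5                                  # illustrative size and coupling
omega = rng.normal(0.0, np.sqrt(0.1), N)         # native frequencies
A = np.ones((N, N))                              # all-to-all coupling, a_ij = 1

def kuramoto(t, phi):
    # dphi_i/dt = omega_i + (K/N) sum_j a_ij sin(phi_j - phi_i), cf. (2.1)
    diff = phi[None, :] - phi[:, None]
    return omega + (K / N) * np.sum(A * np.sin(diff), axis=1)

sol = solve_ivp(kuramoto, (0.0, 200.0), rng.uniform(0, 2 * np.pi, N),
                t_eval=np.linspace(100.0, 200.0, 1000))  # t < 100 discarded as transient
r_t = np.abs(np.mean(np.exp(1j * sol.y), axis=0))        # r(t) from the order parameter
print("time-averaged order parameter r_bar =", r_t.mean())
```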
In the case of full synchronisation, with ϕ_i(t) = ϕ_j(t) for all pairs i, j and for all times t, we obtain r̄ = r = 1. In the case where all oscillators behave independently with random initial conditions, r̄ = O(1/√N) indicates incoherent phase dynamics; values in between indicate partial coherence.
Collective coordinate approach
We will employ a non-perturbative approach to study synchronisation. Our approach is borrowed from the theory of solitary waves, where it is known as the collective coordinate approach [25]; it has since been used in the context of dissipative pattern forming systems [7,14,15,16,4]. The method we propose makes explicit use of the functional form of the phases as suggested by numerical simulations. The parameters describing the functional form of the phases constitute the collective coordinates. For example, if observations reveal that the functional form of the solution is bell-shaped at all times, the collective coordinates might be the amplitude and width of a Gaussian. The temporal evolution of the full solution is then described by the temporal evolution of the collective coordinates, i.e. how the amplitude and the width of the Gaussian evolve in time. Of course, a specific assumed functional form is typically only an approximation of the actual solution. To eke the most out of the assumed ansatz, the collective coordinates are determined so as to optimally describe the solution. The most appropriate notion of optimality is to require that the error made by restricting the solution to the assumed ansatz is minimised. Minimisation is achieved if the error is orthogonal to the subspace of solutions spanned by the collective coordinates. This projection yields an evolution equation for the collective coordinates which allows one to describe the actual solution at all times.
We now establish the method of collective coordinates for the Kuramoto model in detail. Without loss of generality we assume that the mean frequency is zero (unless stated otherwise). Let us assume that the nodes are labelled in order of increasing native frequencies, i.e. i = 1 denotes the node with the most negative native frequency ω_1 and i = N denotes the node with the most positive native frequency ω_N. In Figure 1 we show a snapshot of the phases ϕ_j obtained by a numerical simulation of the Kuramoto model with an underlying Erdős-Rényi topology with N = 200 oscillators at a coupling strength K = 9.5. The associated order parameter is r̄ = 0.78, indicating a high level of synchronisation. The figure shows that the phases of oscillators with native frequencies of sufficiently small absolute value are frequency locked and correlate highly with the underlying native frequency distribution. This observation suggests that the phases of those frequency-locked oscillators may be approximated by

ϕ_i(t) = α(t) ω_i.   (3.1)

Oscillators with large absolute native frequencies, which could not be entrained at a given coupling strength, do not obey this functional relationship but rather oscillate with their native frequencies. The ansatz (3.1) is trivially exact for K = 0 with α(t) = t. Furthermore, in the case of all-to-all coupling the ansatz (3.1) can be formally motivated for large coupling strength as follows. The stationary Kuramoto model (2.1) can be rewritten as ω_i = −Kr sin(ψ − ϕ_i), with ψ being the mean phase [10]. Expanding ϕ_i = ψ + arcsin(ω_i/(rK)) in 1/K for large coupling strength yields, up to first order, ϕ_i = ψ + ω_i/(rK). Since the Kuramoto model (2.1) is invariant under constant phase shifts we may set ψ = 0, leading to our ansatz (3.1).
Our method consists of assuming that the phases of the N oscillators are approximately given by our ansatz (3.1). The time-dependent amplitude α(t) takes the role of a collective coordinate. Our goal is to find an evolution equation for α(t), thereby reducing the N-dimensional Kuramoto model of phase oscillators to a one-dimensional ordinary differential equation for α(t) (in Section 4.3 we will see how to modify the approach to include more collective coordinates). We do so by requiring that the error made by restricting the solution to the subspace defined by the ansatz (3.1) is minimised. Inserting (3.1) into (2.1), the error is

E_i = α̇ ω_i − ω_i − (K/N) Σ_j a_ij sin(α(ω_j − ω_i)).

Minimisation is achieved by assuring that the error E_α is orthogonal to the restricted subspace spanned by (3.1). We therefore require that the error E_α is orthogonal to the tangent space of the solution manifold (3.1), which is spanned by ∂ϕ_i/∂α = ω_i. Projecting the error onto this subspace, i.e. requiring Σ_i E_i ω_i = 0, yields the desired evolution equation for α,

α̇ = 1 + (K/(N² Σ²)) Σ_i Σ_j a_ij ω_i sin(α(ω_j − ω_i)),   (3.2)

where Σ² = (1/N) Σ_i ω_i². Solutions α⋆ solving (3.2) with α̇ = 0 correspond to phase-locked solutions; for native frequencies with nonzero mean Ω they rotate uniformly with frequency Ω and have phases ϕ_j = α⋆ ω_j + Ωt. The existence of such solutions corresponds to a synchronised state. The advantage of this approach is that it allows one to study the onset of synchronisation of the N-dimensional network by analysing a one-dimensional problem, and furthermore that it allows one to study synchronisation for finite network size N.
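For concreteness, here is a sketch of the reduced dynamics, integrating the projected equation (3.2) as reconstructed above (all-to-all coupling; names and parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

N, K = 100, 1.5
omega = np.sort(np.random.default_rng(1).normal(0.0, np.sqrt(0.1), N))
omega -= omega.mean()                    # zero-mean frame, as assumed in the text
Sigma2 = np.mean(omega**2)
diff = omega[None, :] - omega[:, None]   # omega_j - omega_i

def alpha_dot(t, y):
    a = y[0]
    # (3.2) with a_ij = 1: the N-dimensional model collapses to one ODE for alpha
    return [1.0 + K / (N**2 * Sigma2) * np.sum(omega[:, None] * np.sin(a * diff))]

sol = solve_ivp(alpha_dot, (0.0, 50.0), [0.0], max_step=0.1)
alpha_star = sol.y[0, -1]
print("alpha settles to", alpha_star)    # a fixed point signals a synchronised state
phi_locked = alpha_star * omega          # reconstructed phase-locked phases via (3.1)
```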
In the limit N → ∞ we can simplify the expressions by introducing the frequency distribution g(ω) and the variance of the frequencies σ_ω² = lim_{N→∞} Σ². In an all-to-all coupling network with a_ij = 1 for all i, j we obtain

α̇ = 1 + (K/σ_ω²) ∫∫ ω sin(α(ν − ω)) g(ω) g(ν) dω dν.   (3.4)

The order parameter r̂ restricted to solutions ϕ_j(t) = α(t) ω_j is introduced as

r̂ e^{iψ} = (1/N) Σ_j e^{iα ω_j}.   (3.5)

In the limit N → ∞ the real part yields

r̂ = ∫ cos(αω) g(ω) dω,   (3.6)

where we used that our ansatz (3.1) implies ψ = 0 for the mean phase. We remark that this approach is not restricted to all-to-all network topologies. For example, in an Erdős-Rényi network, where nodes are connected independently with probability p and where the degrees d_j are Poisson-distributed with mean degree d = pN, the inner sum in (3.2) can be evaluated as a sum of (on average) d random variables, and the evolution equation for α(t) is then evaluated in the limit N → ∞ as

α̇ = 1 + (Kd/(N σ_ω²)) ∫∫ ω sin(α(ν − ω)) g(ω) g(ν) dω dν,

i.e. the all-to-all result (3.4) with the coupling strength rescaled by the connection probability p = d/N. In the next Section we will employ our framework to study the synchronisation properties of all-to-all coupling networks for several frequency distributions g(ω).
Examples
[Figure 1 caption: The continuous line depicts a smooth cubic function. The corresponding value of the order parameter is r̄ = 0.78.]
We now set out to illustrate the capabilities of the collective coordinate approach to describe the synchronisation behaviour of phase oscillators in a Kuramoto model with an all-to-all coupling topology. We do so by determining the steady state solution α⋆ and the order parameter r̂, in the case of finite N as well as in the thermodynamic limit N → ∞, for three different distributions of the native frequencies: uniform distribution, normal distribution and bimodal distribution. The results from the collective coordinate approach are then compared with results from direct numerical simulations of the corresponding Kuramoto model (2.1).
Rather than performing averages over realisations of the native frequencies according to the respective distributions, we perform the calculations for the collective coordinate approach by choosing N values of the native frequencies such that the probability of a random draw of a native frequency falling in the interval (ω_i, ω_{i+1}) is equal for all values of i.
Uniform distribution of native frequencies
In a first suite of experiments we consider native frequencies which are distributed uniformly on the interval [−1, 1] with distribution

g(ω) = 1/2 for ω ∈ [−1, 1], and g(ω) = 0 otherwise.   (4.1)

Dividing the compact support [−1, 1] of the frequency distribution into N − 1 intervals of equal measure, i.e. ω_i = 2(i − (N + 1)/2)/(N − 1) for i = 1, ..., N, the evolution equation (3.2) for α for finite N is readily evaluated as

α̇ = 1 + (K/(N² Σ²)) Σ_i Σ_j ω_i sin(α(ω_j − ω_i)).   (4.2)
In the thermodynamic limit this simplifies to

α̇ = 1 + 3K (sin α/α²) (cos α − sin α/α),   (4.3)

with σ_ω² = lim_{N→∞} Σ² = 1/3. The expression (3.6) for the order parameter simplifies in the thermodynamic limit to

r̂ = sin α/α.   (4.4)

In Figure 2 we show the order parameter r̄ as a function of the coupling strength K obtained from a long time integration of the full Kuramoto model (2.1). The onset of synchronisation appears to be hard (see, for example, Pazó [20]), i.e. there exists a nonzero value of the order parameter at the critical coupling strength K_c. The collective coordinate approach captures this very well, as shown in Figure 2. Figure 3 shows that within the framework of collective coordinates the hard onset of synchronisation is described as a saddle node bifurcation [24]: for K > K_c = 1.234 a pair of stationary solutions ϕ_j = αω_j (a smaller stable and a larger unstable one) exists; at criticality the two solutions collide in a saddle node bifurcation at α = α_c ≈ 1.303, and there are no stationary solutions for K < K_c. Expanding the right-hand side of (4.3) around the critical value α_c yields as an approximation of the stable and unstable stationary solutions α_{s,u} close to criticality

α_{s,u} ≈ α_c ∓ √(m (K − K_c)),   (4.5)

with the critical coupling strength K_c and m = 0.270/0.177. Figure 3 shows a numerical evaluation of the stationary solutions α of (4.3) as well as the approximate solutions (4.5). Note that the stable stationary solution is well approximated for a large range of coupling strengths K, even far away from criticality. We now analyse the order parameter r̂ as given by (4.4). Figure 2 shows the order parameter as a function of the coupling strength obtained from a numerical simulation of a large network with N = 10,000 nodes simulating the Kuramoto model (2.1), and as calculated within the collective coordinate framework using (4.4). The critical coupling strength for the full Kuramoto model with N = 10,000 is K_c = 1.279, which is close to the exact analytical result for the thermodynamic limit, K_c = 4/π ≈ 1.273 [10,20,29]. This is well approximated by our simple model with an error of 3%. The non-zero order parameter at the hard transition, which is r_c = π/4 ≈ 0.785 in the thermodynamic limit [10,20,29], is estimated as r_c = 0.744 within the collective coordinate approach, implying a 5% error. Note that the order parameter is extremely well approximated for large values of the coupling strength. This is not surprising since, as pointed out in Section 3, the collective coordinate ansatz (3.1) is consistent with an expansion of the stationary solution in 1/K for all-to-all coupling networks.
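The critical values quoted here can be recovered from the reconstructed reduced equation (4.3) with a few lines of numerics (a simple grid scan; a root finder would sharpen the values):

```python
import numpy as np

def f(alpha):
    # Stationarity of (4.3) reads 1 + 3K f(alpha) = 0, hence K(alpha) = -1/(3 f(alpha))
    return np.sin(alpha) / alpha**2 * (np.cos(alpha) - np.sin(alpha) / alpha)

alpha = np.linspace(0.05, 3.1, 200000)     # f < 0 on (0, pi), so K(alpha) > 0 there
K_of_alpha = -1.0 / (3.0 * f(alpha))
i = np.argmin(K_of_alpha)                  # the saddle node sits at the minimum of K(alpha)
alpha_c, K_c = alpha[i], K_of_alpha[i]
r_c = np.sin(alpha_c) / alpha_c            # order parameter (4.4) at criticality
print(alpha_c, K_c, r_c)                   # approx. 1.30, 1.23, 0.74
```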
A particular advantage of our approach is that it allows us to study the finite size scaling of synchronisation behaviour [20,8,9,27]. In Figure 4 we show a comparison of the critical coupling strength K_c(N) as calculated via our collective coordinate approach for variable network sizes N with results from direct simulations of the Kuramoto model (2.1). The difficulty in determining the critical coupling strength K_c in finite size networks is that the order parameter has fluctuations of the order 1/√N, which confound the onset. As a proxy for the critical coupling strength we record for each value of N the smallest value of the coupling strength K such that r̄ > 0.8. We have also used the criterion whereby the critical coupling is determined as the coupling strength at which the minimal value of the order parameter r(t) over some sufficiently long time window changes from values close to zero to values significantly above zero [28]. This method yields very pronounced onsets, but is not able to detect global synchronisation in the case when it is preceded by partial synchronisation. We therefore present only results obtained using the first method. In the case of uniformly distributed native frequencies, however, both methods yield the same results. Linear regression suggests a power-law scaling of K_c(N) − K_c⋆ with N, where we estimate the critical coupling strength in the thermodynamic limit K_c⋆ as K_c⋆ = 1.279 for the full Kuramoto model and K_c⋆ = 1.234 for the collective coordinate approach.
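Within the collective coordinate framework, the finite-size critical coupling can be obtained by scanning for the loss of stationary solutions of the finite-N equation (4.2); a sketch (grid resolutions are illustrative):

```python
import numpy as np

def K_c_of_N(N, alpha=np.linspace(0.01, 3.1, 1000)):
    # Smallest K for which (4.2) admits a fixed point. From
    # 1 + (K/Sigma^2) S(alpha) = 0 with
    # S(alpha) = (1/N^2) sum_ij omega_i sin(alpha (omega_j - omega_i)),
    # K(alpha) = -Sigma^2 / S(alpha); minimise over alpha.
    omega = 2 * (np.arange(1, N + 1) - (N + 1) / 2) / (N - 1)  # equiprobable grid on [-1, 1]
    Sigma2 = np.mean(omega**2)
    diff = omega[None, :] - omega[:, None]
    S = np.array([np.mean(omega[:, None] * np.sin(a * diff)) for a in alpha])
    return (-Sigma2 / S[S < 0]).min()

for N in (25, 50, 100, 200, 400):
    print(N, round(K_c_of_N(N), 4))    # approaches the N -> infinity value 1.234
```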
Besides being able to describe the collective behaviour of the oscillators and the onset of synchronisation, the collective coordinate approach also captures the temporal evolution of individual oscillators, through the evolution equation (3.2) or its equivalent formulation (4.2) for uniformly distributed native frequencies.
For sufficiently small coupling strengths K, where the oscillators interact only weakly, both models produce indistinguishable trajectories with phases growing linearly in time (not shown). Figure 5 shows a comparison of actual trajectories for a network with N = 101 oscillators at coupling strength K = 1.5 > K_c, where the collective coordinate approach describes the order parameter r̄ ≈ 0.9 very well (cf. Figure 2). We show a comparison of the phase of the 75th oscillator, ϕ_75, with native frequency ω_75 = 0.48, obtained by solving the full Kuramoto model (2.1) and by solving (4.2) for the collective coordinate approach (3.1). If the initial conditions are chosen to satisfy ϕ_j(0) = α_0 ω_j, with the initial condition α(0) = α_0 not too far away from its equilibrium solution, the two trajectories are reasonably close (top panel). This correspondence of the time evolution of the solutions of the full Kuramoto model and of the collective coordinate approach is destroyed for initial conditions which are too far from the asymptotic state, i.e. if α_0 is chosen too large. Their asymptotic states, however, will be close, and both systems will evolve to the same fixed point, implying that the order parameter r̄ will be close for the two systems. Similarly, if the initial conditions ϕ_j(0) of the Kuramoto model are randomly distributed around the initial condition implied by the collective coordinate ansatz (3.1), the asymptotic temporal evolution of the full Kuramoto model and of the reduced collective coordinate system are close (not shown). This is consistent with the previous observation that the order parameters r̄ are close for the respective systems, as shown in Figure 2. We show a snapshot depicting the phases of all oscillators in the phase-locked state, illustrating that the collective coordinate approach captures the dynamics of the full model. Deviations occur for the extreme oscillators with the largest absolute values of the native frequencies.
As we have seen in Figure 2, the collective coordinate approach predicts the onset of synchronisation for smaller values of K than observed for the actual Kuramoto model. For coupling strengths where the order parameter differs significantly between the reduced model and the full model there is, of course, also no correspondence between the temporal evolution of the phases, nor between their asymptotic dynamics. We remark that we obtain similar results for networks differing by several orders of magnitude in size. For small networks of, for example, size N = 20, the phases are very well recovered if the native frequencies are chosen such that they divide the interval [−1, 1] into equiprobable partitions. For a particular random draw from the uniform distribution, the phases and their asymptotic states may differ, though, in particular for oscillators with large absolute native frequencies. This discrepancy can be alleviated for the well-synchronised oscillators if averages over many realisations of the native frequencies are taken. With increasing network size, the differences between solutions obtained for random realisations of the native frequencies become smaller.
Normal distribution of native frequencies
In a second suite of experiments, we consider native frequencies which are normally distributed, ω_i ∼ N(0, σ_ω²). The distribution is given by

g(ω) = exp(−ω²/(2σ_ω²))/Z,

with normalisation constant Z = √(2πσ_ω²). We use here σ_ω² = 0.1. The evolution equation (3.2) for α for finite N can be evaluated for random draws of ω_i, but we omit the cumbersome expressions here. In the thermodynamic limit the dynamic model for the collective coordinate (3.4) simplifies to

α̇ = 1 − Kα exp(−α²σ_ω²).   (4.7)

The equation for the order parameter (3.6) can be evaluated in the thermodynamic limit to

r̂ = exp(−α²σ_ω²/2).   (4.8)

It is well known that in the case of unimodal frequency distributions the onset of synchronisation is soft [10,18]. This is illustrated in Figure 6, where r̄ is shown as a function of the coupling strength. At the so-called "Kuramoto coupling" K = K_l the order parameter becomes non-zero and a few oscillators with native frequencies close to the mean frequency mutually synchronise; increasing the coupling strength allows more and more oscillators to synchronise, implying a continuous change of the order parameter r̄(K), as opposed to the hard transition in the case of uniformly distributed native frequencies described in the previous subsection. At some coupling strength K = K_c global synchronisation sets in, affecting all oscillators [30]. In the thermodynamic limit N → ∞ the Kuramoto coupling can be approximated by K_l = 2/(πg(0)) ≈ 0.505 [10]. The transition to global synchronisation is not visible, however, by just looking at the order parameter r̄ determined from numerical simulations of the full Kuramoto model (2.1). We will now show that the collective coordinate approach is able to describe both the onset of global synchronisation at K = K_c and the onset of local synchronisation at the "Kuramoto coupling" K = K_l. The onset of global synchronisation can be calculated as before. In Figure 6 we show the result of the collective coordinate approach (4.8), which predicts the onset of global synchronisation at K_c ≈ 0.730 with a non-zero value of r̂_c ≈ 0.779. By construction, the ansatz (3.1) cannot describe local synchronisation, where only a subset of the N phase oscillators is phase locked. We now modify the collective coordinate approach to allow for local synchronisation. We denote by N_l the size of the mutually synchronised local group, consisting of those N_l oscillators with frequencies closest to the mean frequency zero, and by C the set of their indices. Hence we restrict our solutions to obey

ϕ_i(t) = α(t) ω_i for i ∈ C.   (4.9)

The evolution equation for the collective coordinate α(t) is again obtained by projecting the error made by the ansatz (4.9) onto the restricted subspace spanned by (4.9). We obtain

α̇ = 1 + (K/(N N_l Σ_l²)) Σ_{i∈C} Σ_{j∈C} ω_i sin(α(ω_j − ω_i)),   (4.10)

where the variance of the local group of frequencies is

Σ_l² = (1/N_l) Σ_{i∈C} ω_i².   (4.11)

This is just the analogous formulation of (3.2) for a group of oscillators, centred around ω_i = 0, of size N_l. Assuming that all those oscillators which can synchronise do so, the size of the locally synchronised group of oscillators N_l can be determined as the maximal value of N_l which supports stationary solutions of (4.10) for a given coupling strength K. Note that N_l = N for K ≥ K_c. Figure 7 shows how the normalised domain length of the locally synchronised cluster increases from zero to L_domain > 0 at K = K_l and then reaches L_domain = 1 at K = K_c, at which point global synchronisation sets in. The Kuramoto coupling, i.e. the smallest value of K which gives rise to a non-zero value of L_domain, is estimated for N = 1000 by our approach as K_l ≈ 0.5, corresponding very well to the numerically observed onset of local synchronisation.
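The determination of N_l sketched above lends itself to a direct numerical implementation; the following is an illustrative sketch (the factorisation in the comment uses the near-symmetry of the cluster frequencies about zero, and the bisection assumes that fixed points persist as the cluster shrinks):

```python
import numpy as np
from scipy.stats import norm

def has_fixed_point(w, K, N, alpha):
    # For a frequency set symmetric about zero the double sum in (4.10) factorises:
    # sum_ij w_i sin(alpha (w_j - w_i)) = -(sum_i w_i sin(alpha w_i)) (sum_j cos(alpha w_j))
    aw = alpha[:, None] * w[None, :]
    S = -(np.sin(aw) @ w) * np.cos(aw).sum(axis=1)
    rhs = 1.0 + K / (N * np.sum(w**2)) * S
    return (rhs <= 0).any()            # a zero crossing of alpha' signals a fixed point

def largest_cluster(K, N=1000, sigma2=0.1):
    q = (np.arange(1, N + 1) - 0.5) / N
    omega = np.sort(norm.ppf(q, scale=np.sqrt(sigma2)))  # equiprobable frequency grid
    alpha = np.linspace(0.01, 50.0, 5000)
    lo, hi = 0, N                      # bisect on the cluster size, centred on omega = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        k0 = (N - mid) // 2
        if has_fixed_point(omega[k0:k0 + mid], K, N, alpha):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(largest_cluster(0.6))            # N_l < N: only local synchronisation at K = 0.6
```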
The asymptotic value is given by K_l = 2/(πg(0)) ≈ 0.505 [10]. It is pertinent to mention that in the case of uniformly distributed native frequencies no stationary solutions α exist for any N_l < N, consistent with the absence of local synchronisation and the existence of a hard transition, as seen in Figure 2. In Figure 8 we illustrate again that the collective coordinate approach can be used to study finite size scaling. For normally distributed native frequencies the numerics suggest a finite size scaling of K_c(N) − K_c⋆ ∼ N^(−2/3).
We show again a comparison of the actual temporal evolution of individual oscillators. Figure 9 shows results for the global synchronisation regime at K = 0.9 and Figure 10 for the local synchronisation regime at K = 0.6. In the case of the local synchronisation regime we assume that the oscillators which do not take part in the synchronised cluster simply oscillate with their native frequencies and satisfy ϕ_i(t) = (α_0 + t)ω_i. The temporal evolution is well described by the collective coordinate approach in both cases. It is clearly seen that, whereas the collective coordinate approach captures the dynamics of the well-entrained oscillators well, it has difficulties describing the dynamics of the entrained extreme oscillators with large absolute native frequencies, as seen in the insets of Figures 9 and 10. This discrepancy is due to the collective coordinate approach, as employed here, not taking into account the interaction with the drifting extreme oscillators.
Bimodal distribution of native frequencies
In a third suite of experiments, we consider native frequencies which are distributed according to a bimodal distribution with maxima at ω = ±Ω,

g(ω) = (1/(2Z)) [exp(−(ω − Ω)²/(2σ_ω²)) + exp(−(ω + Ω)²/(2σ_ω²))], Z = √(2πσ_ω²).

We choose here σ_ω² = 0.1 and Ω = 0.75. The bimodal distribution for these parameters is depicted in Figure 11.
The synchronisation behaviour of Kuramoto networks with bimodal frequency distributions is more complex than in the two previous examples [10,3,5,17,12,21]. If the two peaks are sufficiently close together, the behaviour is, roughly speaking, as described in the unimodal case discussed in the previous section, with local synchronisation being organised by oscillators with native frequencies closest to the mean frequency zero. However, when the peaks are sufficiently separated, a so-called standing wave state [5] occurs at some critical coupling strength K = K_p, whereby the oscillators with native frequencies close to the peak frequencies ±Ω may synchronise and form two synchronised clusters which rotate with the same frequency but in opposite directions. Upon increasing the coupling strength further, the oscillators will eventually globally synchronise at a critical coupling strength K = K_c [12,21]. In Figure 12 we show a snapshot of the phases for the case K_p < K < K_c, where two partially synchronised clusters are established, centred around the nodes with ω_i = ±Ω, respectively, which together form the standing wave state. In Figure 13 we show the order parameter r̄, where one can see clearly the standing wave state for K_p < K < K_c and global synchronisation for K > K_c, with K_p ≈ 1.05 and K_c ≈ 1.7.
First we apply our approach to the problem of global synchronisation, i.e. for K > 1.7. In the thermodynamic limit the dynamic model for the collective coordinate (3.4) becomes

α̇ = 1 − (K/(σ_ω² + Ω²)) exp(−α² σ_ω²) cos(Ωα) (σ_ω² α cos(Ωα) + Ω sin(Ωα)). (4.14)

The equation for the order parameter (3.6) can be evaluated in the thermodynamic limit to

r̄ = exp(−α² σ_ω²/2) |cos(Ωα)|. (4.15)

We have again omitted the cumbersome expressions for the case of finite N, which nevertheless can readily be put into a numerical programme. Figure 13 shows the remarkable skill of the collective coordinate approach in describing the onset of global synchronisation and the order parameter r̄. The critical coupling strength for global synchronisation at K_c = 1.70 is well captured. Furthermore, finite-size scaling can be described within our framework, as shown in Figure 14, where we show a comparison of the critical coupling strength K_c(N) as calculated via our collective coordinate approach for variable network sizes N and results from direct simulations of the Kuramoto model (2.1). As before we use as a proxy for the critical coupling strength the smallest value of the coupling strength K such that r̄ > 0.8. The normalised size L_domain of the globally synchronised cluster, which we determine as the largest number of nodes for which non-trivial stationary solutions α exist, is depicted in Figure 15. The smooth gradual decrease of L_domain with decreasing coupling strength K is replaced here by a different behaviour caused by the standing wave state and the partial synchronisation of oscillators with native frequencies close to ±Ω.
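Based on the thermodynamic-limit expressions (4.14) and (4.15), the critical coupling for global synchronisation can be found numerically with a few lines; the bisection bracket and the restriction to the first branch with cos(Ωα) > 0 are our own choices in this sketch.

```python
import numpy as np

sigma2, Omega = 0.1, 0.75
a_grid = np.linspace(0.0, np.pi / (2 * Omega), 4001)   # branch with cos(Omega*a) > 0

def alpha_dot(a, K):
    # thermodynamic-limit equation (4.14) for the bimodal distribution
    pref = K / (sigma2 + Omega**2) * np.exp(-a**2 * sigma2) * np.cos(Omega * a)
    return 1.0 - pref * (sigma2 * a * np.cos(Omega * a) + Omega * np.sin(Omega * a))

def globally_locked(K):
    return np.min(alpha_dot(a_grid, K)) <= 0.0   # a stationary alpha exists

lo, hi = 0.5, 3.0
for _ in range(60):                               # bisection for K_c
    mid = 0.5 * (lo + hi)
    if globally_locked(mid):
        hi = mid
    else:
        lo = mid
Kc = 0.5 * (lo + hi)
f = alpha_dot(a_grid, 1.001 * Kc)                 # just above threshold
a_star = a_grid[np.where(np.diff(np.sign(f)) < 0)[0][0]]
r_bar = np.exp(-a_star**2 * sigma2 / 2) * abs(np.cos(Omega * a_star))  # (4.15)
print(f"K_c = {Kc:.3f},  r_bar at onset = {r_bar:.3f}")
```

The bisection returns K_c ≈ 1.70 for σ_ω² = 0.1 and Ω = 0.75, matching the critical coupling quoted above.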
Oscillators with native frequencies ω ≈ ±Ω near the maxima of the native frequency distribution experience local synchronisation similar to the case of unimodally distributed native frequencies discussed in Section 4.2. In the case of a bimodal frequency distribution this leads to two partially synchronised clusters, one with frequency close to −Ω and one with frequency close to +Ω (cf. Figure 12). With increasing coupling strength K the two clusters grow in size and start to interact before, upon further increasing K, they merge at the onset of global synchronisation. We recall that this scenario only occurs provided the two peaks of the distribution of the native frequencies are sufficiently far separated, allowing for a range in K for which they can partially synchronise without interacting too strongly [12] to form the standing wave state. We now set out to describe the standing wave state in our collective coordinate approach.
In order to describe the effect of two partially synchronised clusters which rotate with non-uniform angular speeds of opposite direction, we modify our ansatz and introduce a time-dependent phase function f(t) as an additional collective coordinate. We split the phase oscillators into two groups, one group ϕ_i^− describing the cluster centred around −Ω, and one group ϕ_i^+ describing the cluster centred around +Ω. We make the ansatz

ϕ_i^± = α(t)(ω_i^± ∓ Ω) ± f(t), (4.16)

where ω_i^± are the native frequencies of the nodes participating in the cluster centred around ±Ω. Motivated by the results from direct simulations of the Kuramoto model (cf. Figure 12, which clearly shows two partially synchronised clusters with frequencies centred around ±Ω = ±0.75, rotating with angular velocities of opposite sign and together forming a standing wave state),
we assume that each of the clusters consists of N_2 ≤ N/2 oscillators. Projecting the error onto the restricted subspace spanned by (4.16), i.e. onto ∂ϕ_i^±/∂α = (ω_i ∓ Ω) and onto ∂ϕ_i^±/∂f = ±1, yields the desired evolution equations for α(t) and f(t). Projecting onto ∂ϕ_i^−/∂α and ∂ϕ_i^−/∂f yields

α̇ = 1 + (K/(N N_2 σ̂²)) [ Σ_{i,j} ν_i sin(α(ν_j − ν_i)) + Σ_{i,j} ν_i sin(α(ν_j − ν_i) + 2f) ], (4.17)

ḟ = Ω − (K/(N N_2)) Σ_{i,j} sin(α(ν_j − ν_i) + 2f), (4.18)

where here ν_i = ω_i + Ω denotes the frequency deviations within the cluster ϕ_i^−, and the variance of the cluster frequencies is

σ̂² = (1/N_2) Σ_i ν_i². (4.19)

The sums are taken over indices representing the nodes within the clusters ϕ_i^− (cf. (4.10)); in the second sum of (4.17) and in (4.18), the index j runs over the nodes of the opposite cluster ϕ_j^+ with ν_j = ω_j − Ω. Due to symmetry, projecting onto ∂ϕ_i^+/∂α and ∂ϕ_i^+/∂f reduces to the same equations. The first sum on the right-hand side of (4.17) describes the interaction of oscillators within the partially synchronised cluster, whereas the second sum describes the interaction of oscillators of one cluster with those of the respective other cluster. In the thermodynamic limit N → ∞, the evolution equations for the collective coordinates simplify in the case when N_2 = N/2 to

α̇ = 1 − (K/2) exp(−α² σ_ω²) α (1 + cos 2f), (4.20)

ḟ = Ω − (K/2) exp(−α² σ_ω²) sin 2f. (4.21)

Whereas in the case of global synchronisation the collective coordinate evolves to a stationary value, in the standing wave regime solutions of the system (4.17)-(4.18) or (4.20)-(4.21) are oscillatory. These solutions can be found numerically. The order parameter can then be calculated as an average of (3.6) over one period T_p of the phase function f(t) and is given in the thermodynamic limit as

r̄ = (1/T_p) ∫_0^{T_p} exp(−α(t)² σ_ω²/2) |cos f(t)| dt. (4.22)

In the thermodynamic limit the period T_p can be determined analytically. Defining the collective coordinate ᾱ as an average over the period T_p, the Adler equation (4.21) can be solved analytically as

tan f(t) = (A + √(Ω² − A²) tan(√(Ω² − A²) t))/Ω, (4.23)

with A = (K/2) exp(−ᾱ² σ_ω²). The associated period T_p is then defined as

T_p = ∫_0^π df/(Ω − A sin 2f) = π/√(Ω² − A²). (4.24)

Note that because there are two counter-rotating clusters, the integration only goes to π rather than to 2π. In Figure 13 we show results of the collective coordinate approach for the order parameter r̄ as a function of the coupling strength K. In practice we first test for global synchronisation, and if this cannot be achieved for any domain length L_domain, we test for the standing wave state. We have again allowed for local synchronisation whereby not all of the N/2 oscillators ϕ_i^− are synchronised (cf. Figure 12), analogously to (4.9) and (4.10). The onset of the standing wave state at K_p = 1.05 is very well captured. The size of the synchronised clusters is shown in Figure 15, where we count the total sum of locally synchronised oscillators ϕ_i^− and ϕ_i^+ in the case of the standing wave state for K < 1.7.
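The oscillatory standing-wave solutions of (4.20)-(4.21) are easily generated numerically. The sketch below (our own integration; initial conditions and step size are arbitrary) measures the period of the phase function f(t) and compares it with the analytic prediction (4.24), T_p = π/√(Ω² − A²).

```python
import numpy as np

sigma2, Omega, K = 0.1, 0.75, 1.1

def rhs(y):
    a, f = y
    e = np.exp(-a**2 * sigma2)
    da = 1.0 - 0.5 * K * e * a * (1.0 + np.cos(2 * f))   # (4.20)
    df = Omega - 0.5 * K * e * np.sin(2 * f)             # (4.21), an Adler equation
    return np.array([da, df])

y, dt, T, t = np.array([0.5, 0.0]), 1e-3, 200.0, 0.0
alphas, crossings = [], []
while t < T:                                             # fixed-step RK4 integration
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1); k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    y_new = y + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    if np.floor(y_new[1] / np.pi) > np.floor(y[1] / np.pi):
        crossings.append(t)          # f advanced by pi: one standing-wave period
    y, t = y_new, t + dt
    alphas.append(y[0])

Tp_num = np.mean(np.diff(crossings[5:]))       # measured period, transients discarded
a_bar = np.mean(alphas[len(alphas) // 2:])     # time-averaged collective coordinate
A = 0.5 * K * np.exp(-a_bar**2 * sigma2)
Tp_ana = np.pi / np.sqrt(Omega**2 - A**2)      # analytic period (4.24)
print(f"T_p numerical = {Tp_num:.2f},  T_p analytic = {Tp_ana:.2f}")
```

For K = 1.1 the measured and analytic periods should lie in the range of the values T_p ≈ 5.8 and T_p ≈ 5.6 discussed below for the direct simulations.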
In Figure 16 we show a comparison of the actual temporal evolution of individual oscillators in the global synchronisation regime at K = 2.5, and in Figure 17 in the standing wave regime at K = 1.1. The phases of the drifting oscillators, which are not included within the collective coordinate analysis, are plotted simply by assuming that they oscillate with their native frequencies. The actual phase dynamics of the synchronised oscillators is well described by our collective coordinate approach. One sees clearly the oscillatory behaviour of the phases in the standing wave regime, which is caused by the interaction of the two counter-rotating clusters. The oscillation with period T_p = 5.8 is well captured by the dynamics of the collective coordinates and matches approximately the analytically obtained period T_p = 5.6 if we use the sample mean and variance of the native frequencies instead of Ω and σ_ω² in (4.24).
Summary and Discussion
The collective coordinate approach we propose allows for the description of networks of N oscillators. The dimension N is drastically reduced to a few, n, judiciously chosen collective coordinates; here we presented examples with n = 1 and n = 2. The approach is not restricted to the thermodynamic limit of infinite network size and allows one to study finite networks. The approach can be used to study the synchronisation behaviour of networks, both global and partial, and to determine the order parameter and the size of the synchronised clusters. Besides capturing this collective behaviour of oscillators, the collective coordinate approach is also able to resolve the temporal evolution of individual oscillators for a wide range of coupling strengths.
We have corroborated our approach for the Kuramoto model with all-to-all coupling in numerical simulations for different distributions of the native frequencies. We found good agreement of our reduced 1-dimensional model (or 2-dimensional model in the case of bimodal native frequency distributions) with the full N-dimensional system. In particular, the behaviour of the order parameter was well captured, and the approach was able to describe soft second-order as well as explosive first-order transitions to synchronisation. We have illustrated that the collective coordinate approach reproduces finite-size scalings of the full system. Furthermore, the approach allowed us to describe the interplay between a standing wave state involving partially synchronised counter-rotating clusters and global synchronisation in networks with bimodal distributions of native frequencies. We have shown that the collective coordinates are able to capture the dynamics of individual oscillators, which is a much stronger form of approximation than just reproducing the collective behaviour.

Figure 15: Normalised length of the phase synchronised domain as a function of the coupling strength for a network with bimodal native frequency distribution, calculated using the collective coordinate approach. The globally synchronised branch with L_domain = 1 is preceded for K < 1.7 by a standing wave state in which, for K close to 1.7, all oscillators are involved (i.e. L_domain = 1), but where the two partially synchronised clusters are not oscillating in phase.
It is pertinent to caution that the method is by no means rigorous. The choice of collective coordinates is so far limited to a priori information obtained from direct numerical simulations of the full dynamical network. We have seen that the transitory temporal evolution of oscillators in a Kuramoto model is only well described by the collective coordinate method provided the initial conditions are sufficiently close to the synchronisation manifold. Furthermore, the temporal evolution of individual oscillators at the edge of a synchronised cluster is not accurately captured. To put our ansatz on a firm theoretical footing which allows one to describe its limitations is an open problem.

From a practical point of view, there are several issues which require further attention and which we plan to pursue in future research. First, whereas the general framework of collective coordinates is formulated for general network topologies, we have only presented numerical results for the case of all-to-all coupling. It is an interesting and important question to see whether the success of the method translates to more complex network topologies. Second, it is by no means clear that our ansatz captures all possible attractors of the full dynamical system. For example, there are examples of networks where the Ott-Antonsen method of reduction [19] does not account for the actual dynamical behaviour observed in these networks (see the discussion in Martens et al. [12]). In particular, chaotic dynamics is excluded from their framework. The collective coordinate approach is, in principle, capable of recovering chaotic dynamics by considering at least three collective coordinates. To test whether it actually is able to describe more complex dynamic behaviour is an interesting avenue to pursue. Third, the success in describing the interaction between two partially synchronised clusters in the case of bimodally distributed native frequencies suggests that collective coordinates may be used to reduce complex networks involving several clusters or communities. Fourth, as we have seen in the numerical simulations, the collective coordinate approach does not capture the interaction between the drifting oscillators and the synchronised oscillators. This leads to the collective coordinate approach not being able to accurately capture the oscillators which sit on the edge of the cluster. As a next step one can extend the approach to include the drifters.
Observation of Conduction Band Satellite of Ni Metal by 3p-3d Resonant Inverse Photoemission Study
Resonant inverse photoemission spectra of Ni metal have been obtained across the Ni 3p absorption edge. The intensity of the Ni 3d band just above the Fermi edge shows an asymmetric Fano-like resonance. Satellite structures are found at about 2.5 and 4.2 eV above the Fermi edge, which show resonant enhancement at the absorption edge. The satellite structures are due to a many-body configuration interaction and confirm the existence of the 3d^8 configuration in the ground state of Ni metal.
Inverse photoemission spectroscopy (IPES) is an important technique to investigate the unoccupied density of states (DOS) of a solid. Combining photoemission spectroscopy (PES), which measures the occupied DOS, with IPES measurements gives complementary information about the valence and conduction band DOS. 1 The IPES technique has two measurement modes: the Bremsstrahlung Isochromat Spectroscopy (BIS) mode and the Tunable Photon Energy (TPE) mode. BIS measurements are easier than TPE measurements because they do not use a photon monochromator, and sensitive band-pass filters are available in the X-ray and vacuum ultraviolet (VUV) regions. This has led to the early development of the X-ray BIS (XBIS) and ultraviolet BIS (UVBIS) techniques. 2,3 The observation of IPES in the soft X-ray (SX) region, corresponding to energies from several tens of eV to about 1 keV, is still experimentally difficult because the emission intensity in IPES is extremely weak. We succeeded in the observation of the resonant IPES (RIPES) of Ce compounds 4,5 near the Ce 4d absorption region, using a monochromator developed for SX emission spectroscopy (SXES). 6 The obtained results are consistent with the Ce 3d RIPES by Weibel et al., 7 though the surface effect is strong. Furthermore, RIPES of Ti compounds 8,9 was also measured across the Ti 3p edge, and a weak satellite has been found.
Ni metal is an itinerant ferromagnet which has been used as a classic reference to test the validity of new experimental and theoretical techniques in the study of the electronic structure of solids. Beginning with the Stoner condition in the mean-field approximation or the local density approximation (LDA), 10 many spectroscopic studies of Ni metal have provided important insights in the study of solids, e.g. resonant PES, 11,12,13 angle-resolved PES, 14 magnetic circular dichroism (MCD), 15,16 and spin-resolved PES. 17,18,19 Furthermore, UVBIS 20 and XBIS 21 spectra of Ni metal have also been reported, as well as spin-polarized 22,23,24 and k-resolved 25,26,27 IPES. The observed electronic structure of Ni is, however, still an important subject of study that many researchers are interested in, since it is not understood within standard band theory, and only recent dynamical mean field studies 28 provide a consistent description of its magnetic properties and electronic structure.
It is well known that the so-called "6-eV satellite" is observed in the PES spectrum at about 6 eV from the Fermi energy E_F. 11,17 This satellite is known as the two-hole bound state, meaning that two 3d holes are bound at the same Ni site in the final state; it has a 3d^8 final state (3d^9 initial state). 29 Another satellite was found at a higher energy than 6 eV, and it was assigned to the 3d^7 final state (3d^8 initial state). 30 Furthermore, it was suggested by analysis of the MCD spectra that the 3d^8 configuration with 3F symmetry exists with a weight of 15-20% in the ground state. 31,32 Sinkovic et al. found a triplet feature of the 3d^8 configuration at 6 eV by means of spin-resolved PES. 19 The main 3d configuration of a Ni atom in Ni metal is 3d^9 in the ground state. From a many-body viewpoint, 3d^10 and 3d^8 should be mixed in addition to 3d^9 due to electron transfer. Then, the ferromagnetism is considered to be caused by Hund's coupling in the 3d^8 configuration, as it reduces the energy cost of an electron transfer. In fact, such a viewpoint has been proposed as an origin of ferromagnetism in Ni. 33 In this context, an experimental measurement of the 3d^8 weight is of great importance.
In this study, we report resonant IPES of Ni metal across the Ni 3p absorption edge. Since the IPES process adds an electron to the ground state, IPES should give us new information on the ground-state configuration. Figure 1 shows the energy diagram of RIPES. In a normal IPES process, an electron that is incident upon a solid surface decays radiatively to states at lower energy. In a 3d^n-electron system, the normal IPES process is expressed as

3d^n + e → 3d^(n+1) + hν, (1)

where e denotes the incident electron. If the electron energy is higher than the binding energy of a core level, the core electron can be excited and ejected out of the system. Then, the created core hole decays radiatively (fluorescence) or nonradiatively (Auger process). The fluorescence process is

c3d^n → 3d^(n-1) + hν, (2)

where c denotes a core hole. On the other hand, if the energy of the incident electron is close to the Ni 3p → 3d absorption edge, a second-order process,

3d^n + e → c3d^(n+2) → 3d^(n+1) + hν, (3)

would take place. Because of the interference between (1) and (3), a resonance effect would be observed. IPES measurements of Ni were performed on both a polycrystal and a (110) single crystal. The polycrystalline sample was evaporated on a Mo substrate at a pressure of < 1 × 10^-8 Torr. Measurements were performed at a low temperature of about 14 K. The cleanliness of the sample was checked by measuring the O 1s fluorescence. The measurement chamber pressure was < 3 × 10^-10 Torr throughout the measurements. The single crystal was measured at several excitation energies. The (110) sample was cleaned by Ar-ion bombardment and annealing. The cleanliness was checked by Auger and LEED measurements.
A soft X-ray monochromator, which consists of a Rowland-type grazing-incidence monochromator with a 5-m spherical grating (300 lines/mm), was used in this experiment. 5,6 The incidence angle of the monochromator was fixed at 85.98°. Two types of multichannel detectors, a PIAS (for wide range) and a CR-chain (for high resolution) (Hamamatsu Photonics), were used as photon detectors. The absolute energies of the spectra were calibrated by measuring the Fermi edge of Au.
A filament-cathode-type and a BaO-cathode-type electron gun were used for excitation. The kinetic energy of the excitation electrons was calibrated by an energy analyzer. The excitation electrons were incident normally for the polycrystal, and off-normal for Ni(110). The intensity of the emission spectra is proportional to the third power of the photon energy. Figure 2 shows RIPES spectra of the polycrystalline sample, obtained for various energies across the Ni 3p absorption edge. Numbers beside the spectra indicate excitation energies. In this figure, the observed spectra, which have energies close to the excitation energies, are plotted with respect to the relative energy from the Fermi edge. The spectrum at 54.0 eV, which is sufficiently below the absorption edge, corresponds to a normal IPES spectrum. This spectrum agrees with the spectra observed in XBIS 20 and UVBIS. 21 From comparison with band calculations, 34 the structure just above the Fermi edge and the broad peak at about 10 eV are assigned to the Ni 3d and Ni 4sp bands, respectively.
When the excitation energy is higher than 66.1 eV, a core electron is excited. Thus, the emission spectrum then includes both IPES and fluorescence components. The Ni 3d → 3p fluorescence peak is observed at a constant emission energy of about 65 eV. On the relative energy scale of Fig. 2, the position of this peak therefore shifts with the excitation energy, as indicated by the vertical bars. The Ni 3d peak just above E_F becomes very weak when the excitation energy is around 66.1 eV, where the fluorescence peak has almost the same emission energy. On the other hand, the Ni 4sp peak does not seem to change its intensity with changing excitation energy. In addition to these structures, weak structures are observed at around 2.5 and 4.2 eV, as indicated by the dotted line. These structures are observed only for excitation near the absorption edge.
The inset in Fig. 2 shows the peak intensities of the Ni 3d and Ni 4sp peaks plotted versus the excitation energy. Filled circles and squares denote the intensity of Ni 3d and Ni 4sp, respectively. The open squares and triangles are calculated intensities that are discussed below. 35 The Ni 3d spectrum has a dip at about 66 eV and shows an asymmetric lineshape typical of a Fano-type resonance. 36 A similar resonance has been observed in resonant photoemission studies of Ni. 12,30 On the other hand, the Ni 4sp peak does not change its intensity with changing excitation energy, although at higher energies this cannot be conclusively stated because of an overlap with the fluorescence signal. In the spectra excited at about 58-66 eV, the peak intensity of the satellites is resonantly enhanced, as seen in Fig. 2.
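Such an asymmetric excitation profile is conventionally parametrized by the Fano form σ(ε) ∝ (q + ε)²/(1 + ε²), with reduced energy ε = (E − E0)/(Γ/2). The following sketch is purely illustrative: the resonance energy, width and asymmetry parameter are made-up values chosen to place the dip near 66 eV, not parameters fitted to the measured Ni spectra.

```python
import numpy as np

def fano(E, E0, Gamma, q, amp=1.0, bg=0.0):
    """Fano lineshape amp*(q + eps)**2/(1 + eps**2) + bg, eps = (E - E0)/(Gamma/2)."""
    eps = (E - E0) / (Gamma / 2.0)
    return amp * (q + eps) ** 2 / (1.0 + eps ** 2) + bg

# illustrative parameters only (E0, Gamma, q are not fitted to the data)
E = np.linspace(58.0, 74.0, 161)
profile = fano(E, E0=67.0, Gamma=3.0, q=0.7)
print(f"dip at {E[np.argmin(profile)]:.1f} eV, maximum at {E[np.argmax(profile)]:.1f} eV")
```

The minimum of the profile lies at ε = −q, i.e. at E = E0 − qΓ/2, which is the characteristic signature of the interference between the direct process (1) and the resonant channel (3).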
The results show that the IPES of Ni 3d exhibits a resonance effect at excitation energies near the Ni 3p absorption edge. The nominal ground state of Ni is the 3d^9 configuration. It is thought, however, that the actual ground state consists of a mixture of 3d^8, 3d^9 and 3d^10 configurations. The intermediate state of RIPES has an (n+2)-electron state, as has been mentioned before. So, only the 3d^8 initial state can be resonant in the IPES process, while the 3d^9 and 3d^10 initial states cannot resonate. That is, the observed resonance confirms the existence of the 3d^8 configuration in the ground state. The existence of the 3d^8 configuration has been suggested by resonant PES 30 and MCD 31,32 measurements. However, the present result is the only direct experimental evidence of a 3d^8 initial-state configuration. Figure 3 shows a comparison between on- and off-resonance spectra. The spectra of the (110) single crystal are shown in addition to the on-resonance spectrum of the polycrystal. The spectra of the single crystal show a narrower main peak than that of the polycrystal, because they were observed in angle-resolved mode. In the on-resonance spectra of both samples, two satellite structures are observed at about 2.5 and 4.2 eV, as indicated by the dotted lines, while the off-resonance spectrum does not show them. A fluorescence component is expected in the on-resonance spectrum at the energy position marked by the arrow in Fig. 3, but it is very weak compared with the other structures. The spectrum at the bottom shows the calculated result 35 discussed in the following.
We now discuss the origin of the satellite structures. We think the satellite structures are not caused by the k-dependence of other components, because the Ni 4sp peak is observed broadly in both samples at around 10 eV, which is sufficiently higher than the satellite energies. The possibility of direct transitions, as observed in UVBIS spectra, 25 can be neglected, because the excitation energy in this study is much higher than in UVBIS.
Since the satellites are observed near the absorption edge, it is possible that the structures are caused by a many-body effect, as suggested by Tanaka and Jo. 35 The spectrum at the bottom of Fig. 3 shows the RIPES spectrum of Ni metal calculated with an impurity Anderson model including the many-body configuration interaction effect. In the calculation, the initial state of Ni metal consists of 3d^8, 3d^9 and 3d^10 configurations, and the IPES spectrum consists of the three structures arising from the bonding, non-bonding and anti-bonding states of the 3d^9 and 3d^10 configurations. The main peak near the Fermi edge corresponds to the bonding state and shows a Fano-type resonance, while the non-bonding and anti-bonding peaks at 2.5 and 4.2 eV are resonantly enhanced at the absorption edge. In this calculation, band effects are not included. If proper band effects were included, the non-bonding peak would become broader, as observed in the experimental results. The intensity changes obtained in this calculation are shown in Fig. 2. The calculated results seem to describe the intensity change of the main peak qualitatively well. From the comparison between the observed and calculated spectra, the weight of 3d^8 in Ni metal is estimated to be at least 10%. As mentioned before, a satellite called the "two-hole-bound state" is observed at 6 eV in resonant PES spectra. That satellite arises from 3d^8-dominant states, while the main peak corresponds to 3d^9-dominant states. The non-bonding state is not observed in PES spectra. The satellite energy of 6 eV in PES is larger than that of the RIPES satellites in this study. This is attributed to the fact that the satellite in PES has a 3d^8 configuration, in which the Coulomb interaction between the two holes is more effective, while the satellite in RIPES has a 3d^9 configuration. Furthermore, in the case of PES spectra, which have 3d^8 and 3d^9 final states, the multiplet splitting of the 3d^8 configuration is larger than the hybridization energy, so that the separation of the anti-bonding state from the non-bonding 3d^8 state is not obvious. On the other hand, there is no multiplet splitting due to the Coulomb interaction in the final states of IPES, because the final states have 3d^9 and 3d^10 configurations. Thus, the non-bonding state becomes observable in IPES.
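The bonding/non-bonding/anti-bonding structure invoked here can be illustrated with a deliberately minimal configuration-interaction toy model (our own sketch with made-up parameters, not the published Anderson-model calculation): a single |3d^10> final state hybridizing with two degenerate |3d^9 + band electron> states. The antisymmetric combination of the degenerate states decouples and survives as the non-bonding state, while the symmetric combination splits into bonding and anti-bonding states.

```python
import numpy as np

Delta, V = 2.5, 1.2   # configuration splitting and hybridization in eV (made-up)
# basis: |3d10>, |3d9 k1>, |3d9 k2>
H = np.array([[0.0,   V,     V  ],
              [V,     Delta, 0.0],
              [V,     0.0,   Delta]])
energies, states = np.linalg.eigh(H)
for E_n, psi in zip(energies, states.T):
    print(f"E = {E_n:5.2f} eV,  3d10 weight = {psi[0]**2:.2f}")
```

Diagonalization yields one eigenstate pinned at Delta with zero 3d^10 weight (the non-bonding state) and two mixed states below and above it (bonding and anti-bonding), mirroring the three structures in the calculated RIPES spectrum.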
In conclusion, we have observed RIPES spectra of Ni metal across the Ni 3p-3d absorption edge. Satellite structures of the Ni 3d band are observed at about 2.5 and 4.2 eV. The excitation spectrum of the Ni 3d state shows a Fano-type resonance across the Ni 3p absorption edge. The results are direct evidence for the existence of the 3d^8 configuration in the initial state of Ni metal. The satellites are described well by the cluster-model calculation including many-body configuration interaction effects. This result should help in understanding the ferromagnetism of Ni metal.
Modulation of the Na,K-pump function by beta subunit isoforms.
To study the role of the Na,K-ATPase beta subunit in the ion transport activity, we have coexpressed the Bufo alpha 1 subunit (alpha 1) with three different isotypes of beta subunits, the Bufo Na,K-ATPase beta 1 (beta 1NaK) or beta 3 (beta 3NaK) subunit or the beta subunit of the rabbit gastric H,K-ATPase (beta HK), by cRNA injection in Xenopus oocytes. We studied the K+ activation kinetics by measuring the Na,K-pump current induced by external K+ under voltage clamp conditions. The endogenous oocyte Na,K-ATPase was selectively inhibited, taking advantage of the large difference in ouabain sensitivity between Xenopus and Bufo Na,K pumps. The K+ half-activation constant (K1/2) was higher in the alpha 1 beta 3NaK than in the alpha 1 beta 1NaK group in the presence of external Na+, but there was no significant difference in the absence of external Na+. Association of alpha 1 and beta HK subunits produced active Na,K pumps with a much lower apparent affinity for K+ both in the presence and in the absence of external Na+. The voltage dependence of the K1/2 for external K+ was similar for the three beta subunits. Our results indicate that the beta subunit has a significant influence on the ion transport activity of the Na,K pump. The small structural differences between the beta 1NaK and beta 3NaK subunits result in a difference in the apparent affinity for K+ that is measurable only in the presence of external Na+, and thus appears not to be directly related to the K+ binding site. In contrast, association of an alpha 1 subunit with a beta HK subunit results in a Na,K pump in which the K+ binding or translocating mechanisms are altered, since the apparent affinity for external K+ is affected even in the absence of external Na+.
INTRODUCTION
The Na,K pump is composed of an αβ heterodimer. The two subunits are assembled soon after synthesis in the endoplasmic reticulum (Geering, 1990). Although neither hydrolysis of adenosine triphosphate (ATP) nor ion transport function by an isolated subunit has ever been demonstrated, the α subunit has been called the catalytic subunit because it includes the binding site for ATP (Pedemonte and Kaplan, 1990) and the catalytic phosphorylation site (Ohtsubo, Noguchi, Takeda, Morohashi, and Kawamura, 1990). In addition, several lines of evidence (site-directed mutagenesis, covalent binding of ouabain analogues) clearly indicate that the binding site of ouabain, the Na,K pump-specific inhibitor, is primarily located on the α subunit; for review see Horisberger, Lemas, Kraehenbühl, and Rossier (1991c). The β subunit has a major role in the maturation and the translocation of the Na,K-pump protein from the endoplasmic reticulum (ER) to the plasma membrane; for review see Geering (1991). However, the role of this β subunit, once the Na,K pump is in its mature and active form at the plasma membrane, is poorly understood.
The structure of the α subunit (with 8-10 putative transmembrane segments) and the existence of other ion-motive P-ATPases that do not include a β subunit make it probable that the α subunit forms the binding sites and the pathway for the transported cations. However, the possibility of a contribution of the β subunit (which includes only one transmembrane segment) is still open. A few recent reports point to this possible role (Eakle, Kim, Kabalin, and Farley, 1992; Schmalzing, Kröner, Schachner, and Gloor, 1992; Jaisser, Canessa, Horisberger, and Rossier, 1992a; Jaisser, Horisberger, and Rossier, 1992b; Lutsenko and Kaplan, 1992). In addition, selective chemical modifications of the β subunit of the gastric H,K-ATPase have been shown to entail loss of function of the holoenzyme (Chow, Browning, and Forte, 1992).
We have reported that the Xenopus β3NaK subunit, as well as the β1NaK isoform, could assemble with the α1 subunit to form a functional Na,K pump at the plasma membrane of Xenopus oocytes, but we were not able to detect any significant physiological differences between the α1β1NaK and α1β3NaK complexes (Horisberger, Jaunin, Good, Rossier, and Geering, 1991a). However, this approach was not powerful enough to detect small physiological differences between Na,K-pump isoforms expressed by cRNA injection, because the noise of the endogenous oocyte Na,K pump was added to the signal of the exogenous Na,K pump. More recently, using the Xenopus and Bufo α and β subunits, we could show a consistent difference in the apparent affinity for external K+ between Na,K pumps including a β subunit of the β1 and β3 isotype. In the present paper, we have extended this work by studying the kinetics of the activation by external K+ of Na,K pumps composed of an α subunit and either one of the two known amphibian isoforms of the β subunit, β1NaK or β3NaK (Verrey, Kairouz, Schaerer, Fuentes, Geering, Rossier, and Kraehenbühl, 1989; Good, Richter, and Dawid, 1990; Jaisser et al., 1992a), or the β subunit of the most closely related P-ATPase, the stomach H,K-ATPase; for review see Wallmark, Lorentzon, and Sachs (1990). In addition, the effects of the membrane potential and of the presence or absence of external Na+ on the apparent K+ affinity of all these isoforms were studied. The α subunit of the Bufo marinus Na,K-ATPase was chosen because it is well expressed in Xenopus oocytes and it confers a relative resistance to ouabain, allowing the study of the function of the artificially expressed Bufo Na,K pump after selective inhibition of the endogenous Xenopus Na,K pump (Jaisser et al., 1992a, b).
Stage V-VI Xenopus oocytes were obtained as previously described (Horisberger et al., 1991a) and were injected with 10 ng of Na,K-ATPase α subunit cRNA and 1 ng of Na,K-ATPase β subunit cRNA, or 4 ng of H,K-ATPase β subunit cRNA, in a total volume of 50 nl. We have shown previously that co-injection of α and β1NaK or β3NaK cRNA of Bufo Na,K-ATPase induces a large increase in the activity of Na,K pumps at the surface of Xenopus oocytes, when compared to oocytes injected with water, α subunit alone or β subunit alone (Jaisser et al., 1992a).
Electrophysiological Measurements of Na,K-Pump Activity
Na,K-pump activity was measured in Na+-loaded oocytes as the outward current activated by addition of K+, in the presence of K+ channel blockers, as described earlier (Horisberger et al., 1991a). Briefly, 3-5 d after cRNA injection, the oocytes were first loaded with Na+ by a 2-h exposure to a K+-free and Ca++-free solution. They were kept thereafter in a K+-free solution containing 0.4 mM Ca++ until the measurements were performed. Whole-cell currents were measured using the two-electrode voltage clamp technique. Current and voltage were recorded under voltage clamp conditions with a Dagan TEV-200 clamp instrument (Dagan Corp., Minneapolis, MN). The TL-1 DMA interface and Pclamp data acquisition program (Axon Instruments, Inc., Foster City, CA) were used to drive the voltage clamp and record voltage and current signals at a sampling rate of 1 kHz. The current signal was low-pass filtered at 25 Hz.
Whole-cell current-voltage (I-V) curves were obtained by recording the current while, starting from a holding membrane potential of -50 mV, rectangular voltage pulses (125 ms) of varying amplitude (from +80 to -80 mV) were applied every 1.5 s; the steady-state current was measured 100 ms after the start of the voltage step.
Specific Measurement of the Exogenously Expressed Na,K Pump
To measure specifically the activity of the exogenous Bufo Na,K pumps, we took advantage of the relative resistance to ouabain of the Bufo Na,K pump (K_I ≈ 50 μM) and its fast dissociation rate constant (Jaisser et al., 1992a) compared to the Xenopus Na,K pump (K_I < 0.1 μM) (Canessa, Horisberger, Louvard, and Rossier, 1992). As illustrated in Fig. 1, the activity of the endogenous Xenopus Na,K pump was inhibited by exposure for 1 min to 10 μM ouabain. We have shown that this manoeuvre completely inhibits the Xenopus Na,K pump for a period of at least 15 min (Canessa et al., 1992; Jaisser et al., 1992a). Ouabain was then removed and a 4-8-min period was allowed for the recovery of the small part of the Bufo Na,K-pump activity that could have been transiently inhibited by the 10 μM ouabain (see Fig. 1).
Activation of the Na,K Pump by External Potassium
The activation of the Na,K-pump current by external K+ was studied in two separate sets of experiments, in the presence and in the nominal absence of external Na+. In the Na+-containing solutions, the K+-induced current was measured after a stepwise increase of the K+ concentration from 0.0 to 0.3, 1.0, 3.0, and 10.0 mM. In the Na+-free solutions the K+ concentrations were 0.0, 0.02, 0.1, 0.5, and 5.0 mM. The various K+ concentrations were obtained by addition of adequate amounts of K-gluconate to the corresponding K+-free solution.

Figure 1: The holding potential was maintained at -50 mV, except for the series of short voltage steps (a-g). In this example, the K+ concentration was first increased in steps from 0 to 5.0 mM. K+ activated a large outward current. Then 10 μM ouabain was added in the presence of 5.0 mM K+. The outward current decreased by about 100 nA, which corresponds to the ouabain-sensitive endogenous Xenopus Na,K pump. The small part of the Bufo pump that might also have been inhibited was allowed to recover during the 4-8 min after removal of ouabain. The K+ concentration was then increased as indicated and series of voltage steps were obtained (a-e). Thereafter, 2 mM ouabain was added and I-V curves were recorded in the presence (5 mM, f) and absence of K+ (g).

The current induced by K+ at each potential was obtained by subtracting the current measured in the K+-free solution from the current measured in the presence of K+. The parameters of the Hill equation,

I = I_max C_K^nH / (C_K^nH + K_1/2^nH),

were fitted to the data of the current (I) induced by various concentrations of K+ (C_K) and yielded least-squares estimates of the maximal current (I_max), the half-activation constant (K_1/2), and the Hill coefficient (n_H). The voltage dependence of K_1/2 was obtained by fitting the parameters of the exponential function

K_1/2(V_m) = K_1/2(0) exp(k F V_m / RT)

to the K_1/2 vs. V_m data to obtain least-squares estimates of the K_1/2 at 0 mV [K_1/2(0)] and k, an exponential steepness factor. A nonlinear fit program based on the simplex method (Nelder and Mead, 1965) was used for fitting equations to the data.
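For illustration, such a Hill fit with a simplex search can be written in a few lines of Python; the concentration-current values below are made-up example data, not measurements from this study.

```python
import numpy as np
from scipy.optimize import minimize

def hill(c, imax, k12, nh):
    # Hill equation: I = Imax * c^nH / (c^nH + K1/2^nH)
    return imax * c**nh / (c**nh + k12**nh)

c = np.array([0.3, 1.0, 3.0, 10.0])            # K+ concentrations (mM), with Na+
i_obs = np.array([55.0, 210.0, 370.0, 440.0])  # K+-induced currents (nA), invented

def sse(p):
    return np.sum((i_obs - hill(c, *p)) ** 2)

# simplex (Nelder-Mead) minimisation, as in the original analysis
fit = minimize(sse, x0=[450.0, 1.5, 1.6], method="Nelder-Mead")
imax, k12, nh = fit.x
print(f"Imax = {imax:.0f} nA,  K1/2 = {k12:.2f} mM,  nH = {nh:.2f}")
```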
All experiments were performed at room temperature (24-26°C). Na,K-pump current measurements were restricted to oocytes showing a membrane resistance > 0.25 MΩ. As no differences were observed between water-injected (50 nl) and noninjected oocytes, oocytes of these two groups were pooled to form the "control" group.
Solutions and Drugs
The compositions of the solutions used for the electrophysiological measurements were as follows. Na+-containing solution (mM): Na+ 87, Ca++ 0.41, Mg++ 0.82, Ba++ 5, TEA+ 10, gluconate 90, Cl- 22.5, HCO3- 2.4, MOPS 10, pH 7.4. Na+-free solution (mM): Ca++ 0.41, Mg++ 0.82, Ba++ 5, TEA+ 10, Cl- 22.5, sucrose 140. Ouabain (Sigma Chemical Co., St. Louis, MO) was added from a 0.2 M solution in dimethylsulfoxide for the low-concentration (10 μM) solutions, and was directly dissolved in the final solution for the 2 mM concentration. Results are expressed as mean ± SE (n = number of observations). The statistical significance of differences between means was estimated using Student's t test for unpaired data. P < 0.05 was chosen as the level of statistical significance.
Expression of Exogenous Na,K Pumps
Coinjection of α and β subunit cRNAs resulted in the expression of a large exogenous Na,K-pump activity in all groups. Fig. 2 shows that the current due to exogenous Na,K pumps (i.e., the current measured after inhibition of the endogenous Na,K pump) was four to six times higher in the α1β1NaK and α1β3NaK groups, and about twice as high in the α1βHK group, than the current due to the endogenous Na,K pump, i.e., the current measured in noninjected oocytes before exposure to ouabain. In noninjected oocytes no ouabain-sensitive current could be detected after exposure to 10 μM ouabain for 1 min. All the mean values of the groups of cRNA-injected oocytes are significantly different from 0 (P < 0.005 in each case). In earlier experiments in which ouabain binding was measured in parallel with the activity, we have shown that the Na,K-pump current measured at -50 mV in the presence of 10 mM K+ was highly correlated with the number of ouabain binding sites and thus is a reliable estimate of the number of Na,K pumps expressed at the surface of the oocyte (Jaunin, Horisberger, Richter, Good, Rossier, and Geering, 1992). A similar relationship between the number of ouabain binding sites and Na,K-pump current (or ouabain-sensitive rubidium uptake) was observed when α1NaK alone, α1β1NaK and α1βHK were compared (Horisberger et al., 1991b). Therefore, the lower activity in the α1βHK group can most probably be attributed to the lower level of expression, possibly due to a less efficient assembly between the amphibian Na,K-ATPase α1 subunit and the mammalian H,K-ATPase β subunit (Horisberger et al., 1991b).
Ouabain-sensitive and Potassium-induced Currents
Figure 2: Ouabain-sensitive currents at -50 mV. The effect of 2 mM ouabain was recorded in a Na+-containing solution (with Na) in the presence of 10 mM K+, and in a Na+-free solution (without Na) in the presence of 5 mM K+. The number of noninjected (NI) or water-injected (WI) oocytes, or of oocytes injected with α1β1NaK (β1), α1β3NaK (β3), or α1βHK (βHK) cRNA, is indicated in the columns. The NI/WI oocytes had not been previously exposed to ouabain, and the values indicated by the white columns represent the endogenous Xenopus oocyte Na,K-pump current. In the three other groups this endogenous component had been removed by previous exposure to 10 μM ouabain, and the values represent essentially the activity of the exogenous Bufo Na,K pump.

The upper panel of Fig. 3 shows the current sensitive to 2 mM ouabain in the presence of 10 mM K+ (corresponding to the subtraction of the I-V curves e minus f of Fig. 1). In all groups the voltage dependence of the Na,K-pump current was marked at negative potentials (about 50% decrease of the ouabain-sensitive current between -50 and -130 mV), similarly to what has been described in earlier reports in oocytes (Rakowski and Paxson, 1988; Schweigert, Lafaire, and Schwarz, 1988; Wu and Civan, 1991) or other cell types (Horisberger and Giebisch, 1989; Rakowski, Gadsby, and De Weer, 1989; Gadsby and Nakao, 1989; Stimers, Shigeto, and Lieberman, 1990). The voltage dependence tended to be smaller at depolarized membrane potentials, especially for the endogenous pump and for the α1βHK group. This can be explained by the lower apparent affinity of K+ for these α/β complexes in the depolarized potential range (see below). K+ concentrations higher than 10 mM could not be used because significant ouabain-resistant K+-induced currents appeared at K+ concentrations > 10 mM. The middle panel shows that the currents induced by 10 mM K+ were of similar magnitude and had a similar voltage dependence as the ouabain-sensitive current. The lower panel shows the current induced by 10 mM K+ in the presence of 2 mM ouabain (corresponding to the subtraction of the I-V curves f minus g of Fig. 1). The size of this current amounted to a few percent of the ouabain-sensitive current in the presence of 10 mM K+. For the cRNA-injected oocytes a small residual Na,K-pump current was expected because of the high K_I of ouabain for the Bufo Na,K pump (Jaisser et al., 1992a). Assuming a K_I of 50 μM and simple one-site inhibition kinetics, 2 mM ouabain should inhibit 97.6% of the total current. From these results we conclude that the K+-induced currents were essentially due to activation of the Na,K pump. Fig. 4 shows the results of similar measurements performed in the absence of external Na+. The ouabain-sensitive current (upper panel) was slightly voltage sensitive in the negative potential range (about 20% decrease of the ouabain-sensitive current between -50 and -130 mV). Although smaller than that observed in the presence of external Na+, this voltage dependence was statistically significant and similar in all groups. Gadsby, Rakowski, and De Weer (1993) have shown that the external Na+ binding step is the main voltage-dependent step of the sodium-translocating part of the pump cycle, and Rakowski, Vasilets, LaTona, and Schwarz (1991) have shown that the apparent affinity of K+ is also voltage dependent. The presence of a voltage dependence in the absence of external Na+ and at saturating K+ concentration suggests that there is another voltage-dependent step in the cycle.
Except for the α1β1NaK group, for which the K+-induced current was voltage independent, there was a small increase of the K+-induced current in the high negative potential range. As the effect of K+ in the presence of ouabain was negligible (lower panel), the discrepancy between the ouabain-sensitive current (upper panel) and the current activated by 5 mM K+ (middle panel) at negative membrane potentials corresponds to the presence of a small ouabain-sensitive inward current in the absence of external Na+ and K+. A similar ouabain-sensitive current has been observed by Rakowski et al. (1991) and has been investigated in more detail by Efthymiadis, Rettinger, and Schwarz (1993). The nature of this current is unknown.
Potassium Activation of the Na,K-Pump Current in the Absence and the Presence of External Sodium
The voltage dependence of the current activated by different concentrations of K+ (corresponding to the subtraction of the I-V curves b-d minus curve a of Fig. 1) in the presence and in the absence of external Na+ is shown in Fig. 5. Current values were normalized to the ouabain-sensitive current measured in the presence of the highest K+ concentration at -50 mV. Potassium activation kinetics were obtained for each membrane potential value by fitting the parameters of the Hill equation (I_max, K_1/2, Hill coefficient) to the K+-induced current vs. K+ concentration data for each oocyte (see examples in Fig. 6). The Hill coefficients (n_H) were in the range of 1.5-2.0 for the experiments performed in the presence of Na+, and in the range of 0.9-1.3 for the experiments in the absence of Na+. There was no obvious voltage dependence of the n_H obtained by parameter fitting in either case. Using fixed values of 1.6 and 1.0 for n_H in experiments with Na+-containing and Na+-free solutions, respectively, and fitting the two remaining parameters (I_max and K_1/2) yielded essentially similar results concerning the K_1/2 estimates.

Figure 3: Ouabain-sensitive and potassium-activated steady-state current-voltage relationships recorded in the presence of external Na+ (87 mM). In all three panels the current values (I) are normalized to the ouabain-sensitive current at -50 mV membrane potential in the presence of 10 mM K+ and plotted against the membrane potential (Vm). (Top) Current sensitive to 2 mM ouabain in the presence of 10 mM K+ (corresponding to the subtraction of current recordings f minus e of Fig. 1). (Middle) Current activated by 10 mM K+ (e minus a of Fig. 1). (Bottom) Current activated by 10 mM K+ in the presence of 2 mM ouabain (g minus f of Fig. 1); note the expanded scale. The values are the mean ± SE of eight, seven, six, and six measurements in the noninjected or water-injected (NI/WI), α1β1NaK (β1), α1β3NaK (β3), and α1βHK (βHK) cRNA-injected oocytes, respectively.
The voltage dependence of the K_1/2 of the activation of the Na,K-pump current by external K+ is shown in Figs. 7 and 8. In the absence of external Na+ (Fig. 7), the K_1/2 increased monotonically with the membrane potential in all groups. In the noninjected oocyte group, the voltage dependence of the K_1/2 was similar to that observed under similar conditions by Rakowski et al. (1991), with an exponential steepness factor of 0.36 ± 0.05 (n = 6). In the α1β1NaK and α1β3NaK groups the steepness factor of the voltage dependence of K_1/2 was 0.21 ± 0.02 (n = 8) and 0.22 ± 0.02 (n = 8), respectively (no significant difference between these two groups). Both these values were significantly smaller than in the noninjected group (P < 0.005). Although the K_1/2 values were similar at high negative membrane potentials, the K_1/2 at +10 mV was much higher in the noninjected group (632 ± 62 μM) than in the α1β1NaK (241 ± 19 μM) and α1β3NaK groups (234 ± 15 μM) (P < 0.001). The K_1/2 values were similar in the β1NaK and in the β3NaK groups.
No significant difference could be established at any potential value. The α1βHK group had a much higher K_1/2 over the whole potential range, with a voltage dependence k of 0.27 ± 0.06 (n = 7), a value not significantly different from those of the α1β1NaK and α1β3NaK groups.
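Taking the exponential form above at face value and anchoring it at the +10 mV values quoted in the text, the divergence of the K_1/2 curves at depolarized potentials can be illustrated as follows; the anchoring choice and the value RT/F ≈ 25.7 mV are our own assumptions in this sketch.

```python
import numpy as np

RT_OVER_F = 25.7  # mV, near room temperature

def k_half(vm, k10_uM, k):
    # K1/2(Vm) = K1/2(0) * exp(k*F*Vm/RT), anchored at the quoted +10 mV values
    k0 = k10_uM * np.exp(-k * 10.0 / RT_OVER_F)
    return k0 * np.exp(k * vm / RT_OVER_F)

for label, k10, k in [("noninjected", 632.0, 0.36), ("alpha1beta1NaK", 241.0, 0.21)]:
    vals = ", ".join(f"{v} mV: {k_half(v, k10, k):.0f} uM" for v in (-130, -50, 0, 10))
    print(f"{label}: {vals}")
```

With these numbers the two groups differ almost threefold at +10 mV but come within a few tens of micromolar of each other at -130 mV, reproducing the pattern described above.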
In the presence of Na+ (Fig. 8), the activation by external K+ presented a different type of voltage dependence. In all groups, both hyperpolarization and depolarization tended to decrease the apparent affinity of K+ for activation of the pump current, with a maximal apparent affinity around -50 to -30 mV. This indicates that the voltage dependence of the apparent affinity of K+ results not only from a voltage-dependent step in the binding of K+, but also from another step with a reverse voltage dependence, presumably the binding/release of external Na+ (Gadsby et al., 1993). Again the α1βHK group had a higher K_1/2 than the other groups. The α1β3NaK group had a significantly lower apparent affinity for K+ than the α1β1NaK group (P < 0.02 or smaller for each potential value), and this difference was larger in the high negative potential range.
DISCUSSION
In this paper we have extended the previous finding by us (Jaisser et al., 1992a, b) and others (Schmalzing et al., 1992; Eakle et al., 1992) that the structure of the β subunit has an influence on the function of the Na,K pump present at the plasma membrane, and more specifically on the apparent affinity of potassium for its external binding site. The Na,K pump is a transport system that undergoes a complex cycle with at least two conformations, one of which (the E2 conformation) has a high affinity for external K+ ions. The apparent affinity of K+, which we measure as the K_1/2, will generally be different from the intrinsic affinity (K_m) of the E2 conformation (Läuger, 1991). In principle, modifications of the K_1/2 could result from a change of the K_m as well as from other alterations in the kinetics of the pump cycle. To investigate the role of the β subunit in the Na,K-pump function, we have examined the potassium activation kinetics of Na,K-pump heterodimers including different β subunits under two conditions, in the presence and in the absence of external Na+, and over a wide range of membrane potentials. The most striking difference between pumps including β1NaK and β3NaK isoforms was the apparent affinity for K+ measured in the presence of external Na+. No difference could be detected in the absence of external Na+ over the whole potential range. In contrast, a much lower apparent affinity for K+ was observed with the α1βHK complex both in the presence and in the absence of external Na+.
To analyze the difference of the apparent affinity for K+ and its relation to the intrinsic affinity, we have used a simple three-state kinetic model of the Na,K-pump cycle that is described in the Appendix. In this model the apparent affinity K_1/2 is related to the intrinsic K_m by the following relation (Eq. A8 in the Appendix):

K_1/2 = K_m (f1 + b1)/(f1 + f2). (3)

The large difference of K_1/2 observed between the α1β1NaK and the α1βHK groups was of roughly similar magnitude in the presence and in the absence of external Na+, and at all tested membrane potentials. The simplest explanation for this uniform difference under various conditions is an alteration of the intrinsic K_m, which appears as a factor on the right side of Eq. 3.
The absence of a detectable difference between the α1β1NaK and the α1β3NaK groups in the absence of external Na+ makes it very unlikely that the intrinsic K_m was altered. If the change of the apparent affinity observed in the presence of Na+ has to be attributed to the modification of a single rate constant, an increase of the rate constant b1 (i.e., the backward Na+-translocating step) in the α1β3NaK group could explain both the higher K_1/2 in the presence of external Na+ and the absence of difference in Na+-free external solutions. This hypothesis is supported by the observation that the difference of K_1/2 increases at negative membrane potentials.
Indeed, as the rate of the backward Na+-translocating step (b1) increases at negative membrane potentials, owing to the voltage dependence of external Na+ binding (Gadsby et al., 1993), the influence of b1 on the f1 + b1 term increases. It is, however, obvious that more complex modifications, concerning several rate constants, could also produce these results.

Figure 4: Ouabain-sensitive and potassium-activated steady-state current-voltage relationships recorded in the absence of external Na+. (Top) Current sensitive to 2 mM ouabain in the presence of 5 mM K+ (f minus e of Fig. 1). Note that the vertical scale of this panel was expanded to allow for better visualization of the voltage dependence of the ouabain-sensitive current. (Middle) Current activated by 5 mM K+ (e minus a of Fig. 1). (Bottom) Current activated by 5 mM K+ in the presence of 2 mM ouabain (g minus f of Fig. 1); note the expanded vertical scale. The values are the mean ± SE of 8, 10, 8, and 7 measurements in the noninjected or water-injected (NI/WI), α1β1NaK (β1), α1β3NaK (β3), and α1βHK (βHK) cRNA-injected oocytes, respectively.

The β subunit might be involved in the function of the Na,K pump because the transmembrane segment of the β peptide chain participates directly in the structure of the cation binding or occlusion sites, as suggested by Capasso, Hoving, Tal, Goldshleger, and Karlish (1992). Alternatively, the β subunit, by its close interaction with the α subunit, might modify the structure of the α protein and/or the equilibrium between different conformational states of this subunit. Comparison of the primary sequences of a large number of β subunit isoforms (Horisberger et al., 1991c) indicates that the overall structure is well conserved: a short intracellular amino-terminal sequence is followed by one transmembrane segment and a large extracytoplasmic carboxy-terminal domain. A striking difference is the presence in the β1 isoform of a stretch of 15-20 amino acids (corresponding to exon 5) that is not present in the other β isoforms. The differences between βHK and the other β isoforms are more substantial (about 30% identity) and widespread throughout the whole sequence. Studies using chimeric β subunits formed from different isoforms may allow us to determine more precisely which parts of the β protein are involved in the functional differences that we have observed, and help to delineate functional domains of the β subunit.
Although the presence of α and β3NaK subunits has been shown in Xenopus oocytes (Jaunin et al., 1992), the exact isoform composition of the protein forming active Na,K pumps at the plasma membrane of this cell type has not yet been determined. Our results obtained with noninjected oocytes show that the function of the endogenous Na,K pump is clearly different from either that of the α1β1NaK or the α1β3NaK complexes. In particular, the voltage dependence of the apparent affinity for K+ is nearly twice as steep for the endogenous pump as for the α1β1NaK or α1β3NaK pumps. The reasons for this difference are not clear. Species differences do not seem to be the explanation, since Xenopus α1β1NaK or α1β3NaK complexes expressed in oocytes by cRNA injection also show a higher apparent affinity for K+ when compared to endogenous Na,K pumps studied in noninjected oocytes (unpublished results). The existence of an oocyte-specific α subunit isoform might explain the difference in K+ affinity. Deletions of the amino-terminal part of the α subunit have been shown to alter the apparent affinity for K+ (Bürgener-Kairuz, Horisberger, Geering, and Rossier, 1991), and more specifically the voltage dependence of the apparent K+ affinity (Vasilets, Omay, Ohta, Noguchi, Kawamura, and Schwarz, 1991). The importance of the amino terminus is further suggested by the presence of a distinct transcript of the α1 isoform in the oocyte and during early development (Bürgener-Kairuz, personal communication).
In conclusion, our results point to an important role of the β subunit in the transport function of the Na,K pump, confirming evidence obtained by other techniques (Eakle et al., 1992; Schmalzing et al., 1992; Lutsenko and Kaplan, 1992). Analysis of the difference in potassium activation kinetics between Na,K-pump dimers including different β subunits suggests that the presence of the gastric H,K-ATPase β subunit modifies the properties of the external binding site of K⁺ to the Na,K pump. When compared to the β1 subunit, the presence of the Na,K-ATPase β3 subunit induces a decrease of the apparent affinity that is most likely due to alterations of the Na⁺ translocation kinetics rather than to a direct alteration of the K⁺ binding site.

[Figure legend residue: (87 mM). K1/2 was determined for each measurement by fitting the parameters of the Hill equation to the current versus concentration data, as described in the Methods section. The mean Hill coefficients at 50 mV were 1.91 ± 0.06, 1.65 ± 0.03, 1.70 ± 0.03, and 1.83 ± 0.03 in the NI/WI, α1β1NaK, α1β3NaK, and α1βHK groups, respectively, and did not show any obvious voltage dependence. Values are mean ± SE of eight, seven, six, and six measurements in noninjected or water-injected (NI/WI), α1β1NaK, α1β3NaK, and α1βHK cRNA-injected oocytes, respectively.]
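The Hill-equation fit described in the legend above can be reproduced with standard least-squares tools. The following is a minimal sketch, not the authors' analysis code; the concentration-current values and starting parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, i_max, k_half, n):
    """Hill equation: pump current as a function of external K+ concentration."""
    return i_max * conc**n / (k_half**n + conc**n)

# Hypothetical current-vs-[K+] data (mM, nA); real values would come from
# two-electrode voltage-clamp records at a fixed membrane potential.
k_mM = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 87.0])
i_nA = np.array([8.0, 19.0, 33.0, 52.0, 71.0, 80.0, 88.0])

# Fit I_max, K1/2, and the Hill coefficient n to the dose-response data.
popt, pcov = curve_fit(hill, k_mM, i_nA, p0=[90.0, 1.0, 1.5])
i_max, k_half, n_hill = popt
print(f"I_max = {i_max:.1f} nA, K1/2 = {k_half:.2f} mM, n = {n_hill:.2f}")
```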
APPENDIX
To analyze the relation between the intrinsic affinity (Km) and the apparent activation constant (K1/2), we used a simple steady-state kinetic model of the Na,K-pump cycle. As illustrated in Fig. 9, this model includes three states: E1, E2, and E2K. An equilibrium binding kinetic with an intrinsic affinity Km links the E2 and E2K states. The pseudomonomolecular rate constants f1 and b1 represent the step during which, according to the Post-Albers model, Na⁺ ions are released (forward: f1) or bound (backward: b1). The f2 rate constant summarizes all the other steps of the cycle between the E2K and the E1 state. The backward rate constant of the E2K to E1 step is assumed to be slow compared to the other rate constants and is set to 0. The b1 rate constant is equal to 0 in the absence of external Na⁺ and has a finite value in the presence of Na⁺. From published data (Rakowski et al., 1991; Gadsby et al., 1993), both b1 and Km are expected to be voltage dependent. The observed voltage dependence of the ouabain-sensitive current in the absence of external Na⁺ (Fig. 3, top) suggests that either f1 or f2 is also voltage dependent. It should be noticed that the E1 and E2 states do not have exactly the same meaning as they do in the Post-Albers model.
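Carrying the steady-state algebra of this three-state scheme through explicitly reproduces the f1 + b1 dependence invoked earlier. The derivation below is a sketch added here for clarity, treating the E2/E2K binding as the rapid equilibrium the model assumes; it is not reproduced from the original appendix.

```latex
% Let \theta = [K]/(K_m + [K]) be the fraction of the (E2 + E2K) pool in
% E2K, and impose steady state: f_1 [E1] - b_1 [E2] = f_2 [E2K], with
% [E1] + [E2] + [E2K] = 1. Solving for the turnover rate J gives:
\begin{align*}
J &= \frac{f_1 f_2\,[K]}{(f_1 + b_1)\,K_m + (f_1 + f_2)\,[K]},\\[4pt]
J_{\max} &= \frac{f_1 f_2}{f_1 + f_2}, \qquad
K_{1/2} = K_m\,\frac{f_1 + b_1}{f_1 + f_2}.
\end{align*}
% An increase of b_1 at negative potentials therefore raises K_{1/2}
% (lowers the apparent K+ affinity) without any change in the intrinsic
% affinity K_m itself.
```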
The Distribution of Heavy Metals in the Sediment of Low Tidal Flat, Eastern Chongming Island, China
The distribution of heavy metals in tidal flats is of great significance for the estuarine environment. This study aims to identify the mechanisms controlling heavy metal distribution in the low tidal flat of the Yangtze Estuary, China. Four sediment cores were collected from the low tidal flat of eastern Chongming Island in four seasons: spring, summer, autumn, and winter. The concentrations of the elements Al, Cu, Cr, Fe, Mn, Pb, Rb, Zn, and Zr in the sediment were analyzed. The vertical distributions of the elements with depth were examined, and the main factors controlling the distribution of heavy metals in the sediments of the low tidal flat were discussed. The results showed that sediment source and tidal hydrodynamics are the two main factors controlling the distribution of heavy metals in the sediment of the low tidal flat. Zr in the sediments can be used as a geochemical marker to reflect erosion and deposition changes in the tidal flat.
Introduction
Tidal flats, as wetland habitats, are an important ecosystem and of great ecological significance for humans. Estuarine flats provide vital feeding grounds for migrant and native birds as well as nursery zones for fish [1]. However, due to increasing urban and industrial development in surrounding areas, more and more contaminants such as heavy metals are discharged into these ecologically sensitive areas, and the tidal flats tend to become a sink or source for heavy metals [1-3]. Heavy metals are adsorbed onto suspended particles, deposited on the bottom, and accumulate in the sediments of tidal flats [3]. Sediments are regarded as an effective archive that can record heavy metal contamination [4].
Sediment pollution by heavy metals has become a critical problem in the marine environment due to their toxicity, persistence, bioaccumulation, and non-degradability [4-7]. Heavy metals in sediments can significantly affect the health of marine ecosystems; when the conditions of the sedimentary environment change, the sediments can act as a source of heavy metals, releasing them into the water and degrading water quality [4]. Therefore, the study of heavy metal distribution in the sediments of estuarine tidal flats is of great importance for the sustainable development of this region.
The Yangtze River is considered the largest and most important river in Asia [8]. It ranks ninth globally in drainage area and fourth in sediment flux (500 Mt/year before its decline in the 1970s) [8]. Of the roughly 470 million tons of silt carried by the Yangtze River annually, about half is deposited on Shanghai tidal flats [9]. The eastern Chongming tidal flat is a well-developed tidal flat and the largest intertidal zone of the Yangtze River estuary [9]. The study area is located on the eastern coast of Chongming Island (Figure 1). The sediment in this tidal flat shows obvious zonation and is divided into high, middle, and low tidal flats from land to sea. Our study focuses on the low tidal flat and the mechanisms of heavy metal distribution in the sediment of this zone.
Sampling and Pretreatment
Four sediment cores were collected with PVC pipes (ca. 40 cm in length, 100 mm in diameter) at the low tidal flat during the ebbing period (Figure 1). Sampling was performed over one year in four different seasons, in January, April, July, and September. The sampling and pretreatment followed the same procedure as that described in [9].
Major and Trace Element Analysis
A 4 g sediment sample was weighed and placed onto a low-pressure polyethylene base. The sample pretreatment and analysis methods are the same as those described in [9].
Standardization of Elements
Grain size is an important parameter affecting the distribution of heavy metals. Natural transport of heavy metals mainly depends on the presence and transport of fine-grained sediment, which is an efficient sink for heavy metals, with the capacity to concentrate and retain them [7]. Therefore, before analyzing the temporal and spatial distribution of heavy metals in the sediments, the concentrations of the elements are normalized to aluminum, one of the major chemical components of fine-grained sediments, to eliminate the effect of grain size on heavy metal content [9]. The standardization method is as follows:

C_e* = C_e / C_Al

where C_e*, C_e, and C_Al indicate the content of the element after normalization, the content of the element before normalization, and the content of Al before normalization, respectively.
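As an illustration of this normalization step, the short sketch below applies it to a table of element concentrations. It is not the authors' code; the column names and sample values are hypothetical.

```python
import pandas as pd

# Hypothetical core data: element concentrations (mg/kg) at two depths.
core = pd.DataFrame({
    "depth_cm": [2, 16],
    "Al": [65000.0, 72000.0],
    "Cu": [28.0, 35.0],
    "Zn": [95.0, 110.0],
})

# Al-normalize each heavy metal to remove the grain-size effect:
# normalized content = element content / Al content.
for element in ["Cu", "Zn"]:
    core[f"{element}_norm"] = core[element] / core["Al"]

print(core)
```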
Results and Discussions
The vertical distributions of heavy metals in the four seasons were analyzed, and the results are shown in Figures 2-5. The distribution mechanisms of heavy metals in the different seasons are discussed in the following sections.
The Vertical Distribution of Heavy Metals in the Sediment in Spring
In spring, Cr and Zr showed very consistent vertical distribution characteristics, which indicates that Cr mainly migrates with relatively coarse particles. At depths greater than 12 cm, Fe and Mn showed distribution characteristics similar to those of Cr and Zr, with a relatively large peak at a depth of 16 cm and a relatively small peak at a depth of 22 cm. However, Cu, Pb, Rb, and Zn exhibited a vertical distribution opposite to that of Zr and Cr. Since Zr is mainly produced and enriched in heavy minerals, it is inferred that the vertical distribution of Fe and Mn in the sediments of the low tidal flat in spring is primarily controlled by the source of the sediments: Fe and Mn are mainly enriched in the heavy minerals of coarse particles. By contrast, the distributions of Cu, Pb, Rb, and Zn are more affected by grain size; these metals are mainly attached to fine sediment particles and migrate with them. Due to the strong hydrodynamic effect on the low tidal flat, the sediment is under constant stirring. The enrichment of heavy metals in the redox boundary layer during early diagenesis is a relatively slow process, so if the sediment is deposited at a relatively slow rate, the heavy metals have sufficient time for post-depositional migration and form a peak at the redox boundary layer. On the contrary, if the sediment is deposited rapidly, it is difficult to form such a peak. The strong hydrodynamic conditions at the low tidal flat are therefore not conducive to the formation of a redox boundary layer. Consequently, the vertical distribution of heavy metals in the sediment of the low tidal flat is mainly controlled by the source of the sediment, and post-depositional migration is weak.
The Vertical Distribution of Heavy Metals in the Sediment in Summer
In summer, Fe, Mn, Cr, and Zr showed similar vertical distributions in the sediment of the low tidal flat at depths greater than 18 cm. Zr is very stable in the sediment, and its post-depositional migration is extremely weak. Therefore, the similar distributions of Fe, Mn, Cr, and Zr indicate that the distribution of Fe and Mn is mainly controlled by the source of the sediment: Fe and Mn are not present as oxides but occur mainly in the Zr-rich heavy minerals. The peak value moved from 16 cm to about 26 cm, mainly because the tidal flat was dominated by siltation from spring to summer, with a sedimentation depth of about 10 cm. At depths shallower than 18 cm, Cu, Pb, Zn, Fe, and Mn exhibit a distribution opposite to that of Cr, which is especially obvious between Cu and Cr. Zr indicates a relatively coarse sediment source, and Cr migrates mainly with relatively coarse particles.
The Vertical Distribution of Heavy Metals in the Sediment in Autumn
In autumn, Fe, Mn, Zr, and Cr showed similar vertical distribution characteristics at depths greater than 14 cm, and all have a peak value at a depth of 16 cm. This indicates that the tidal flat experienced mainly scouring from summer to autumn, with about 10 cm of sediment washed away from the surface. From 14 cm to the surface layer, Zr and Cr showed little change, while Cu, Pb, Zn, Fe, and Mn showed an increasing trend from 16 cm to the surface. This may be the combined effect of sediment source and organic matter.
The Vertical Distribution of Heavy Metals in the Sediment in Winter
From autumn to winter, the tidal flat continued to be dominated by scouring, with a scouring depth of about 10 cm. The peak value moved from a depth of 16 cm in autumn to about 6 cm. From 14 cm to the surface layer, the heavy metals generally showed a decreasing trend with depth, being enriched near the surface. This may be related to the sediment source and the hydrodynamic effect.
In summary, the distribution of heavy metals in the sediments of the low tidal flat is mainly controlled by the sediment source and the tidal hydrodynamic forces. The low tidal flat is under constant, strong hydrodynamic forcing and experiences intense erosion and siltation throughout the seasons, with an average erosion and deposition depth of about 10 cm. In winter and spring the siltation process dominates, while in summer and autumn the scouring process dominates (Figure 6).
Conclusions
The distribution of heavy metals in the sediments of the low tidal flat is mainly controlled by the sediment source and the tidal hydrodynamic forces. The low tidal flat is under constant, strong hydrodynamic forcing and experiences intense erosion and siltation throughout the seasons. The average erosion and deposition depth is about 10 cm, with siltation dominating in winter and spring and scouring dominating in summer and autumn. Zr in the sediments can be used as a geochemical marker to reflect erosion and deposition changes in the tidal flat.
Inclusive planning: African policy inventory and South African mobility case study on the exclusion of persons with disabilities
Background: The Sustainable Development Goals (SDGs) and universal design (UD) principles call for inclusive planning. Within the transportation field, this includes the development or improvement of facilities that accommodate people with disabilities. Between 10% and 20% of the African population is affected by disabilities. A lack of understanding of the needs of people with disabilities leads to isolation; within the transportation field, isolation manifests itself as a reduction in trip-making. Methods: This paper investigates the availability of transport policies and guidelines in 29 different African countries, focusing on the inclusion of persons with disabilities. A desktop study was conducted, creating heat maps for the 29 African countries, followed by the analysis of secondary data in the case study area, South Africa. The analysis demonstrates that the lack of adequate policies, guidelines, and appropriate implementation leads to a lack of accessibility and opportunities, and to social isolation, measured through trip frequencies. Results: The data analysed revealed that many African countries omit, or only superficially include, people with disabilities in their transport policy frameworks. Ghana has the most inclusive Persons with Disability Act, while South Africa is most inclusive regarding the planning and design of transport facilities and services. In South Africa, 4.5% of the population did not travel at all in the 7 days before the interview, because disability or age prevented them from doing so or because appropriate travel services were lacking. When comparing trip rates per week, people with disabilities travel significantly less, making between 27.2% and 65.8% fewer trips than their able-bodied counterparts. Conclusions: The study reveals that people with disabilities live less integrated, more isolated lives due to the lack of acknowledgement in transport policy frameworks and of accommodation in infrastructure and services. The results underpin the need for disability-inclusive planning in the African context and provide recommendations for actions that mitigate the isolation challenges faced by people with disabilities. Municipalities play a crucial role in improving the quality of life for people with disabilities. Supplementary Information: The online version contains supplementary material available at 10.1186/s12961-021-00775-1.
groups and, in particular, persons with disabilities (PWDs), based on household survey data.
Transport research on people with disabilities
Research challenges of transport planners demonstrate a shift in strategic priorities over time. In the 1970s, the focus was enhancing road capacity where drivers were predominantly male, middle-class workers, using private motor vehicles [29]. Four decades later, the focus has shifted to recognizing the needs of vulnerable transport user groups, highlighting the need to focus research attention on identifying gender issues in transport planning [2]. According to the authors, a further key area of research for the future is transport planning for PWDs. Allen and Vanderschuren [2] identified the Transport Research International Documentation (TRID) database as the most inclusive data source. This is because the TRID database is an integrated source that combines the records from the Transportation Research Board's Transportation Research Information Services (TRIS) database and the Organisation for Economic Co-operation and Development's (OECD's) Joint Transport Research Centre's International Transport Research Documentation (ITRD) database. Hence, the database provides access to more than 1.25 million records of transportation research worldwide, including academic papers.
An analysis of publications in the TRID database over the past two decades across all modes of transport revealed a limited number of research publications focusing on PWDs. Based on the types of disabilities that affect the ability to move independently (i.e. disability, in general, hearing, vision and intellect/concentration impairment, as well as the use of mobility aids and epilepsy), a keyword search was conducted. Table 1 provides a summary of the number of publications found.
It may be seen from the table that over the two decades analysed, research reports on transport-related challenges for PWDs were limited.
The United Nations' (UN's) general motto is to create peace, dignity, and equality on a healthy planet. The SDGs, established in 2015, unpack this motto further. SDG11, sustainable cities and communities, has 10 targets and 15 indicators. Target 11.2 states, "By 2030, access is provided to safe, affordable, accessible and sustainable transport systems for all, improving road safety, notably by expanding public transport, with special attention to the needs of those in vulnerable situations, women, children, PWDs and older persons" [45]. Despite the aspirations of the UN and SDG11, it can be concluded from the analysis of the number of transport-related research publications that studies on PWDs are underrepresented in mainstream research documents.
Review of key publications Disabilities in policies and legislation
Meriläinen and Helaakoski [26] found that inclusive transport is not (fully) considered in transport planning, design, construction, and implementation, especially in developing countries. This contrasts with earlier findings by Metts [27], who concluded that "low- and middle-income countries now also have disability policies that reflect reasonably advanced concepts of disability, based on the UN 1982 World Program of Action Concerning Disabled Persons (WPA) and 1994 Standard Rules on the Equalization of Opportunities for Persons with Disabilities (Standard Rules)".
The UN Convention on the Rights of Persons with Disabilities (CRPD), held in 2006 [43], recognized the importance for PWDs of their individual autonomy and independence, including the freedom to make their own choices, as well as the need for PWDs to have the opportunity to be actively involved in decision-making processes about policies and programmes.
Bardinard et al. [4] report that "accessibility is not yet a systematic concern in the planning or implementation of urban transport infrastructure" in East Asia and the Pacific, even though universal access principles originated in Japan. One of the implementation obstacles is the misconception that the application of universal design (UD) standards would be more costly [4,34].
Disabilities affecting independent mobility
"The transport justice [25,37,38] framework goes some way to link space and mobility in discussions about accessibility. However, it tends to overlook how people are differently embodied and how the interactions between the physical environment, including transport infrastructure, affects these people" [46]. UD, on the other hand, is the design and composition of an environment so that it can be accessed, understood, and used to the greatest extent possible, by all people, regardless of their age, size, ability, or disability (http:// unive rsald esign. ie/ What-is-Unive rsal-Design/). According to the principles of UD, obstructions such as stairs, heavy doors, steep ramps, and poor signage/lighting should be minimized in the transportation system, to develop an environment that is truly open and functional to everyone [7]. However, barriers remain in governance, regulatory, planning, and implementation of universally accessible transport infrastructure and services. Social exclusion for people with disabilities still exists. Part of this exclusion is due to a lack of funds for travel, as established by Khayatzadeh-Mahani et al. [20]. Other difficulties, due to long travel distances [32,33], are exacerbated by a lack of or insufficient transport facilities and services. Mobility and access requirements of PWDs should be considered by planning and designing barrierfree transport systems. This implies an understanding and identification of the circumstances that create barriers for people with disabilities [26]. Equal access is often not provided in (public) transport planning as persons using mobility aids (crutches, a walking stick or a wheelchair) are confronted with many physical barriers, such as stairs in subway stations or inaccessible buses, when using the transport system [6,14,23]. Street and sidewalk conditions have a significant impact on persons with more severe impairment. The lack of and poor quality of footpaths, such as uneven surfaces due to cracks, were identified as a common barrier for people with vision impairment (VI) due to an increased risk of falling [12,18,35]. Facility maintenance or the provision of amenities can improve mobility independence almost immediately for someone who was previously unable to navigate transport facilities independently because of mobility impairment (MI) [10]. Venter et al. [47] reported, based on European, Asian, and African information, that a lack of UD implementation in urban transport leads to social difficulties, psychological pressure, and structural exclusion of people with disabilities. Curb cuts (depressed curbs that act as ramps in sidewalks), smooth pavement, and barrier-free sidewalks [21] are some of the environmental characteristics that can easily prevent mobility disability and promote independence in adults at greatest risk, such as those with underlying weakness in movement-related functions and balance. Yet, relatively little work has examined the effect of the built environment on mobility disability, particularly across those with different levels of physical impairment [9].
People with intellectual disability (ID), including aging people with cognitive impairment, commonly suffer severe communication limitations. However, written information continues to be the most common form of communication, creating notable access barriers [46]. These communications, and other barriers, require people with ID to rely on pre-booked support staff services, limiting their mobility and spontaneity in their social lives [28].
There are fewer transport barriers for persons with hearing impairment (HI), according to Chang et al. [8]. However, various studies have found that HI is associated with reduced driving safety: increased crashes and poor on-road driving performance [11,16].
Estimates suggest that disabled people in England and Wales undertake one third fewer journeys than "nondisabled" members of the population [1,48]. Similar results were recently found by the authors during focus group interviews in Tshwane, South Africa.
PWDs in Africa
Over one billion people globally live with some form of disability, about 15% of the world's population, and this number is increasing. The number of people living with disabilities is expected to double to two billion by 2050 [49]. In countries with life expectancies over 70 years, individuals spend, on average, about 8 years, or 11.5%, of their life span living with disabilities (https://www.disabled-world.com/disability/statistics/). Some 80% of PWDs live in developing countries, while an estimated 60-80 million of them live in Africa. People with disabilities are estimated to account for 10% of the general African population, but the proportion may be as high as 20% in the poorer regions. School enrolment for disabled minors is estimated at no more than 5-10% (https://www.disabled-world.com/news/africa/).
There is an apparent underreporting of disability in low-income countries, which has been attributed, in part, to the stigma associated with disability and the reporting methodologies used [5,15,31,36,39]. The UN Workshop on Disability (in Kampala during 2001) found that in many African societies, there are sociocultural pressures to underreport disability. Respondents are reluctant to admit the presence of PWDs in the household, and interviewers tend not to ask about disability unless a person with a very severe kind of disability is seen during the interview.
This lower reported prevalence rate is evident in South Africa, where the National Census of 2011 estimated the prevalence of disability to be 7.5% of the population [41]. Additionally, the highest prevalence of disability in the country has been reported among those with lower income, particularly those who had no schooling (10%), compared to those who had postsecondary education (3%) [41]. Black Africans in South Africa, who generally reside in under-resourced communities, were still found to have the highest rate of disability (7.8%) in the 2011 census [41]. In the UN Disability Statistics database (https://unstats.un.org/unsd/demographic-social/sconcerns/disability/statistics/#/home), only 11 African countries report on disability levels. Of these, Senegal and South Africa do not differentiate between types of disability. For all other available country data, the information is included in Fig. 1.
Most data included were from 2012 to 2014. The exception was Tanzania, with data from 2017. Underreporting, as identified by various sources, is also apparent in the UN disability statistics [44]. The countries that do report data, on average, report that 4.9% of their population live with disabilities. Exceptions are Tanzania and Zimbabwe, both reporting 9.1% of the population living with disabilities. Although these percentages are significantly higher than data for other African countries, they are still far below the 15% indicated by the Global Burden of Disease Report [50].
Furthermore, disability reporting categories are not standardized amongst African countries. Upper (4 countries) and lower (3 countries) limb-based disabilities are only reported by a limited number of countries. Cameroon and Guinea report zero paralysis cases, while Tanzania does not report any cases of speech impairment. These statistics, realistically, are highly unlikely.
Zimbabwe reports a significantly higher number of cases of VI (4.2%) and paralysis (3.5%), as well as the highest percentage of people with HI (1.75%). Rwanda reports the highest level of paralysis cases (2.35%), while Tanzania reports the highest number of people with mental/learning disabilities (2.05%). Based on the literature, it can be concluded that PWD-oriented transport planning is highly encouraged on the African continent, given the vast number of affected individuals. Sources disagree about the actual level of inclusive planning in the developing world in general, and in Africa more specifically. This paper enhances the knowledge on disability-inclusive transport planning in Africa through an inventory of the current planning document status quo. An analysis of mobility patterns of PWDs in the South African context provides insights into the level of isolation they experience.
Methods
The information in this study consists of two distinct parts: a desktop study of available transport policy and planning documents in African countries, and an analysis of secondary household data for South Africa as a case study example. The data for each country were collated into two distinct segments. The first segment relates to policy frameworks in the form of legislative and institutional support for PWDs within the countries. Here, documents such as the country's constitution and other policy documents that address the living conditions of PWDs, with the aim of improving overall access to the various sectors of the economy, are collated and reviewed. The second segment of collated data indicates the availability of transport sector-specific provisions for particular types of disabilities within each country. A checklist comprising VI, HI, mobility aids, and other types of impairment was used to guide the data collection (Additional file 1: Policy Documents Raw Data).
The desktop study was conducted during the months of June and July 2020 by three researchers who were recruited for the purpose. Each researcher received training on the data type and collection method to be used before they commenced. The researchers rated documents on their inclusivity of PWDs compared to international best practices ([44]; https://nacto.org/). If more than one document dealt with a specific disability, the ratings were accumulated and assigned as scores to each country. The scores were then normalized to a scale of 0 to 10, using a linear normalization function (Eq. 1):

x_normalized = (x − x_minimum) / (x_maximum − x_minimum)    (1)

This was done to ensure the uniformity of all the country data in terms of comparison and data visualization.

In total, 29 sub-Saharan African countries were surveyed: 11 Francophone and 18 Anglophone, drawn from East, West, Central, Southern, and Northern Africa. The countries surveyed include Algeria (11), Benin (11), Botswana (4), and Burkina Faso (10), among others. The experience of the researchers in the local transportation context in 12 countries (indicated in italics) out of the 29 countries studied was used as a basis to validate the data.
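To make the scoring step concrete, the sketch below applies this linear normalization to a set of accumulated country scores. It is an illustration only; the raw scores are hypothetical and the function is not taken from the study.

```python
def normalize_scores(scores: dict[str, float], scale: float = 10.0) -> dict[str, float]:
    """Min-max normalize accumulated document ratings onto a 0-to-scale range."""
    lo, hi = min(scores.values()), max(scores.values())
    span = hi - lo or 1.0  # avoid division by zero if all scores are equal
    return {country: scale * (s - lo) / span for country, s in scores.items()}

# Hypothetical accumulated ratings per country.
raw = {"Algeria": 11, "Benin": 11, "Botswana": 4, "Burkina Faso": 10}
print(normalize_scores(raw))
# {'Algeria': 10.0, 'Benin': 10.0, 'Botswana': 0.0, 'Burkina Faso': 8.57...}
```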
It should be noted that no contact was made with stakeholders in the various countries; as such, the data collection was limited to sources available online. This is identified as the main limitation of this study. Another limitation is that the accumulated count of policy documents found on the disability areas highlighted in the checklist was used as a measure of the extent of inclusiveness of each country. The authors acknowledge that the availability of documents alone is a limited metric for determining the disability inclusiveness of a country's transportation policy. However, it was assumed that documents that were not available online would also be difficult for local practitioners to access and apply.
The second part of the study uses existing, secondary data for South Africa to assess the level of isolation for the vulnerable population group identified, that is, PWDs. The South African National Household Travel Survey (SANHTS) raw data were used to conduct the analyses [42]. This database is the most comprehensive transport dataset currently available in South Africa. Data collection took place between January and March 2013; a total of 51,341 households and/or dwelling units were sampled, using a random stratified sample design. Within the households, 157,273 respondents shared their transport information and opinions. Statistics South Africa, using a multitude of data available to them, created a weighting value for every household and person in the database to represent the whole South African population. All analyses in this paper applied this weighting.
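The weighted comparisons reported below (weekly trip rates for persons with and without a given impairment) can be computed along the following lines. This is a hedged sketch, not the study's code; the column names used here (weight, trips_per_week, has_vi) are hypothetical stand-ins for the SANHTS fields.

```python
import pandas as pd

def weighted_trip_rate(df: pd.DataFrame, mask) -> float:
    """Population-weighted mean weekly trips for the rows selected by mask."""
    sub = df[mask]
    return (sub["trips_per_week"] * sub["weight"]).sum() / sub["weight"].sum()

# Hypothetical person-level records mimicking the SANHTS structure.
persons = pd.DataFrame({
    "weight":         [250.0, 310.0, 180.0, 400.0],
    "trips_per_week": [12,    3,     10,    4],
    "has_vi":         [False, True,  False, True],
})

rate_no_vi = weighted_trip_rate(persons, ~persons["has_vi"])
rate_vi = weighted_trip_rate(persons, persons["has_vi"])
print(f"Trip-rate reduction for VI: {100 * (1 - rate_vi / rate_no_vi):.1f}%")
```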
Results
In this section, the result of analysing the African country documents regarding their inclusivity of PWDs, in general, or UD specifically, is presented and compared to international best practices. The results also allow for a consideration of the level of isolation experienced by PWDs, as they are excluded from planning documents and institutional guidelines. Specifically, within the transportation field, this lack of inclusion manifests itself as a reduction in trip-making.
The document analysis results are presented as heat maps using Microsoft Excel for the visualization. For a given variable presented in the heat maps, a higher intensity, which is depicted by darker colours, indicates a higher score, hence, a greater level of inclusiveness for the given country in terms of that variable. The policy framework analysis and case study results are provided for the four themes that make up the earlier highlighted checklist, that is, VI, HI, MI individuals, and people with other impairments.
VI
People with VI face the risk of being injured by obstacles and falling due to uneven surfaces. Furthermore, depending on the severity of the VI, the use of vehicles (both bicycles and private cars) is prohibited. Navigating the outdoors is a definite challenge for people who are visually impaired. The use of increased contrast, highly visible colours, and improved street lighting, the use of sound and tactile pavers, and the application of barriers and railings can improve the outdoor experience of people with VI.
In the African context, the recognition of VI is scarce. Of the 29 countries investigated, six do not mention VI in their policy documents (Benin, Burundi, Cameroon, Liberia, Sierra Leone, and Zimbabwe), while another 10 only mention the disability but do not give any policy direction. The Ghanaian Persons with Disability Act 715 [13] is the most comprehensive document for people with VI, compelling social workers to start changing the physical space (see Fig. 2a).
The transport policy frameworks in eight of the investigated countries (Algeria, Botswana, Burkina Faso, Burundi, Eritrea, Liberia, Senegal, and Eswatini) do not include any accommodation of PWDs. The most inclusive transport-specific documents accommodating people with VI in the African context are found in South Africa. These documents legislate and guide the implementation of tactile paving, intersection design, and access to formal public transport (see Fig. 2b).
According to the SANHTS [42], South Africa has 4.15 million adults that have mild to severe VI, and this accounts for almost 15% of the population. In many cases, advanced prescription glasses can mitigate some of the negative impacts, improving the travel experience of the visually impaired. However, more severe cases of VI do experience reduced mobility, indicating a form of isolation.
In South Africa [42], over 2.2 million people (4.5% of the population) did not travel at all in the 7 days before the interview, because their disability or age prevented them from doing so, or due to a lack of appropriate services. For those who do make trips, the average number of trips per person per week for people with VI is reduced by almost 40% [42].
An analysis of the portion of trips for persons with/ without VI per income quintile was conducted to understand whether the main reason for reduced mobility, and the resulting isolation, was disability-related, or whether other issues, for example, low household income, were at the core of this isolation. Although the distribution per income quintile is not identical between the two population groups, there is no significant bias towards any income quintile (see Fig. 3). The isolation experienced is due to the VI experienced.
HI
Although less limiting, persons with HI do experience limitations when using the road network and transport services. In the Netherlands, a sign indicating a HI has been available for cyclists for over half a century to improve road safety. Scandinavian countries have similar signs, and there is research underway to improve and standardize the signs. People with HI cannot anticipate traffic coming from behind, causing a hazard. Even crossing the road can be challenging, which has also been confirmed in research with people without HI, when vehicles are electric [30].
When analysing the policy frameworks in African countries, as displayed in Fig. 4a, again six countries (Burundi, Cameroon, DRC, Liberia, Sierra Leone, and Zimbabwe) do not mention HI in any of their policy and legislation documents. A further 10 countries only mention HI superficially. Ghana is again most inclusive of people with HI in its Act [13], followed by Kenya [19] and Malawi [24]. Figure 4b reflects the findings for the transport-specific policy framework. The previously mentioned eight countries do not include any HI attributes in their transport policy frameworks either. Nigeria and Tanzania are most inclusive regarding HI-related transport policies. However, research indicates that people with HI "appear to be the most vulnerable group in Nigeria and many other African countries" [17,22]. Asonye et al. [3] found that children with HI are isolated from the public. According to the SANHTS [42], 0.5 million people in South Africa have HI. The effect of this HI is that trip-making is reduced by 47%, which is, according to the authors, an indication of isolation. Analysing the distribution within income quintiles, as done previously for VI (see Fig. 3), did not reveal a significant over- or underrepresentation for any of the population groups.
MI
Experiencing MI has several causes. Most known are neuromuscular and orthopaedic impairments. However, people suffering from high blood pressure, obesity, asthma, and the like also experience compromised mobility abilities. People with MI can use various aids, such as a walking stick, crutches, or wheelchairs, to improve mobility.
In Africa, seven (Algeria, Burundi, Cameroon, Liberia, Madagascar, Sierra Leone, and Zimbabwe) of the 29 countries investigated do not mention any support for people with MI at all, including wheelchair users, in their policy frameworks. Another two countries only make a brief mention of persons with MI. Interestingly, though, the other countries have a reasonable inclusion of MI aspects in their policy frameworks (see Fig. 5a). MI is also included in the Ghanaian policy framework, followed by Kenya and Malawi (in that order).
Eight (Algeria, Botswana, Burkina Faso, Burundi, Eritrea, Liberia, Senegal, and Eswatini) of the 29 African countries make no allowance in their transport policy framework for people with a walking stick, on crutches, or in a wheelchair. All other countries, on the other hand, have a good to very good inclusion of MI in their transport policy framework (see Fig. 5b). South Africa's transport policy documents are clearly superior regarding mobility aid requirements, followed by Tanzania and Malawi. South Africa specifies sidewalk surface requirements, the provision of drop curbs and intersection standards [40]. Furthermore, standards for the access of public transport are included. Unfortunately, the inclusion of MI in the transport policy framework does not guarantee improved practice. A prime example is included in Fig. 6, where the drop curbs (with tactile paving) have been incorrectly implemented; the sidewalk in one direction is discontinued after approximately 10 m, and the traffic light is an obstacle for any wheelchair user.
The SANHTS [42] established that 0.70 million people in South Africa use a walking frame or stick to aid their mobility, while 0.12 million people are wheelchair users. The same database identified a significant reduction in trip-making of over 65% by people with MI. Recent focus group interaction with users of mobility aids (50% crutches and 50% wheelchair) confirmed the difficulties. Non-conducive surfaces and drop curbs, obstacles, high speeds, and aggressive traffic make it impossible for this group to travel independently. Furthermore, overdimensioned cambers add great difficulties for people with MI, even with a mobility assistant. In this qualitative data collection, the trip frequency of people with MI was 50% lower than that of any other vulnerable group. The MI focus group indicated that public transport is not conducive to travel, and that they will not make a trip if a private vehicle with a driver is not available. This at times even leads to students missing classes. Overall, people with MI indicate that they feel vulnerable when using the road environment, since they are slower than their able-bodied counterparts. This affects their road safety and personal security perception. Furthermore, they use more energy when moving, which can cause fatigue.
Other impairments
Although the policy framework analysis did not yield specific information for people with concentration, self-care, or memory impairments, an analysis of the weekly trip rates of these PWDs, compared to the average adult South African, shows that trip-making is significantly reduced. The analysis revealed that concentration impairment (−52.9%), self-care challenges (−27.2%), and communication impairment (−35.6%) reduce mobility and contribute to isolation. Again, household income does not influence these findings significantly.
Discussion
The UN SDGs, more specifically Goal 11, and UD agencies call for more inclusive transport planning. Inclusive transport planning includes the accommodation of all road users, independent of gender, age, or ability. In this paper, the needs of PWDs have been unpacked. The literature provides a clear indication that transport is a burden for this population group. People with VI run the risk of falling and of walking into obstacles. HI affects the anticipation of other traffic, which increases the road safety risk, while MI (people with a walking stick, on crutches, or in a wheelchair) requires more movement energy, and the slower movement also increases the road safety risk. Although some authors find that policy documents, specifically in developing countries, are reasonably reflective of advanced disability concepts, other sources disagree, concluding that there is a continuing gap.
The findings from this study reveal that Africa still has a long way to go regarding the development and implementation of people-centric, inclusive transport planning. Many countries lack an appropriately conducive transport planning framework. Where general planning frameworks exist, such as in Ghana, the rights of PWDs are not translated into transport-specific policies and legislation. African countries must move towards a people-centric planning approach and embed this in their transport policy frameworks. Furthermore, following the UN recommendation, PWDs should have the opportunity to be actively involved in the development of transport policies and programmes.
South Africa has the transport policy framework that is most inclusive of PWDs. However, this has not led to an extensive improvement in practice, although some good examples do exist. Based on an analysis of the SANHTS data [42], PWDs are likely to be at risk of isolation due to the lack of appropriate transport infrastructure and service provision. When comparing trip rates per week, PWDs travel significantly less than their able-bodied counterparts; their trip rates are between 27% and 66% lower than those of their able-bodied, adult counterparts. Although various other socioeconomic factors also influence isolation, income was not significant for PWDs: in all income groups, PWDs make fewer trips.
Currently, across Africa, the lack of transport infrastructure and services to accommodate vulnerable road users, such as PWDs, which results from the lack of binding and enforceable policies, legislation, standards, and guidelines, serves to jeopardize vulnerable individuals' safety, security, freedom, and, therefore, dignity.
Making sidewalks, public spaces, and public transport accessible to PWDs will also improve the transport system for other vulnerable groups, such as women, children, and the elderly. It is very likely that the improvement of transport infrastructure and services will catalyze the use of more environmentally friendly modes, including non-motorized transport and public transport.
Conclusions
The literature related to PWDs is sparse, as established based on the inventory presented in Table 1, indicating a significant knowledge gap. When policy documents are reviewed, the inclusivity of PWDs is mostly conducted through content appraisal. This study applied the same technique, a decade after the last broad update on African countries. Studies related to isolation analysis of PWDs are often qualitative. This study uses quantitative data to assess the transport isolation of PWDs. Notwithstanding the value of qualitative data collection, it is recommended that other countries and continents also establish whether common household surveys can provide improved insights on the lived reality of PWDs.
This study highlights the state of the transport sector in many African countries with respect to the integration of PWDs. A major issue identified in the study is the fact that, due to the lack of consideration in transport policy and institutional frameworks and of accommodation in infrastructure and services, people with disabilities live less integrated, more isolated lives. The results, therefore, accentuate the need for disability-inclusive planning and practice in the African context. Along these lines, recommendations are made for the improvement of African policy with the goal of mitigating the isolation challenges faced by people with disabilities. In the short term they are as follows:

• An improved understanding of the needs of PWDs can be gained from the analysis of existing databases, as demonstrated in this paper, as well as from collecting new primary data. As PWDs are among the most vulnerable in any society, resource reallocation towards needs assessment projects is key.

• Infrastructure audits will have to go together with improved financial practices, where contractors are paid a substantial part of the contract worth only after the UD aspects are signed off.

• The implementation of universally accessible infrastructure and services is complex, and the "devil is in the details". Community leaders, nongovernmental organizations (NGOs), lobby groups advocating for better transportation policies, and any other individuals or groupings representing the needs of PWDs should be included in ongoing transport infrastructure implementation projects. They can play a custodial role, assuring that investments are most effective on an ongoing basis.
In the long term, the following can be initiated:

• Countries need to make sure that their constitutions, and other related policy documents, are people-centric and inclusive of PWDs and other vulnerable population groups. More inclusive countries have a PWD act that describes the needs and rights of PWDs.

• Once the rights of PWDs and other vulnerable groups have been identified, a translation of these rights into the transport policy framework is required. People-centric policies and legislation, the adoption of UD standards, and the development of guidelines that approach UD practices in a holistic manner are a good start.

• African countries need to invest in the translation of inclusive transport policy frameworks into practice to address the isolation experienced by PWDs. This will require the strengthening of human resource capacity in municipalities where infrastructure investments are made. A further possibility is the creation of infrastructure audit capacity, where new or refurbished infrastructure is assessed based on UD practices before opening to the public.

• African countries need to address the road safety burden (as well as the personal security threats) experienced by vulnerable road users, including PWDs. Besides improved infrastructure, countries can apply other road safety measures, such as improved enforcement and education. Improving road safety and personal security will reduce the isolation experienced by PWDs.

• Community leaders, NGOs, and lobby groups play an important role in the African context, including in the transportation arena. However, this paper did not investigate their potential role; it is, therefore, recommended that further research be conducted in this field.

• Further studies are recommended to establish the impact on other fields, such as the environment, of improving the transport infrastructure and services provided for PWDs and other vulnerable groups. Furthermore, the impact in other fields, such as access to education or jobs, should be estimated for PWDs and other vulnerable groups.
Changing the way impacts are assessed will go a long way towards changing funding streams that are currently biased towards unsustainable, motorized, and private modes used by able-bodied individuals.
The Dilemma of Bose Solids: is He Supersolid?
Nearly a decade ago, the old controversy about possible superfluid flow in the ground state of solid He-4 was revived by the apparent experimental observation of such superflow. Although the experimentalists have recently retracted, very publicly, some of the observations on which that claim was based, other confirming observations, of which there is no reason for doubt, remain on the record. Meanwhile, theoretical arguments bolstered by some experimental evidence strongly favor the existence of supersolidity in the Bose-Hubbard model, and these arguments would seem to extend to solid He. The true situation is thus apparently extraordinarily opaque. The situation is complicated by the fact that all accurate simulation studies on He use the uniform-sign hypothesis, which confines them to the phase-coherent state, which is, in principle, supersolid, so that no accurate simulations of the true, classical solid exist. There is great confusion as to the nature of the ground-state wave function for a Bose quantum solid, and we suggest that until that question is cleared up, none of these dilemmas will be resolved.
Simulation studies of high accuracy on pure He-4 have, on the whole, failed to find any evidence of these phenomena, and a view has become popular that the whole suite of effects is caused by dislocation pinning and unpinning and that supersolidity has no theoretical validity. This view was reinforced recently by the very public retraction [2] of some, but not all [3], of the TO results by the originator, Moses Chan, and very careful and definitive studies on single-crystal elasticity reinforce this view, though themselves presenting some interesting dilemmas [4].

A NOTE ABOUT SIMULATIONS

In 2004, at the time of the original observations, there was no definitive theoretical treatment of Bose solids available other than simulations. A feature of the simulations which has not been given enough notice is that, in a sense, they beg the question of superfluidity: the success of the simulations in reproducing the physical properties of solid He is without exception based on taking advantage of the absence of the "sign problem" for Bose systems. The assumption is made that the wave function does not have any zeroes, and that the wave function is real and positive everywhere. Thus every simulation is of a sample which is totally free of vortices, which are zeroes of the complex amplitude of the Bose field, while a true "classical" solid is by definition a condensate of vortices: no accurate simulation has thus ever been carried out of an actual physical sample of solid He at, say, 0.1 to 1 K, and we have no comparison, via simulations, of the two states of matter: solid helium without vortices and with rigid phase restrictions, as compared to helium without rigid phase, in which the bosons can be confined to their separate sites.
HEURISTIC HAMILTONIAN
The present author proposed a heuristic Hamiltonian [5] for supercurrents [6] but until recently [7] had not given it a particularly sound foundation. This Hamiltonian describes the supercurrents as the result of a dependence of the energy on a phase variable φ(r), which is the average phase of the local Bose field ψ(r). In a lattice, the simplest form for such a Hamiltonian would be the "x-y" model,

H = −Σ_⟨ij⟩ J_ij cos(φ_i − φ_j),   [1]

for which j_ij ∝ J_ij sin(φ_i − φ_j) is the particle current. The current will be divergenceless with reference to the site lattice, because the frequency scale of J is much smaller than the Debye frequency, so that atoms cannot accumulate and any motions on a slow time scale have to be incompressible. If [1] is a component of the Hamiltonian, at T = 0, φ will be uniform, φ_i = φ_j. Rotation will lead to a vector potential which causes a supercurrent to flow, as described in my earlier paper, so this is a superfluid in that sense. At some T of order J the system will undergo a thermal phase transition, in 2D of BKT type or in 3D a 3D x-y transition. But the dynamics is not trivial, because the heuristic Hamiltonian [1] refers to phases on the sites; for instance, a static shear of the site lattice does not cause a vector potential in the phase field. As is the case with superconductors, the phase order parameter is a topological one, not a locally determined object: it can only be affected by overall rotation or a change in boundary conditions. The reason why currents flow with ease into and out of superconductors is the existence of Andreev scattering, which converts pairs into normal current, and there is nothing like that here. In a pure sample only the equivalent of diamagnetic currents can flow (but see below).

BOSE-HUBBARD MODEL

In the past few years another avenue toward Bose solids has become available: the Bose-Hubbard (B-H) model, which can be accurately modeled by cold atoms in an optical lattice [8]. A very straightforward argument [9] can be constructed that the Bose-Hubbard model in its ground state always has at least a small superfluid density if D ≥ 2, even when the interaction parameter U/t is quite large, and the experimental observations on cold atoms, and simulations, favor this conclusion, as we will explain below, although they have not yet been adequately analyzed. The difference between the Bose-Hubbard model and a true Bose solid is that the former has a predetermined periodic lattice, while the lattice of the solid is formed self-consistently. The effect is to eliminate phonons as excitations in the B-H model. But as we said above, phonons do not couple very effectively to these divergenceless currents. A second, and mathematically very simplifying, difference is that the Hilbert space of wave functions in the B-H model is restricted to be only N-dimensional, where N is the number of lattice sites: one state and one boson wave function per site. Nonetheless this system has enough freedom that it can describe a superfluid, a true solid, and, as we will see, a supersolid. The Bose-Hubbard model consists of the Hamiltonian

H = −Σ_ij t_ij b_i† b_j + (U/2) Σ_i n_i(n_i − 1) − μ Σ_i n_i.   [2]

It is well understood that for large values of U/t and low T the only stable phases are those with integer ⟨n⟩, and μ = U(⟨n⟩ − 1). The simple case n = 1, μ ≅ 0 exhibits all the interesting physics. The obvious trial solution for the ground state is the site-localized product state

Ψ₀ = Π_i b_i† |vac⟩,   [3]

and the single-particle elementary excitation energies are determined by the equations of motion for particles (Eq. [4]) and for holes (Eq. [5]).
These two equations, if taken literally and solved straightforwardly, simply give us two bands of running-wave solutions with an intrinsic gap of U. But this is not the correct approach to the Hartree-Fock conceptual structure in the Bose solid case. The Hartree-Fock concept, of seeking a product trial function, is straightforward if one is looking variationally for a product of extended wave functions, and in the Fermion case, thanks to Wannier's theorem and the exclusion principle, this is always possible. But in the Bose case the correct trial function is of the form [3], a product of local functions. Unlike the Fermion case, there is no equivalence to a product of Bloch extended functions. These local functions are not orthonormal and are not equivalent to a product of running Bloch waves; each satisfies a different wave equation, as I pointed out in [10]. The Bose-Hubbard model starts with the assumption that the local functions are an orthonormal set; it is equivalent to defining a set of orthogonal Wannier functions φ(r − r_i) for the lattice and defining

b_i† = ∫ d³r φ(r − r_i) ψ†(r),   [6]

which obey canonical commutation rules, and assuming that the set of φ's is sufficiently complete to describe most low-energy behavior. The trial function [3], however, is clearly not the ground state, because the Hamiltonian contains the matrix elements t_ij, which connect to states with doubly occupied sites. Therefore the above solution is only metastable. Following Kohn [11], we attempt to eliminate the matrix elements between low-energy and high-energy states perturbatively in succeeding orders of t/U, but in this case we cannot use a unitary matrix e^{iS} for the canonical transformation; we must make a linear, nonunitary transformation of the b_i's into a nonorthogonal set b_i' (Eq. [7], where ‖ ‖ denotes the permanent). Our trial function is now the corresponding product of the b_i'† (Eq. [8]). The overlap coefficients are to be determined perturbatively in powers of t/U. A possible procedure is to ask for an equation of motion for a hole, in the potential which exists after the particle has been removed (this is similar to the procedure in ref. [10]). We replace Eq. [5] by Eq. [9]: in other words, the same equation as [4], except that the particle at site i has been removed, since it cannot experience its own repulsive potential when it is part of the ground state. A procedure essentially equivalent to this one was carried out to a high degree of accuracy as a series in t/U by H. Monien and N. Elstner [12] some years ago, essentially by iterating Eqs. [4], [5], and [9] perturbatively. Their work amounts to a proof that the iterative procedure based on the localized orbitals [7] converges as a series in t/U up to a critical value, which is the value at which the "Mott solid" no longer exists. How do we demonstrate the existence of superflow in this wave function? The criterion we use was proposed by Kohn, who used it to demonstrate the nonexistence of flow in a Fermionic Mott insulator: whether the energy is affected by a change in boundary conditions, which is equivalent to applying a uniform vector potential A to the system, which is equivalent to rotating a toroidal sample. As is well known, dE/dA = J. For a conventional Fermion solid or insulator, we know that the bands are either full or empty; the crystal momentum states are equally occupied, and a uniform shift by A cannot change the kinetic energy (see figure).
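A compact way to restate this criterion, sketched here under simplifying tight-binding assumptions (the symbols E(A), ε(k), and n(k) are mine, not the paper's), is the following:

```latex
% Twisted boundary conditions (vector potential A) shift every crystal
% momentum: k -> k + A. For a band \varepsilon(k) occupied with n(k):
\begin{align*}
E(A) &= \sum_{k} n(k)\,\varepsilon(k + A), \qquad J = \frac{dE}{dA},\\
\rho_s &\propto \left.\frac{\partial^{2} E}{\partial A^{2}}\right|_{A=0}.
\end{align*}
% If n(k) is uniform over the Brillouin zone (full or empty bands), E(A)
% is independent of A and the response vanishes. A nonuniform n(k),
% weighted toward low k as for nonorthogonal Wannier functions, makes
% E(A) depend on A, i.e., gives a nonzero superfluid response.
```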
The quantity P(k) is directly measured in cold atom experiments, and in fact in the higher range of t/U the measurements seem to exhibit a diffuse peak just above where the sharp condensate line disappears (see figure). The measurement is complicated a bit by the sample inhomogeneity, but the existence of a peak is supported by the fact that it appears in Monte Carlo simulations of the Bose-Hubbard model as well. The reasoning above as to why such a peak implies supersolidity is simple and rigorous, and seems to give strong, if indirect, support to the existence of supersolidity in this case.

ELASTICITY

The parameters J_ij are functions of the interparticle distance r_ij, of course, and therefore when the phase is ordered the elastic constants are not the same as those of the randomly phased conventional solid. This obvious fact seems to have been ignored by all who have discussed the controversy, and it has been universally assumed that the elasticity results contradict supersolidity. Qualitatively, the phase-ordered solid should be stiffer than the conventional one, since the motivation for phase-ordering is to increase the binding energy; therefore the results of Beamish are more confirmation than otherwise. The problem, of course, is to explain the magnitude of the effect, and as I said above none of the simulations are of any value in this. The elastic constants depend on second derivatives of J with respect to r and thus may be extremely sensitive. Also, hexagonal He has a remarkably soft c44, so the r-dependence of the J's will have a non-negligible effect.

HE3

Finally, let me bring up another effect which has not been discussed at all in the theoretical literature: the effect of supersolidity on He3 "impurities". It is easily understood that if the solid is a "condensate of vortices", so that the hopping matrix elements are phase-averaged out, both of the heliums are essentially localized quantum-mechanically. But in the phase-ordered state it becomes possible for He3 to undergo quantum diffusion. That is, there is a matrix element connecting the state with He3 localized near site i to that with it on site j. In the simplest Bose-Hubbard version of this situation, we have a hopping integral t′ for He3, while that for He4 is t; the effective bandwidth for He3 will then be tt′/U, taking the need for backflow into account. One would estimate that t′ will be considerably larger than t, so that the He3 bandwidth will be bigger than J. Unfortunately it is not obvious how to take advantage of this to explain the magnitude of the effects of He3.
What we can say is that He3 will become a mobile entity when the He4 solid turns supersolid.
There is a way to test experimentally whether the scenario suggested by the above considerations is correct. I have repeatedly suggested that high-sensitivity NMR studies on the He3 impurity be undertaken, and it appears that this change in diffusion behavior may be a reliable test for supersolidity. There is a popular alternative scenario, centered around dislocation pinning by He3 impurities, which has been advanced to explain the elasticity results, in particular the very detailed studies of large single crystals carried out by Balibar, Beamish et al., which show that it is only the c44 modulus which seems to have the strong temperature dependence (see figure, from reference 5). This is the modulus which represents shear of the hexagonal planes relative to each other, and would be affected by arrays of dislocations (or half-dislocations) lying in these planes. The measurements are very striking and their details suggest strongly that dislocations are part of the story. But recently 14 they have discovered that there is a transition when the pinning centers themselves become mobile.
CONCLUSIONS AND THOUGHTS
Many of the difficulties in understanding the data come from looking for the wrong thing. ODLRO, for instance, is not a property of the supersolid state; it isn't even a property of superconductors. 15 I was among those captured, early on, by the idea that physical vacancies were necessary, and they are not. The relevant phase comes in via the interference between the local bosons of the ground state, which occurs when they are not orthogonal and hence do not commute. In every way, coherence makes the lattice more stable: for instance, the nonorthogonal local functions are more localized and smaller than orthogonal ones, and may even interchange less often.
The improvement in binding energy may not be negligible, though it is a mystery that it shows up so little in the specific heat; but are we sure that the phase system is in equilibrium?
The Chan specific heat peak 16 takes on real importance. I don't think any of the dislocation array theories account for enough entropy to explain it. Impurities make superconductors work better; why not supersolids? Are the impurities pinning vortices rather than dislocations, or both? Or are they, as I suspect, necessary to achieve equilibrium? In general, I would like to encourage people to look at the real, theoretically sound possibilities; for instance, cold atoms need to be carried to much colder temperatures to confirm NCRI, which is firmly and quantitatively predicted by theory in that case.
A Theoretical Framework for Understanding Pedestrian Behaviour Attributes Based on Spatial Interaction
Current trends show an exponential increase in rail traffic passenger volume in Malaysia, prompting the authorities to improve pedestrian facilities and safety. Kuala Lumpur Sentral Station (KLSS) is the largest transportation hub in Malaysia and, as of May 2017, serves up to 180,000 users daily. In this contribution, a theoretical framework is presented to understand pedestrian behaviour attributes at a rail transit terminal (RTT) based on pedestrians' spatial interactions. The Distinct Element Method (DEM) will be employed to model the dynamics of pedestrian crowd behaviour. The validated crowd dynamics model will be demonstrated in evacuation capacity estimation in KLSS. Evacuating people quickly and safely can greatly reduce casualties and economic losses. This contribution focuses on the spatial interaction of pedestrians at the RTT and on the proposed framework, and describes in detail the flow of conducting the study of spatial interaction.
Introduction
The dynamics of pedestrian crowd behaviour at Rail Transit Terminals (RTTs) in Malaysia has not been a research focus, and according to Bohari [1], public buildings such as nodes, terminals and stations experience movements of people that generate massive walking behaviour. Kuala Lumpur Sentral Station (KLSS) is the biggest RTT in Malaysia, located at the heart of Kuala Lumpur, and connects almost all public transport facilities such as rail (KTM, LRT, Monorail and MRT), buses and taxis in one hub, with up to 180,000 users daily (The Star Online, 2017). Therefore, it is crucial to study pedestrian behaviour to address the issues of security, convenience, facility utilization, space structure and equipment layout of the terminal.
On the other hand, pedestrian spatial interaction represents the pedestrian's behaviour towards the space around them. In the recent literature, some studies or guidelines address spatial interaction as proxemics behaviour, personal space, shy distance, buffer zone, cushion, psychological space or perception domain. Although it goes by many names, the main objective of spatial interaction is to determine the pedestrian's behaviour towards other pedestrians or obstacles around them. The Straits Times Online (2017) reported that the East Coast Rail Link (ECRL), linking the east coast to Port Klang on the west coast, is slated to be completed in 2024, with an estimated 5.4 million passengers and 53 million tonnes of cargo using the service annually by 2030. Thus, studying the dynamics of pedestrian crowd behaviour at RTTs in Malaysia based on pedestrian spatial interaction is crucial to support the increasing demand for rail services.
Pedestrian Behaviour Attributes
Pedestrian behaviour is normally interpreted in terms of speed, flow and density, and rarely in terms of evacuation time. Tables 1 and 2 list the past studies of pedestrian behaviour attributes (Patra [3]; Von [4]; Han and Liu [5]; Zhang [6]; Li [7]; Zhao and Liang [8]; Zhao [9]; Zhao [10]; Gotoh [11]; Yeo and He [12]); most researchers study pedestrian behaviour based on walking speed, followed by flow, density and evacuation time. Walking speed is thus the most studied pedestrian attribute so far (Tables 1 and 2). Pedestrian speed is mostly determined by analysing video recordings of pedestrians walking a fixed distance (usually 1.5 m) over time. Studies conducted in Malaysia, India, China, Japan and Singapore show different pedestrian walking speeds. According to Bohari [1] and Abustan [2], the walking speed of pedestrians at Masjid Jamek LRT Station varies from 1.12 m/s to 1.48 m/s, and at Miami Beach, Penang, from 0.89 m/s to 1.38 m/s. Higher walking speed was observed at Masjid Jamek LRT Station, a rapid transit station in Kuala Lumpur, Malaysia, compared to Miami Beach, a recreational place in Penang, Malaysia. Although the two places have different purposes, the difference in walking velocity is about 20%. As reported by Yeo and He [12], pedestrian walking speed at MRT stations in Singapore is in the range of 1.04 m/s to 1.30 m/s, a 12% difference from Bohari [1]. Zhao and Liang [8] from China reported that pedestrian walking speed at the Guangzhou Metro varies from 0.92 m/s to 1.18 m/s, a 20% difference from Bohari [1]. Meanwhile, Secunderabad Railway, India recorded the slowest walking speed, 0.65 m/s (Patra [3]). This may be due to the substantial number of daily users, which reaches up to 23 million pedestrians, combined with comparatively sparse level-changing facilities, which become bottlenecks most of the time. Japan, the country with the most advanced rail technology, recorded a pedestrian walking speed of 1.47 m/s [8].
Pedestrian level-of-service (LOS) is calculated by counting pedestrians who cross a point over a certain period (usually 15 minutes), reducing the figure to pedestrians per minute, and dividing by the effective width, which produces the flow rate (Highway Capacity Manual (HCM), 2010); this computation is restated in the sketch after this paragraph. The LOS is characterised from A, for free flow, to F, where virtually no movement is possible. Not many studies on pedestrian flow have been conducted. Patra [3] found that the pedestrian flow at Secunderabad Railway, India is 24 ped./m/min, while Zhao [10] found that the pedestrian flow at the Guangzhou Metro, China was 58 ped./m/min, a difference of up to 59% between China and India. This scenario may be due to physical differences between pedestrians in those countries, such as body size, and to the weight and physical dimensions of the pedestrian facilities. Unfortunately, no study of pedestrian flow in Malaysia, especially at RTTs, was found. Density is the number of pedestrians present in an area at a given moment, noted as p/m². Patra [3] recorded a pedestrian density of 1.5 p/m² at Secunderabad Railway, India, and Zhang [6] found a density of 1.83 p/m² at Wuhan Metro Station, China, an 18% difference between China and India. This shows that pedestrians in India need more space compared to China. The Malaysian case may differ due to cultural differences.
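A minimal restatement of the HCM flow-rate computation described above (the function name and example values are ours, for illustration):

```python
def flow_rate_ped_m_min(count_15min: int, effective_width_m: float) -> float:
    """Pedestrian flow rate (ped/m/min) from a 15-minute point count,
    following the HCM procedure described above."""
    return count_15min / 15.0 / effective_width_m

# Example: 540 pedestrians counted in 15 minutes over a 1.5 m effective
# width gives 540 / 15 / 1.5 = 24 ped/m/min, the value Patra [3] reports.
```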
Evacuation time is important for an effective building egress process. Evacuation time is the elapsed time between the instant that occupants receive an emergency alarm and their arrival at a destination, normally a safe location inside or outside the building. Many factors can affect the evacuation time, such as group forming, information transmission, route planning, bottleneck effects and the code of practice chosen. Von [4], studying the effect of social groups on evacuation time, showed that a queue in front of the door can make evacuation faster compared to a broad distribution around the exit. Information sharing is important, especially in a pristine environment: Han and Liu [5], using a modified social force model based on information transmission to model the evacuation process, effectively shortened the evacuation time and improved evacuation efficiency. Besides, Zhang [6] reported that improving the route-planning strategies for the evacuation process helps reduce the evacuation time by choosing the shortest evacuation route, although this approach has limitations since subjective factors like the physiology and psychology of the crowd were not considered. Most rail stations have turnstiles that cause a bottleneck effect during emergency evacuation and prolong the evacuation time, according to Li [7]. Zhao [9] suggested that the choice of the right code of practice will affect the evacuation assessment of a building: in their study, the Chinese code tends to give errors in the resulting evacuation time, while the American and Japanese standards show certain superiorities in that they correctly reflect the effect of the structural layout of the station and predict bottlenecks.
Although much research has been done on this matter, Malaysia still lacks studies of pedestrian behaviour at RTTs. Pedestrian behaviour attributes such as speed, flow, density and evacuation time need further study to help improve the operation and safety of pedestrians at transport terminals in Malaysia. A lot of work needs to be done, since the above issues have many applications and impacts on society. RTTs in Malaysia are becoming a transportation mode of choice; hence, the necessity to understand the specific behaviour of passengers in RTTs is crucial. This understanding will become reference information for traffic engineers designing walking infrastructure elsewhere (e.g., shopping malls, sidewalk facilities, stadiums, crosswalk facilities, etc.). Furthermore, with the establishment of a crowd behaviour model, an analytical evaluation of the quality of urban space can be provided.
Spatial Interaction
Many models of crowd behaviour have been proposed to understand how pedestrians move; however, modelling and visualizing the dynamics of pedestrian crowd behaviour in relation to pedestrian spatial interaction, particularly at railway stations, has not been done in Malaysia. A better understanding of crowd behaviour in railway stations is the key to planning and managing pedestrian flow in Rail Transit Terminals (RTTs). Many names are used for spatial interaction, such as the "shy distance" of the Highway Capacity Manual (HCM, 2010), clearly defined as the space that pedestrians tend to keep between themselves and obstacles; some researchers refer to the "shy distance" as a "buffer zone" or "cushion". The shy distance was estimated to be 30 cm to 45 cm and is affected by the number of pedestrians, the time of day, and the surrounding land use. Campanella [13] studied the microscopic modelling of walking behaviour using the Nomad simulator, which uses three levels of spatial isolation: isolated, in-range and in-collision. Isolated is the stage where a pedestrian has no other pedestrians or obstacles within their influence area. In-range is the stage where a pedestrian has other pedestrians or obstacles inside or close to their influence area but no possibility of colliding. In-collision is the stage where a pedestrian is very close to another pedestrian or to an obstacle. A simple classification of these three levels is sketched below.
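A minimal sketch of the three-level classification; the thresholds are illustrative parameters, not values from the Nomad simulator:

```python
def spatial_isolation(distance_m: float, contact_radius_m: float,
                      influence_radius_m: float) -> str:
    """Classify a pedestrian's relation to a neighbour or obstacle into
    Campanella's three levels of spatial isolation."""
    if distance_m <= contact_radius_m:
        return "in-collision"   # very close: contact occurring or imminent
    if distance_m <= influence_radius_m:
        return "in-range"       # inside the influence area, no collision yet
    return "isolated"           # nothing within the influence area
```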
Some studies implement spatial interaction under different concepts, as shown in Table 3. Of those studies, only two on spatial interaction were at rail stations, each with a different concept, and none at an RTT. Guo [14] studied spatial and temporal separation rules to reproduce self-organizing movement through a bottleneck (a train door) and indicated that the efficiency of pedestrians passing through the bottleneck can be improved by implementing the spatial separation rule: pedestrians passing through the middle tend to move to the right, and pedestrians passing near the two sides of the door tend to move to the left. Meanwhile, Yang [15], studying a guided-crowd modified social force model to predict the evacuation of Beijing South Railway Station, found that pedestrians in a group who choose to follow a guide, instead of walking independently before knowing the position of the exit, can escape with a larger velocity. Both of these concepts were studied in China. Do Malaysian RTT users follow the same spatial interaction patterns as in China, especially in the evacuation process described by Yang [15]? Therefore, it is important to study spatial interaction to know the pedestrian behaviour at RTTs in Malaysia. Table 3. Concepts of spatial interaction used in the previous studies.
Author                 Concept
Mohd Ibrahim [16]      Game theory
Von [4]                Lane formation
Han and Liu [5]        Group formation
Liu [17]               Effect of wall and door
Campanella [13]        Modelling of walking behaviour
Guo [14]               Bottleneck effect of train door
Yang [15]              Guided crowd at railway station
Fridman [18]           Impact of cultural differences

Mohd Ibrahim [16] used game theory as the concept of their study and arrived at 40 cm as the spatial distance and 130° as the maximum angle between conflicting agents. Whether real-life Malaysian pedestrians at RTTs behave as this game theory predicts still needs to be clarified. A study from China conducted by Liu [17] arrived at a spatial distance of 46 cm. Liu [17], studying the effect of walls and doors on the distribution of pedestrians in a room, mentions that pedestrians tend to locate themselves near the boundary (wall) and far from the entrance (door); this can be one consideration in determining pedestrian spatial distribution at RTTs. Since there are no studies of spatial distribution at RTTs in Malaysia, a study needs to be done to provide accurate information for the future development of RTTs. Fridman [18] studied the impact of cultural differences on personal space, or spatial distance, through video recordings, and found that the spatial distances for Iraq, France, England and Canada are 32.7 cm, 41.7 cm, 50.3 cm and 67.9 cm respectively. Among those countries, Iraq and France have the closest spatial distances to the value produced by the game theory study from Malaysia. Since Malaysia and Iraq are both Asian countries, they may share cultural similarities, which could explain the mere 18.25% difference in spatial distance. France, a European country, may have a culture divergent from Asia, but the Eiffel Tower in Paris, where most visitors are families, couples, friends and close relatives, falls under the intimate-distance category of interaction zones (0 cm to 45 cm), giving a 4.25% difference in spatial distance from the game theory value. Modelling and simulation is one of the most used methods to study pedestrian spatial interaction. Gotoh [19] developed a Distinct Element Method (DEM)-based crowd behaviour simulator, which was applied to a simulation of evacuation from a tsunami, and Gotoh [11] modified the governing equation by adding a self-evasive force to develop a DEM-based multi-agent model with a self-evasive action model. Figure 1 illustrates the perception domain used by Gotoh [11] to describe the model of vision, which is symmetrical to the travelling direction of each individual pedestrian, for both the physical and psychological repulsive forces. The physical repulsive force acts when the distance between pedestrians i and j, or between pedestrian i and a virtual wall element, is less than or equal to the average diameter of pedestrians i and j, or of pedestrian i and the virtual wall element. Meanwhile, the psychological repulsive force acts when the distance between pedestrians i and j is less than or equal to the psychological radius, as shown in Figure 1. The angle of vision and the psychological radius are the elements considered in determining the perception domain of the psychological repulsive force; the psychological radius is interpreted as the representative length scale of personal space. Studying spatial interaction is crucial in planning and managing pedestrian flow in RTTs. The dynamics of crowd motion is mainly driven by local interactions among pedestrians and their surrounding environment.
The DEM will be used to model the dynamics of crowd behaviour in the RTT by considering the interactions between pedestrians and between pedestrians and the physical environment. The pedestrians are considered as an assembly of rigid-body particles; conceptually, the particles move according to Newton's second law of motion.
A theoretical framework
3.1 Introduction
This proposed theoretical framework serves to model and simulate the dynamics of pedestrian crowd behaviour in a Rail Transit Terminal (RTT) and hence to assess the repercussions for the evacuation behaviour of the crowd during an emergency. In relation to that, a study of crowd evacuation is piloted at the largest transportation hub in Malaysia, the KL Sentral Station (KLSS), Kuala Lumpur. Figure 2 shows the ground floor plan (noted as Level 1 by the building authorities) with the estimated dimensions of KLSS. KLSS is chosen as the study area due to the exponential increase in rail traffic passenger volume over the 15 years since it was built. The Distinct Element Method (DEM) will be employed to model the dynamics of pedestrian crowd behaviour. To confirm the reliability of the model, validation is performed by statistical analysis. Validation through simulation and reproduction of walking scenarios in KLSS will also be performed, with subsequent comparison against real-life walking scenarios in KLSS. Then, the validated crowd dynamics model is demonstrated in evacuation capacity estimation of the egress facilities in KLSS. Evacuating people quickly and safely can greatly reduce casualties and economic losses. The outcomes of this study will be beneficial for the future planning and design of mass transit and transit spaces. The objective of this research is to determine crowd behaviour attributes in the RTT, such as walking speed, spatial pattern, spatial interaction parameters (distances and angles) and local collision avoidance during locomotion. Besides, this research is conducted to formulate the dynamics of crowd behaviour based on empirical results obtained using the DEM, to assess the evacuation capacity of egress facilities in the RTT, and to determine the evacuation time. The hypotheses of this research are that, as the number of passengers per unit area (density) increases, the spatial pattern of walking pedestrians has a negative effect on walking velocity; that pedestrians interact with other pedestrians through their perception domain, keeping a constant interaction distance and angle between pedestrians and the physical environment; and that pedestrians have a psychological radius to avoid collision with oncoming pedestrians. Furthermore, it is hypothesized that the maximum pedestrian flow that can pass the evacuation bottleneck section of egress facilities in the RTT must be within given times.
From these hypotheses, it is important to know how crowds organize in space and affect crowd dynamics, how walking pedestrians interact with each other, how pedestrians in a crowd avoid collisions, and what the condition of the current evacuation capacity of egress facilities in the RTT is.
Flow of Methodology
The activities involved in this research are field observations and video footage at KLSS. The gathered video data will be analysed using Adobe After Effects CS6, Autodesk MAYA 2016 and the HBS tool. The empirical results obtained have important implications for the validation of a model to replicate crowd dynamics in KLSS. The DEM-based model will be established to portray the crowd dynamics of Malaysian pedestrians in a railway station. This crowd dynamics model will describe how a person interacts with other pedestrians and the physical environment. Validation between simulated scenarios and the real situation is done by comparison, taking video images as references. The effect of the newly developed model is shown in the ability of pedestrians to avoid collisions with oncoming pedestrians. After the reliability of the model is confirmed, the evacuation capacity (EC) of egress facilities in the RTT will be assessed. The overall assessment of EC will be a basis for quantifying the overall evacuation time in the RTT, and for examining whether the capacity of the RTT can meet the evacuation demand under emergency.
This research will be conducted in five phases, starting from data gathering and ending at the result-obtaining stage; those five phases are described in this sub-topic.

3.2.1. Phase 1: Data Gathering. Data collection for the study area is a crucial part of this study. The data needed are: the detailed layout of KLSS, the population distribution in KLSS, and human crowd motion. Human crowd motion, particularly walking pedestrians, will be collected through video footage. The population distribution is obtained through a survey, and the detailed layout of KLSS will be obtained from the authority.

3.2.2. Phase 2: Data Analysis. During this phase, the video films will be gathered and brought to the laboratory for analysis. There are three stages involved in video analysis, and three different software packages will be used. The video analysis will, first, convert the video to an image sequence; second, track pedestrian trajectories; and third, determine the average walking velocity, physical and psychological contact areas, psychological radius and angle of vision. The three software packages are Autodesk MAYA 2016 (MAYA), Adobe After Effects CS6 (AE) and the HBS tool (an in-house developed tool). Table 5 shows the process involved in this phase. The end product of this phase will reveal features of pedestrian crowd behaviour, as stated in the outcome of Stage 2 in Table 5 and Figure 1. Velocity is used instead of speed because the pedestrians walk with direction, as shown in Figure 1, and the case study involves walking in varied directions; in comparison, most of the previous studies described in sub-topic 2.1 used speed because they were experiment-based studies with fixed walking direction and distance. A sketch of the velocity computation from a tracked trajectory follows.
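A minimal sketch of the Stage 3 computation, assuming the tracking stage yields timestamped ground-plane coordinates for each pedestrian (the function name and inputs are illustrative):

```python
import numpy as np

def mean_walking_velocity(t_s, x_m, y_m):
    """Average walking velocity (m/s) of one tracked pedestrian.

    t_s, x_m, y_m: equal-length arrays of timestamps (s) and ground-plane
    coordinates (m) produced by the trajectory-tracking stage."""
    t = np.asarray(t_s, dtype=float)
    x = np.asarray(x_m, dtype=float)
    y = np.asarray(y_m, dtype=float)
    path = np.hypot(np.diff(x), np.diff(y)).sum()   # total path length
    return path / (t[-1] - t[0])                    # path over elapsed time
```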
Phase 3: Modeling and Simulation.
To model the dynamics of the crowd behaviour, the DEM (Figure 1) is employed. The DEM is known as a suitable method for the simulation of the dynamic behaviour of an assembly of particles. This approach explicitly provides the mechanical behaviour of the individual particles and their contacts, and its computational modelling framework allows finite displacements and rotations of the discrete bodies. The Phase 2 outcomes (walking velocity, physical and psychological contact areas, psychological radius and angle of vision) serve as inputs to the model.
The DEM models particles distinctly as a group of rigid bodies, and the behaviour of each particle is governed by the translational and rotational equations of motion, F = m a and T = I α, where F reflects the internal and external forces, T is the torque, m is the mass of a particle, I is the moment of inertia of a particle, a is the acceleration of a particle and α is the angular acceleration of a particle. The motion of a pedestrian element in contact with neighbouring elements is described in accordance with Newton's laws of motion; each pedestrian element is governed by translational and rotational equations of motion. The combination of an autonomous driving force, repulsive forces and the self-evasive force describes the motion of a pedestrian. The autonomous driving force reflects the motivation of a pedestrian to move at a prescribed walking velocity and orientation, the repulsive forces reflect inter-element (contact) forces due to collisions, and the self-evasive force treats collision avoidance and pedestrian alignment. Hence, the motion of pedestrian i in CBS-DE is written as Formula (1):

$$m_{hi}\,\dot{v}_{hi} = F_{in\,hi} + F_{aw\,hi} + F_{se\,hi}, \qquad I_{hi}\,\dot{\omega}_{hi} = T_{hi} \qquad (1)$$

where m_hi and I_hi are the mass and moment of inertia of pedestrian i, respectively; v_hi is the velocity of pedestrian i; ω_hi is the angular velocity of pedestrian i; the dot indicates a time derivative; F_in hi is the inter-element (contact) force acting on pedestrian i; F_aw hi is the autonomous walking force of pedestrian i; F_se hi is the self-evasive force acting on pedestrian i; and T_hi is the torque acting on pedestrian i. The behaviour of the pedestrian element is computed explicitly by numerical integration of these equations.
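A minimal explicit-integration sketch of Formula (1) for one pedestrian element; the force models (contact, autonomous walking, self-evasive) are placeholders supplied by the caller, since the specific force laws are not reproduced here:

```python
import numpy as np

def dem_step(m, I, v, omega, F_in, F_aw, F_se, T, dt):
    """One explicit Euler step of Formula (1):
    m dv/dt = F_in + F_aw + F_se and I domega/dt = T."""
    a = (F_in + F_aw + F_se) / m
    alpha = T / I
    return v + a * dt, omega + alpha * dt

# Example: a 70 kg pedestrian element driven only by the autonomous
# walking force (all values illustrative).
v, w = dem_step(m=70.0, I=1.5, v=np.array([1.2, 0.0]), omega=0.0,
                F_in=np.zeros(2), F_aw=np.array([10.0, 0.0]),
                F_se=np.zeros(2), T=0.0, dt=0.01)
```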
Phase 4: Validation.
The validation work is executed to confirm the reliability of the model by three methods. First, statistical data analysis for analytical method validation, such as means, standard deviations, confidence intervals, linear regression and t-tests, will be performed. Second, the dynamics of crowd behaviour is reproduced by comparing a simulated walking-pedestrian scenario with the real-life scenario of walking pedestrians in KLSS. Two procedures are performed in this validation work. The first procedure is looking for a preferred scenario in the footage, which is manually examined; the criteria for the preferred scenario are contra-flow and the involvement of multiple groups of pedestrians walking in KLSS. The chosen scenario is then saved as sequenced images in PNG format. In the second procedure, a simulation of the chosen scenario is performed by employing the established model. The third step in the validation phase is visualization, which involves visualizing the simulated crowd dynamics of pedestrians at the RTT in the Autodesk MAYA software. This visualization is important in order to reveal any deficiencies or violations in the calculations performed.
MR Prediction of Liver Function and Pathology Using Gd-EOB-DTPA: Effect of Liver Volume Consideration
Purpose. To evaluate whether the diagnostic performance of Gd-EOB-DTPA-enhanced MRI in evaluating liver function and pathology is improved by considering liver volume (LV). Methods. This retrospective study included 104 patients who underwent Gd-EOB-DTPA-enhanced MRI before liver surgery. For each patient, using the precontrast and hepatobiliary phase images, we calculated the increase rate of the liver-to-spleen signal intensity ratio (LSR), that is, the "ΔLSR," and the increase rate of the liver-to-muscle signal intensity ratio (LMR), that is, the "ΔLMR." ΔLSR × LV and ΔLMR × LV were also calculated. The correlation of each MR parameter with liver function data or liver pathology was assessed. The correlation coefficients were compared between ΔLSR (ΔLMR) and ΔLSR (ΔLMR) × LV. Results. The correlation coefficient between ΔLSR (ΔLMR) × LV and cholinesterase was significantly higher than that between ΔLSR (ΔLMR) and cholinesterase. The correlation coefficient between ΔLSR (ΔLMR) × LV and the degree of fibrosis or necroinflammatory activity was significantly lower than that between ΔLSR (ΔLMR) and the degree of fibrosis or necroinflammatory activity. Conclusion. The inclusion of liver volume may improve Gd-EOB-DTPA-based predictions of liver function, but not predictions of liver pathology.
Introduction
Gadolinium ethoxybenzyl diethylenetriamine penta-acetic acid (Gd-EOB-DTPA) is a liver-specific agent, and it is widely used to improve both the detection rate of focal liver lesions and the characterization of liver tumors on magnetic resonance imaging (MRI) [1,2]. As Gd-EOB-DTPA is taken up specifically by hepatocytes, the measurement of the uptake of Gd-EOB-DTPA in the liver can be used to evaluate liver function [3][4][5]. A correlation between the uptake of Gd-EOB-DTPA and pathological liver fibrosis has also been reported [6,7]. That is, the signal intensity itself or the signal intensity change in the hepatobiliary phase decreases as the liver function or fibrosis worsens. In these previous studies, only the degree of Gd-EOB-DTPA uptake on a single slice or several slices was considered as an indicator of liver function or fibrosis. However, the liver volume (LV) is quite different among individuals. We hypothesized that the liver function or fibrosis could be more precisely estimated by using a parameter including the LV, which would represent the whole liver function.
The purpose of the present study was to evaluate whether the diagnostic performance of Gd-EOB-DTPA-enhanced MRI in evaluating liver function or fibrosis is improved by considering the LV.
Patients.
This study was approved by the institutional review board of our hospital. The requirements for informed consent were waived for this retrospective study. Referring to the medical data recorded at our hospital, we enrolled 129 consecutive patients who underwent Gd-EOB-DTPAenhanced MRI and hepatic resection for a liver tumor or liver transplantation between June 2010 and May 2013. Of them, twelve, eight, and five patients were excluded due to a history of splenectomy, a history of right or left lobectomy, and poor image quality derived from respiratory artifacts, respectively. Finally, 104 patients were enrolled in this study. The 104 patients included 69 men and 35 women (age range, 32-86 years; mean age, 64.5 years). The hepatitis C virus antibody was present in 45 cases, the hepatitis B surface antigen in 17 cases, alcoholic hepatitis in five cases, nonalcoholic steatohepatitis in five cases, primary biliary cirrhosis in two cases, autoimmune hepatitis in one case, and primary sclerosing cholangitis in one case. The grading of liver dysfunction was preoperatively evaluated based on the Child-Pugh classification, and 86, seven, and 11 patients were categorized into Grades A, B, and C, respectively. The grading of liver function or severity of liver cirrhosis in patients with chronic liver disease was evaluated according to the Child-Pugh classification [8]. The classification is based on the following five factors, graded on a scale from 1 to 3: hepatic encephalopathy, ascites, total bilirubin level, albumin level, and prothrombin time. The liver function or severity of cirrhosis was classed into three groups according to the sum of the scores: Grade A, from 5 to 6; Grade B, from 7 to 9; Grade C, from 10 to 15. The laboratory data were obtained at least within one month before surgery. For each patient, the platelet count (Plt), albumin (Alb), total bilirubin (T-bil), lactate dehydrogenase (LDH), cholinesterase (ChE), Child-Pugh score, and model for end-stage liver disease (MELD) score were recorded. An MR examination was performed at least 3 months before the surgery. No treatment was performed between the MR examination and the surgery for any of the patients.
Liver Volume Measurement.
For the LV measurement, the full series of MR images in the hepatobiliary phase was prepared for each patient. The LV of each patient was semiautomatically measured using the "liver analysis" function of the volume analyzer SYNAPSE VINCENT (Fuji Film Medical, Tokyo). Liver tumor was not considered as part of the LV.
MR Image Analysis.
The signal intensity of axial eTHRIVE on Gd-EOB-DTPA-enhanced MRI was measured on the same DICOM viewer. First, two abdominal radiologists with six and 19 years of experience together selected three slices without significant artifacts. On the same slices they measured the signal intensities by placing the largest possible region of interest (ROI) on the liver parenchyma, spleen, and erector spinae muscle, avoiding vessels, tumors, and artifacts in a consensus manner ( Figure 1). For the liver parenchyma, two round or oval ROIs were placed: one in the right lobe and the other in the left. The averages of the six signal intensities of the liver parenchyma and the three signal intensities of the spleen or the erector spinae muscle were calculated.
Based on these average values, the liver-to-spleen ratio (LSR) and the liver-to-muscle ratio (LMR) before and after the administration of Gd-EOB-DTPA were recorded for each patient. ROIs of the same size and shape were placed at the same positions in the images before and after the administration of Gd-EOB-DTPA. As indicators of liver function, the increase rates of the LSR (LMR) in the hepatobiliary phase compared with the precontrast image were calculated using the following equation: (LSR (LMR) in the hepatobiliary phase − LSR (LMR) on the precontrast image)/LSR (LMR) on the precontrast image [3,4]. We named the increase rate of the LSR (LMR) "ΔLSR (ΔLMR)". We also set the parameter "ΔLSR (ΔLMR) × LV" (unit: liter) for the analysis.
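A minimal sketch of this computation from mean ROI values (the function and argument names are ours, for illustration):

```python
def delta_ratio(liver_pre, ref_pre, liver_hb, ref_hb):
    """Increase rate of the liver-to-reference signal ratio (dLSR or dLMR).

    Inputs are mean ROI signal intensities; the reference organ is the
    spleen (for the LSR) or the erector spinae muscle (for the LMR)."""
    r_pre = liver_pre / ref_pre   # ratio on the precontrast image
    r_hb = liver_hb / ref_hb      # ratio in the hepatobiliary phase
    return (r_hb - r_pre) / r_pre

# Volume-weighted parameter used in the study (LV in liters):
# delta_lsr_lv = delta_ratio(liver_pre, spleen_pre, liver_hb, spleen_hb) * lv
```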
Pathologic Analysis.
One pathologist with 4 years of experience who was unaware of the imaging data reviewed the hematoxylin-eosin-stained glass slides of each patient and referred to the official pathological report to determine the histological findings of the liver parenchyma. When the results were discordant, another experienced pathologist with 17 years of experience was consulted. The degree of liver fibrosis was classified into five groups according to the New Inuyama Classification: F0 (no fibrosis), F1 (fibrous portal expansion), F2 (bridging fibrosis), F3 (bridging fibrosis with architectural distortion), and F4 (liver cirrhosis) [9]. Similarly, the grade of necroinflammatory activity was scored as A0 (no necroinflammatory reaction), A1 (mild), A2 (moderate), and A3 (severe) [9].
2.6. Statistical Analysis. We used a linear regression analysis to examine the correlations between ΔLSR (ΔLMR) or ΔLSR (ΔLMR) × LV and the laboratory data corresponding to liver function (including Plt, Alb, T-bil, LDH, and ChE). The correlations of these four parameters with the Child-Pugh score, MELD score, the degree of liver fibrosis, and the grade of necroinflammatory activity were each examined using Spearman's rank correlation test. We also compared the correlation coefficients between ΔLSR and ΔLSR × LV and between ΔLMR and ΔLMR × LV. The statistical significance was evaluated using the following method: when the dependence of two variables on a single shared variable was observed, we calculated the correlation coefficients and tested the significance of the difference between the two coefficients by means of a modified t-test, the number of degrees of freedom being n − 3 (n = sample number), using the formula given in [10]. For all tests, a p value of <0.05 indicated a significant difference.
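The exact formula from [10] is not reproduced in the text. Williams' test for comparing two dependent correlations that share one variable is a standard modified t-test with n − 3 degrees of freedom, and is shown here only as a plausible stand-in; the example numbers are hypothetical:

```python
import math

def williams_t(r12, r13, r23, n):
    """Williams' t (df = n - 3) comparing dependent correlations r12 vs r13
    that share variable 1; r23 is the correlation of the two predictors."""
    detR = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    denom = 2 * detR * (n - 1) / (n - 3) + rbar**2 * (1 - r23)**3
    return (r12 - r13) * math.sqrt((n - 1) * (1 + r23) / denom)

# Hypothetical example: corr(ChE, dLSR*LV) = 0.60 vs corr(ChE, dLSR) = 0.45,
# with corr(dLSR*LV, dLSR) = 0.80, in n = 104 patients.
t = williams_t(0.60, 0.45, 0.80, 104)
```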
Results

Table 1 shows the correlation coefficients between ΔLSR or ΔLSR × LV and the laboratory data or pathologic factors. The correlation coefficient between ΔLSR × LV and ChE was significantly higher than that between ΔLSR and ChE (p < 0.01). The correlation coefficients between ΔLSR × LV and Plt, Alb, LDH, Child-Pugh score, or MELD score tended to be higher than those between ΔLSR and Plt, Alb, LDH, Child-Pugh score, or MELD score. However, the correlation coefficient between ΔLSR × LV and the degree of fibrosis or necroinflammatory activity was significantly lower than that between ΔLSR and the degree of fibrosis or necroinflammatory activity (p < 0.01). The correlation coefficient between ΔLSR × LV and T-bil tended to be lower than that between ΔLSR and T-bil. Table 2 shows the correlation coefficients between ΔLMR or ΔLMR × LV and the laboratory data or pathologic factors. The correlation coefficient between ΔLMR × LV and ChE was significantly higher than that between ΔLMR and ChE (p < 0.01) (Figure 2). The correlation coefficients between ΔLMR × LV and Plt, Alb, LDH, Child-Pugh score, or MELD score tended to be higher than those between ΔLMR and Plt, Alb, LDH, Child-Pugh score, or MELD score. However, the correlation coefficient between ΔLMR × LV and the degree of fibrosis or necroinflammatory activity was significantly lower than that between ΔLMR and the degree of fibrosis (p < 0.05) or necroinflammatory activity (p < 0.01). The correlation coefficient between ΔLMR × LV and T-bil tended to be lower than that between ΔLMR and T-bil.
Discussion
In our study using 3T MRI, significant correlations between the uptake of Gd-EOB-DTPA and liver function, fibrosis, and necroinflammatory activity were obtained, as reported previously [4-7]. In light of this result, we feel that our radiological assessment is valid for evaluating liver function, fibrosis, and necroinflammatory activity. In addition, the correlation coefficient between ΔLSR (ΔLMR) × LV and ChE was significantly higher than that between ΔLSR (ΔLMR) and ChE. The correlation coefficients between ΔLSR (ΔLMR) × LV and Plt, Alb, LDH, Child-Pugh score, or MELD score tended to be higher than those between ΔLSR (ΔLMR) and the same parameters, suggesting that we should consider liver volume in addition to the uptake of Gd-EOB-DTPA when setting the MR parameters. Recently, some articles have reported that the relationship between the uptake of Gd-EOB-DTPA and the indocyanine green test can be improved by considering liver volume [11-13], which supports our result and hypothesis.
In general, liver function data are evaluated with a blood test, which includes a "whole liver" element. Therefore, the consideration of liver volume in the MR parameter could enable the correlation with liver function to be more intensive. We found in the present study that the correlation coefficient between ΔLSR (LMR) × LV and T-bil tended to be lower than that between ΔLSR (LMR) and T-bil, although the difference was only slight. T-bil includes both unconjugated and conjugated bilirubin, and the T-bil value can be affected by a number of factors including prehepatic or posthepatic disorders, hemolysis, and constitutional predisposition. Therefore, considering "liver volume" in the MR parameter might not be effective for the correlation with T-bil.
We also found that the correlation coefficients between ΔLSR (ΔLMR) × LV and the degree of fibrosis or necroinflammatory activity were significantly lower than those between ΔLSR (ΔLMR) and the degree of fibrosis or necroinflammatory activity. That is, the consideration of liver volume in addition to the uptake of Gd-EOB-DTPA when setting the MR parameters was not useful. Although this result was beyond the scope of our hypothesis, we propose two plausible reasons why it was obtained. One is that fibrosis and necroinflammatory activity represent the local state of the liver parenchyma; therefore, the consideration of "liver volume" might worsen the correlation with liver pathology. Another possible reason is that the LV does not always decrease gradually as the degree of fibrosis progresses. A report on LV change in patients with hepatic fibrosis is available [14]: the LV tends to increase with the severity of fibrosis, since hepatic cells account for 70%-80% of the liver parenchyma, and then to decrease. The presumed reason for the hepatic volume increase would be the ballooning of hepatocytes along with the increased fibrotic component.
We obtained a similar result; that is, LV tends to increase with the severity of fibrosis from F0 to F2 but decrease at F3 to F4, which would affect the rank correlation between ΔLSR (LMR) × LV and the degree of fibrosis. It was reported that the LV tends to increase with the aggravation of inflammatory activity (the increase of necroinflammatory activity) [14]. In our study we obtained a similar result; that is, the LV tends to increase as the degree of necroinflammatory activity advances from A1 to A3. Therefore, the LV consideration would have the opposite effect on the correlation with the degree of necroinflammatory activity. We thus suggest that "liver volume" should not be considered among the MR parameters when evaluating liver pathology using Gd-EOB-DTPA-enhanced MRI.
Our study had several limitations. First, the trial was a study with a limited patient population, and the number of cases with each degree of fibrosis and necroinflammatory activity was not uniform. Second, we used two organs, the spleen and the erector spinae muscle, as signal intensity references for the liver parenchyma. As there may be persistent contrast enhancement in the spleen and muscle, these organs might limit the analyses of the LSR and LMR, as might motion artifacts and partial volume effects. Although a T1 map might be preferable for the quantitative analysis of the uptake of Gd-EOB-DTPA, it was difficult to generate such a map with our scanner. Third, we could not evaluate indocyanine green test results as a laboratory datum corresponding to liver function. Although 80 patients underwent this test preoperatively, the Child-Pugh classification for all of them was Grade A; that is, patients with moderate or severe liver dysfunction were not included. We judged that we should not juxtapose the comparison with the ICG test with the comparisons with other liver function parameters in our study, because of the difference in patient population. Finally, tumor volumes of small lesions in the liver were not excluded from the measured LV due to technical difficulty, which may have led to minor overestimation of the LV in some patients.
Conclusion
We have demonstrated that the inclusion of liver volume may improve Gd-EOB-DTPA-based predictions of liver function, but not predictions of liver pathology.
Age Differences in Online Communication: How College Students and Adults Compare in Their Perceptions of Offensive Facebook Posts
With the most recent US Presidential election, civility in online communication has resurfaced as a social issue. A survey of 409 college students and 190 faculty/staff at a liberal arts college in northeastern Pennsylvania used open-ended questions to identify the types of communicative posts that people of different ages have seen and considered offensive on Facebook. Content analysis identified twenty unique themes of online inappropriateness, many of which are similar across age groups but do not appear in previous research. The top four themes for both groups are comments, sex/nudity, political references, and offending visuals. Age differences emerge in the rankings of these four themes and in the identified fifth theme, which is "other social issues" among college students and foul language for adults. Findings also indicate that students were statistically more likely than adults to consider posts involving traditional social issues (racism, sexism, LGBT issues, and alcohol/drugs) or aggression to be offensive, and adults were more likely to consider foul language or the discussion of politics or religion to be offensive. Symbolic interaction theory is used to link perceptions of offensive posts to judgments of others, and suggestions for further research are discussed.
Literature Review
Younger and older people exhibit different behavioral norms in areas such as alcohol consumption, sexuality, nudity, and language, both on and offline. For example, many college students believe that college is a place to party and to drink alcohol (Lo, 2000; Marciszewski, 2006), and students tend to think that visible participation in these behaviors is necessary to be socially accepted, even if a person does not actually participate in the behavior. Consequently, young adults may be more likely than more mature adults to communicate these behaviors online in order to gain acceptance (Birmbaum, 2013; Brechwald & Prinstein, 2011; Ehrenreich et al., 2014; Lo, 2000; Shinew & Parry, 2005). There is some evidence for this. Peluchette and Karl (2007) studied 200 Facebook profiles and found that 42% had comments about alcohol and 53% had photos about alcohol use. The same study looked at what people posted on each other's profiles and found that 50% of the posts involved partying. More mature adults may be less likely to feel peer pressure to drink alcohol or to post their experiences, and, therefore, may also be less inclined to feel that such behavior should be communicated online.
The view of the appropriateness of sexuality and nudity in public may also be age related, on and offline (Fix, 2016; Hestroni, 2007; Mayo, 2013; Potts & Belden, 2009). Even though sexual behavior has become more visible, there is still controversy regarding where appropriate limits lie, and this is especially evident based on age. Younger Facebook users may be more tolerant of sex and nudity in public and online than those who are older, and young adults, like college students, may communicate this by posting sexual references or nudity because they think that this is what their friends are doing or that these behaviors are expected to be "cool" (Ehrenreich et al., 2014; Goodmon et al., 2014; Peluchette & Karl, 2007). Other behaviors, such as swearing, comments on different social issues, and how people present themselves online, may also be age differentiated, as people of different ages may have different views on issues or different norms of self-presentation; however, this has not been examined in an online environment (Chirico, 2014). Brandtzaeg and colleagues (2010) found that participants viewed tragic online disclosures as too much sharing and, therefore, as inappropriate. Roche and colleagues (2015) furthered this study by asking 150 college students to react to the level of appropriateness of mock Facebook feeds created after an informal poll of 20 college students.
These feeds focused on romantic relationship drama, negative emotion, passive aggression, and frequent status updates. The findings revealed that posts involving relationship drama were perceived as the most inappropriate, followed by passive-aggressive posts. These findings support those of Brandtzaeg and colleagues (2010) regarding the self-disclosure norm violation of sharing too much, as personal information sharing in public is dubbed "too much information" or "TMI". However, according to their findings, negative emotion posts, frequent status updates and neutral posts were all deemed relatively appropriate.
While an important step, these studies are limited in a few ways. First, they all involve college students' perceptions. However, many college students are Facebook friends with other individuals, especially family members and co-workers, and therefore need to communicate with people in a variety of different networks. Because the norms of college behavior differ from adult norms in many ways, perceptions of what is appropriate to communicate on Facebook, and how, may differ as well; but this is unstudied. Second, the methods of previous studies generally involve hypothetical Facebook walls or posts. Both Bazarova (2012) and Roche and colleagues (2015) use quantitative analyses of student reactions to hypothetical, researcher-created Facebook posts or feeds. Therefore, the type of topic covered was decided by the researcher. Roche and colleagues (2015) did pick their topics after an informal poll of 20 students; but this approach, while an improvement over purely researcher-driven scenarios, is still limited. Twenty students is a very small sample and may be biased. A larger sample of students may identify new topics considered to be inappropriate for Facebook, but this cannot be examined when researchers select the posts to be studied. Wolfer (2016), using focus groups who did not respond to preconceived scenarios, built on Roche's and Bazarova's studies by taking a more qualitative approach to determining what college students identified as inappropriate online communication.
Wolfer found that college students also felt that negative comments about social issues, such as race and gay marriage, or communications that were purposely embarrassing or mean, were inappropriate on Facebook. While Wolfer's study did build on these previous ones by being qualitative and by identifying additional themes of inappropriateness, it is vulnerable to the same limitation as Bazarova's (2012) and Roche and colleagues' (2015) of only considering college students; and, additionally, by using focus groups, it is limited to a small sample of only 46 college students.
The desire to use Facebook to stay connected with friends and family, to foster interpersonal relationships in many different contexts (family, friends, work), and to present a positive self-image to others, all in the context of the diverse social networks common on Facebook, points to the importance of understanding what types of Facebook posts users of different ages view as inappropriate. From a symbolic interactionist framework, people rely on the symbolic meanings of their interaction with others to learn the appropriate behavior for their group, and these interpretations are situationally dependent (Blumer, 1969; Thomas, 1931). People will act towards others based on the identified situation and the corresponding meanings that they attribute to others' actions and communication in that situation (Blumer, 1969; Thomas, 1931).
When consensus in situations is high, the meaning of the symbol communicated is clear; when consensus is low, the meaning becomes ambiguous and communication becomes problematic (Thomas, 1931). Given the diverse age networks on Facebook and the ways people of different ages use Facebook, people's attributed meanings in online communication may also differ. This is especially relevant because researchers have found that sharing even a small amount of negatively perceived information leads to a negative view of the individual doing the sharing (Goodmon et al., 2014; Steeves & Regan, 2014). However, as mentioned previously, studies of Facebook inappropriateness have focused on college students, even though adults use Facebook not only to follow other people's lives, but also to keep tabs on their children, who may be posting behaviors to impress their peers but which are contrary to the values adults tried to instill (Brandtzaeg et al., 2010; Steeves & Regan, 2014). Furthermore, adult Facebook users may be co-workers or people who may serve as professional social networks for younger Facebook users; therefore, identifying inappropriate posts may also have long-term benefits for younger users. This study examines what types of posts people see on Facebook and consider to be inappropriate based on age. Specifically, this study has two research purposes: 1) to identify the top five posts identified by college students and by adults as inappropriate for Facebook; and 2) to see whether there are any statistically significant age differences in perceptions of inappropriateness overall.
Methods and Sample
An online survey via a Survey Monkey link was administered to a population of undergraduate students (n=3,713), faculty (n=306), and staff (n=610) at a small liberal arts college in northeastern Pennsylvania regarding their Facebook experiences. The student response rate was 14.1% (n=572) and the faculty/staff response rate was 20.8% (n=190), which is less than desirable. Like the university from which the data was collected, the majority of both the student and the adult samples is female. Three quarters of the adult sample has at least a four-year college degree (75.8%).
More than three quarters of the student respondents (78.7%) and all but one of the adult respondents had a Facebook account at the time of the study. Similar proportions of students and adults report being on Facebook multiple times a day (54.4% of students and 53.2% of adults).
Of the 572 responding students, 409 of them listed at least one inappropriate issue they saw on Facebook, while all of the 190 faculty / staff made some type of comment describing this. Less than 10% of both students and adults (1.2% of students and 7.9% of adults) claim that they have never seen any offensive Facebook posts.
Design
Surveys and respondents were tracked separately by unique identifiers, which enabled the researcher to know which students, faculty, and staff responded to the study, but did not allow the researcher to link respondents to individual survey responses. Even though college students are young adults, for ease of writing they are referred to as either "younger Facebook users" or "students," while the faculty and staff are collectively referred to as "adults."
This research utilizes open coding, in which descriptive labels were written for every reference to an inappropriate post seen on Facebook. First, the author read through all responses and color coded like statements into themes, simultaneously creating a codebook. Individual respondents received a "1" if a comment related to a particular theme in the codebook and a "0" if it did not.
Sometimes two or more comments received only one code. For example, when listing the top three offensive posts seen, if an individual put "comments about gays" as one comment and "comments about transgender individuals" as a second comment, both apply to lesbian/gay/bisexual/transgender (LGBT) individuals, so even though there are two comments, they can receive only one code of "1" for the theme "LGBT issues." Similarly, some comments may have received more than one code. For example, a response of "racist comments against President Obama" would receive both a "1" for the theme of "racism" and a "1" for the theme of "politics." The full list of themes appears in Table 1.
A second independent evaluator coded the same data using the coding themes developed. Interrater reliability was established via Cohen's kappa since the themes were categorical in nature.
Originally, 10 of the 20 items had a Cohen's kappa of .8 or higher, indicating very strong interrater reliability (McHugh, 2012; Viera & Garrett, 2005). For the remaining 10 categories, the raters discussed the individual areas of discrepancy for each respondent until agreement in coding was reached, and the master data set was changed accordingly. The mutually decided themes have a Cohen's kappa of 1 since they were discussed until agreement was reached. The respective Cohen's kappa for each theme also appears in Table 1.
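To make the reliability computation concrete, the following is a minimal sketch of how Cohen's kappa can be computed for one binary theme coded by two raters; it uses scikit-learn, and the rater vectors are invented for illustration rather than taken from the study's data.

```python
# A minimal sketch of the interrater reliability check, assuming two raters
# each assigned a binary code (1 = theme present, 0 = absent) per respondent.
# The example vectors are hypothetical, not the study's actual codes.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # .80 here; >= .8 reads as very strong
```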
Research Question 1: Top Five Inappropriate Themes
For the most part, students and adults do not differ in their identification of the top five themes they have seen on Facebook and deem inappropriate, even though they may differ in the relative rankings of the five. Most of the top five themes are shared by both groups, even though the number-one theme for the two groups differs. However, students were more likely to mention seeing inappropriate presentations of various social issues (beyond those which have their own category) in their top five than the adult sample, and the adults were more likely to see and consider foul language on Facebook to be inappropriate.
There are other age differences as well further down the rankings. For example, religion, general comments of hate, animal cruelty, private issues made public, and posts that the reader interprets as ignorant or lying appear in the top 10 for adults, but not for college students. Similarly, violence appears in the top 10 for college students, but not for adults. This suggests that while some topics are considered so inappropriate for Facebook that students and adults agree on them, differing age norms do exist.
Research Question 2: Statistically Significant Age Differences
While respondents may, for the most part, agree on the top five offensive themes, this does not necessarily mean that they agree to the same degree. Chi-square analysis reveals that college students identified a greater number of overall themes witnessed and deemed inappropriate than the older cohort. Of the 11 themes where statistically significant age differences emerged, college students were more likely to identify seven of them as inappropriate. For example, college students are more likely than adults to consider Facebook posts relating to specific traditional social issues to be inappropriate. Students were statistically more likely than older adults to see and be offended by posts about racism (35.9% compared to 21.1%, p<.01, Table 2), sexism (13.8% vs. 5.9%, p<.01), LGBT issues (10.3% vs. 4.2%, p<.05), alcohol/drugs (5.1% vs. 1.1%, p<.05), and "other social issues" (16.4% vs. 9.5%, p<.05). College students are also more likely than adults to see and be offended by posts indicating some type of aggression or violence.
College students were more than twice as likely as adults to see posts about aggression toward children that they find offensive (16.1% compared to 6.8%, p<.01). Likewise, 1 in 10 college students have seen some type of violent post (10.5%), whereas fewer than half as many adults (3.7%, p<.01) claim the same.
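As an illustration of the kind of chi-square comparison reported here, the sketch below tests the racism theme, assuming the reported percentages apply to the full samples (572 students, 190 adults); the cell counts are back-calculated approximations, not the study's raw data.

```python
# A sketch of the chi-square comparison for one theme (racism), assuming the
# reported percentages apply to the full samples (students n=572, adults n=190).
# Cell counts are back-calculated approximations, not the study's raw data.
from scipy.stats import chi2_contingency

students_yes = round(0.359 * 572)   # students who saw offensive racism posts
adults_yes = round(0.211 * 190)     # adults who saw the same
table = [
    [students_yes, 572 - students_yes],
    [adults_yes, 190 - adults_yes],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p < .01, consistent with the text
```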
On the other hand, adult Facebook users were more likely to see posts relating to other types of controversy, such as political discourse or foul language, and to define them as inappropriate.
Almost one quarter of adults (26.3%), compared to less than 20% of college students (18.3%, p<.05), saw political posts that they claimed were inappropriate. Furthermore, adults were nearly twice as likely (21.6%) as students (11.5%, p<.01) to see and consider foul language on Facebook to be inappropriate. Adults were also more likely to see and define posts involving religion (11.1% compared to 5.1%, p<.01) or posts that were "rants" (4.2% compared to 1.5%, p<.05) as inappropriate.
Discussion
The concern for civility and appropriate online communication is not new (Calhoon, 2000; Thorne, 2015). Given Facebook's popularity, the diverse social networks on Facebook, and the variability in normative behavior across groups, different groups of people are likely to consider different behaviors on Facebook to be offensive. What these behaviors are, though, is unclear and generally understudied. This is a significant topic because, first, negative interpretations of Facebook posts reflect poorly on those who share them (Goodmon et al., 2014; Steeves & Regan, 2014). Second, contrary to the obvious age differences in face-to-face behavior in various areas (Fix, 2016; Mayo, 2013; Potts & Belden, 2009), several of the top themes identified (race, sexuality/nudity, politics, and offensive visuals) are common to both age groups. This similarity is remarkably consistent considering that these were self-identified topics that were not prompted by the researcher.
Nevertheless, similar does not mean equal. Age differences did emerge. For example, college students were more offended by communication about additional ("other") social issues than were adults, and adults were more likely than college students to consider foul language on Facebook to be inappropriate; the latter is supported by research on face-to-face interaction (Chirico, 2014). Statistically, the younger cohort, possibly contrary to expectations, also identified a greater number of themes witnessed and deemed inappropriate than did the older cohort. Students were more likely than older adults to identify posts about racism, sexism, LGBT issues, and alcohol/drugs as inappropriate.
These differences are not completely unexpected, as they mirror the types of issues frequently discussed and analyzed on college campuses. If, as Shoenberger and Tandoc (2014) argue, college students use Facebook to explore their views and try to influence others, then it follows that their posts may reflect material that they are encountering in their classes. Research here and by Wolfer (2016), however, suggests that using Facebook to test one's views or influence others about issues learned on campus may not be well received by others, especially peers, since these types of communicative posts were deemed inappropriate on Facebook.
On the other hand, adult Facebook users were statistically more likely to see and define posts regarding political discourse, foul language, religious views, or posts that were "rants" as inappropriate. Age differences involving the use of foul language and rants suggest that adults have different norms for self-presentation in online communication than do college students (Chirico, 2014). While no research has examined people's views of rants on- or off-line, it is feasible to link rants to inappropriate self-presentation, given that Facebook, especially among older individuals, is generally used for more entertaining purposes (Leung, 2013). Older users may therefore be particularly likely to see politics as controversial and, as a result, a violation of Facebook's "real" purpose of entertainment (Leung, 2013). Future research can proceed in several directions. First, researchers could build on earlier survey-based studies of inappropriate sharing (Goodmon et al., 2014; Oldmeadow et al., 2013; Steeves & Regan, 2014; Roche et al., 2015), expanding the topics of these posts given the findings of this study. This can be followed by asking respondents whether they have actually seen any of the posts described. Second, future research might want to explore why people of different ages find these types of posts inappropriate. This will give more insight into the dynamics between Facebook motivations (e.g., entertainment and social connectedness) and the emerging values involved in Facebook use. Last, this sample and the population from which it was drawn are rather homogeneous in terms of race and gender, and the survey has a relatively low response rate. Using a qualitative approach with a different population may identify other themes or suggest more group differences in themes that Facebook users deem to be inappropriate.
Quantitative Characterization of Pore Connectivity and Movable Fluid Distribution of Tight Sandstones: A Case Study of the Upper Triassic Chang 7 Member, Yanchang Formation in Ordos Basin, China
The pore connectivity and distribution of movable fluids, which determine fluid movability and recoverable reserves, are critical for enhancing oil/gas recovery in tight sandstone reservoirs. In this paper, multiple techniques, including high-pressure mercury intrusion porosimetry (MIP), nuclear magnetic resonance (NMR), scanning electron microscopy (SEM), and microcomputer tomography scanning (micro-CT), were used for the quantitative characterization of pore structure, pore connectivity, and movable fluid distribution. First, sample porosity and permeability were obtained. Pore morphology and the 3D distribution of the pore structures were analyzed using SEM and micro-CT, respectively. The pore-size distribution (PSD) from NMR was generally broader than that from MIP because MIP characterizes only the connected pore volume, whereas NMR shows the total pore volume. Therefore, an attempt was made to calculate the pore connectivity percentages of pores with different radii (<50 nm, 50 nm-0.1 μm, and 0.1 μm-1 μm) using the difference between the PSDs obtained from MIP and NMR. It was found that small pores (r < 0.05 μm) contributed 5.02%-18.00% to connectivity, less than large pores (r > 0.05 μm), which contributed 36.60%-92.00%, although the small pores had greater pore volumes. In addition, a new parameter, effective movable fluid saturation, was proposed based on the initial movable fluid saturation from NMR and the pore connectivity percentage from MIP and NMR. The results demonstrated that the initial movable fluid saturation decreased by 14.16% on average when disconnected pores were excluded. It was concluded that effective movable fluid saturation has a higher accuracy in evaluating the recovery of tight sandstone reservoirs.
Introduction
In recent years, unconventional fossil resources such as shale gas, coal-bed methane, tight gas, and tight oil have achieved remarkable success in North America and China [1-3]. The success of unconventional oil and gas exploration and development has resulted in fast growth in oil and gas production, based on the development of new techniques such as horizontal well drilling and hydraulic fracturing.
Unconventional oil and gas reservoirs, including shale gas, tight gas, and tight oil reservoirs, are characterized by low permeability and low porosity [3,4]. The complex system of pores in tight sandstone makes it difficult to characterize pore structure and connectivity. Many researchers have tried to enhance oil recovery from these tight-rock reservoirs [5,6]. These investigations have included quantitative characterization of pore structure using various techniques [7][8][9], connectivity analysis of tiny pores [10][11][12][13][14], and predictions of migration of movable fluids and oil/gas production [15,16]. Tight sandstone reservoirs have extremely complex pore systems and low recovery. To obtain a better understanding of the characteristics of these reservoirs, multiple techniques have been used to characterize the pore systems, including high-resolution scanning electron microscopy (SEM), low-temperature liquid nitrogen adsorption, high-pressure mercury intrusion porosimetry (MIP), computer tomography scanning (CT scanning), and nuclear magnetic resonance (NMR). These techniques have been combined to characterize pore structures of tight sandstone [7,17,18]. Among these techniques, MIP can reveal the petrophysical properties and pore-size distribution (PSD), CT scanning can illustrate the 3D distribution of pores, and NMR can describe the PSD and moveable fluids [13,19]. Each technique has its own principles and limitations; therefore, researchers have generally combined these different techniques to accurately characterize PSD [7,[20][21][22].
Movable fluid saturation and the connectivity of the pore system in a tight sandstone reservoir are critical for enhancing oil recovery. The connectivity of nanoscale and micron-scale pore systems has been discussed extensively [23-25]. Previous investigators analyzed connectivity using spontaneous imbibition (SI) and saturation tracer diffusion behaviors [5,10,11,26]. However, the mercury intrusion method and spontaneous imbibition can only characterize interconnectivity, because external fluids cannot invade isolated pores. On the other hand, NMR is a radiation method and is sensitive to hydrogen-bearing fluids within samples [13]. Therefore, NMR methods can probe the total pore space in the sample using transverse relaxation times (T2) and other NMR signals [27].
The previous literature has shown that SEM observations, mercury intrusion, CT scanning, and NMR can predict pore structures in unconventional oil and gas reservoirs and explain weak connectivity and low recoveries to some extent [11,24,28]. NMR can quantify movable fluid saturation, and MIP can describe connectivity. However, effective fluid movability analysis for tight sandstone reservoirs has rarely been reported. This study has investigated pore-structure characterization methods and effective fluid movability for tight sandstone reservoirs. For this work, several tight sandstone samples were collected, and several techniques, including MIP, SEM, CT scanning, and NMR, were used to infer the pore structure, pore connectivity, and effective fluid recoverability of the tight sandstone reservoir in the Upper Triassic Yanchang Formation Chang 7 Member, Ordos Basin. Together, these analyses provide a better understanding of pore-structure characterization and pore connectivity in tight sandstones, enabling efforts to enhance recovery in these tight reservoirs.
Geological Background
Recently, great breakthroughs in unconventional oil and gas production in the Ordos Basin have been focused on tight sandstone reservoirs. The Ordos Basin was formed on the western side of the North China platform and is the second largest oil-bearing sedimentary basin in China [29]. The study area is located in the midwestern part of the Yishan slope, in the Ordos Basin (Figure 1(a)). The structure of the Chang 7 Member is a westward-dipping monocline, with a dip angle of about 0.5° [30]. Oil and gas production in the Ordos Basin has increased rapidly year by year, and the leading tight sandstone formations (the Chang 7 Member of the Mesozoic Triassic Yanchang Formation) in the Ordos Basin have contributed approximately 20 × 10⁸ tons to the geological reserve [31].
The thickness of the Chang 7 Member in the study area is between 80 and 120 m, and it can be divided into three layers based on lithology. The tight sandstone reservoirs are mainly distributed in the Chang 7₁ and Chang 7₂ layers of the local area, where subaqueous distributary channel and estuarine dam microfacies of the delta front have developed. The Chang 7₃ layer is a major hydrocarbon source in the Mesozoic oil-bearing systems [32,33]. The tight sandstone of the Chang 7 Member, which formed at the most expansive stage of the lacustrine basin, is lithologically complex and dominated by fine sandstone, siltstone, and argillaceous siltstone (Figure 1(b)). The tight sandstone reservoir is adjacent to a widely distributed hydrocarbon source and is characterized by poor physical properties and a high degree of source-reservoir matching [34].
Samples and Preparation.
Samples were taken from the tight sandstone reservoir of the Upper Triassic Yanchang Formation, Chang 7 Member, in the Dingbian area, Ordos Basin. Three typical cylindrical plug samples were drilled from two drilling cores, parallel to the formation (Figure 1(a)), with a diameter of 25 mm. The lithology of the samples was siltstone and fine sandstone. Each sample was divided into several pieces for a series of experiments, including porosity and permeability tests, SEM, MIP, CT scanning, and NMR, to characterize the pore structure, pore connectivity, and movable fluid distribution. Alcohol was used to remove residual asphalt from the samples before the experiments commenced. The samples were dried at 110°C for more than 24 hours until constant weight, placed in a drying dish, and cooled to 25°C to avoid moisture readsorption.
Scanning Electron Microscopy (SEM).
A JSM-6610LV scanning electron microscope was used to observe the micro- and nanopores of the samples at high resolution, with an acceleration voltage of 15 kV, a temperature of 20°C, and a relative humidity of 50%. Minerals around the pores were analyzed using an IE250 energy-dispersive X-ray spectrometer. The freshly polished surface of each sample was observed under the electron microscope.
Porosity and Permeability.
Porosity and permeability analysis of each sample was carried out using the gas-pulse attenuation method on an AP-608 automatic permeability-porosity tester, with a minimum measurable porosity of 0.1% and a minimum permeability of 0.001 mD. First, dry samples were placed in the core clamping device, and helium gas was allowed to expand isothermally into the sample until equilibrium. Porosity was calculated from the grain volume and bulk volume of the sample, and the average value of three tests was used. Gas permeability was measured by the unsteady-state pulse decay technique, and the average value was used. The experimental operating procedure conformed to the SY/T 5336-2006 standard conventional core analysis method used by the Chinese oil and gas industry.
Mercury Intrusion Porosimetry (MIP)
Dry samples were subjected to MIP immediately after the porosity and permeability tests, using an AutoPore IV 9500 mercury intrusion porosimeter (Micromeritics) according to SY/T 5346-2005 standards. Mercury injection pressures ranged from 0.004 to 208 MPa, corresponding to a minimum pore-throat radius of about 0.003 μm. After the pressure gradually recovered to zero, the mercury intrusion and extrusion capillary pressure curves of each sample were obtained. The pore-throat size distribution could then be obtained from the mercury volume at different pressures using the Washburn model [35]:

P_c = −2σ cos θ / r_c, (1)

where P_c is the mercury entry pressure; σ is the interfacial tension (485 mN/m); θ is the contact angle (140°); and r_c is the corresponding pore-throat radius, μm.
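For illustration, the following minimal sketch applies Equation (1) to convert mercury injection pressures to pore-throat radii; it assumes SI unit handling, and the pressure values are simply taken from the instrument range quoted above, not from the measured curves.

```python
# A minimal sketch of the Washburn conversion in Equation (1), assuming SI
# units: r_c = -2 * sigma * cos(theta) / P_c, with sigma = 485 mN/m and
# theta = 140 degrees. The pressure values are illustrative.
import numpy as np

SIGMA = 485e-3           # interfacial tension, N/m
THETA = np.radians(140)  # mercury contact angle, radians

def throat_radius_um(pressure_mpa):
    """Pore-throat radius (um) for a given mercury entry pressure (MPa)."""
    p_pa = np.asarray(pressure_mpa, dtype=float) * 1e6
    r_m = -2 * SIGMA * np.cos(THETA) / p_pa  # cos(140 deg) < 0, so r_m > 0
    return r_m * 1e6

# Pressures spanning the instrument's quoted 0.004-208 MPa range:
print(throat_radius_um([0.004, 2.06, 208]))  # ~[185.8, 0.36, 0.0036] um
```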
Microcomputer Tomography (Micro-CT).
A sample can be analyzed nondestructively using the CT scanning technique, and 3D pore distribution data can be obtained by 3D digital reconstruction based on sample scanning slices [36]. Micro-CT was carried out on a high-resolution 3D X-ray microXCT-400 imager, produced by Xradia, USA, with a maximum theoretical 3D spatial resolution of less than 1 μm. The procedure followed the ACTIS/600 industrial CT operation manual. First, X-rays were focused, by an optical lens, through the sample. X-ray penetration was measured by a specific detection device, and 2D scanning slices of the sample section were generated. Then, the 2D scanning slices were reconstructed into a 3D model using 3D modeling image-processing software. Next, a density distribution reconstruction map was obtained for the sample, enabling visual analysis of the 3D space of the internal pores in the core.
3.2.5. Nuclear Magnetic Resonance (NMR). Nuclear magnetic resonance (NMR) nondestructively analyzes the pore characteristics and fluid distribution of samples, based on correlations between the movement of hydrogen atoms in water or hydrocarbon fluids and the pores in rocks [37]. This experiment was carried out on a MacroMR12-150H-I NMR instrument produced by Niumag, with the maximum number of echoes in the CPMG sequence being 18,000 and the shortest echo time being less than 420 μs. Experiments were carried out in two groups, one in a 100% saturated-water state and the other in a bound-water state. All samples were thoroughly saturated with saline (80 g/l KCl) for several days before testing, until the weight no longer increased, and then the samples in the saturated-water state were tested to obtain the T2 distribution. Next, to achieve the ideal bound-water state, samples were centrifuged at 417 psi, the optimal centrifugal force corresponding to a throat radius of 0.05 μm [38], which is the lower throat-radius limit for a movable fluid. Then, the samples were tested again to obtain the T2 distribution in the bound-water state. The bulk relaxation and diffusion relaxation terms of the fluid are usually negligible for NMR applications in petroleum, and therefore the relaxation time T2 can be approximated as

1/T2 ≈ ρ2 (S/V), (2)

where T2 is the transverse relaxation time, ms; ρ2 is the transverse surface relaxivity, μm/ms; and S/V is the specific surface of a single pore, μm²/μm³. Previous studies have shown, through a large number of statistical experiments, that T2 has a power-function relationship with the PSD [39,40]. The relationship between the specific surface and the pore radius is S/V = Fs/γ for spherical and columnar pore-structure simplifications. With C = ρ2·Fs, this gives

γ = C · T2^n, (3)

where γ is the pore radius, μm; Fs is the single-pore shape factor; and n is the power exponent.
Pore Morphology by SEM.
Pores were identified through SEM observation and classified into three types: residual interparticle pores, intergranular pores, and dissolution pores (Figure 2). Residual interparticle pores were rare because of strong compaction and diagenesis, but intergranular pores in different minerals were relatively numerous. The size of these intergranular pores is generally controlled by the size and shape of the mineral crystals and was typically less than 1 μm (Figures 2(c)-2(f)). Typical clays found in these samples were chlorite (Figure 2(a)), mica (Figure 2(b)), mixed-layer illite and smectite (Figure 2(c)), and kaolinite (Figure 2(e)). Dissolution pores included both intergranular and intragranular dissolution pores and were the most important pore type in the study area (Figures 2(g) and 2(h)). Furthermore, dissolution pores mainly originated from the dissolution of feldspar, and occasionally feldspar leaching was seen (Figure 2(i)). The surface porosity of these samples ranged from 0.97% to 1.83%, with an average of 1.19% (Figure 3). Dissolution pores in feldspar contributed the most, ranging from 0.20% to 0.95% with an average value of 0.52%. Interparticle and intergranular pores contributed a smaller share, from 0.27% to 0.76% with an average of 0.48%, and the average for lithic fragment pores was 0.09%.
Petrophysical Properties and PSD by MIP.
Petrophysical property test results showed that the porosity of the collected samples ranged from 2.98% to 10.90%, with an average of 7.39%, and the permeability ranged from 0.004 to 0.194 mD, with an average of 0.1 mD (Table 1). The Chang 7 reservoir is a typical tight sandstone reservoir because it has low porosity and permeability. Figure 4(a) shows the intrusion-extrusion curves obtained by MIP. All curves are S-shaped, with no horizontal steps. The average displacement pressure was 2.06 MPa, and the injected mercury pressure started to rise sharply around 20 MPa. The median saturation pressure of sample DT40 was 205.44 MPa, which was much higher than the 8.96 MPa and 11.12 MPa obtained for samples DT18 and DT44, respectively. The mercury intrusion saturations of the three samples also differed greatly. The highest was seen in sample DT18, with a value of 72.40%, and the lowest in sample DT40, with only 40.35%. The mercury extrusion efficiency of each sample was relatively low, with an average value of 26.8% (Table 1). The results showed that the average throat radius of the samples was 0.11 μm and that pore space was mainly contributed by pores in the 10-500 nm range, with pores of <10 nm and >0.5 μm making very little contribution (Figure 4(b)).
PSD by NMR T2 Spectrum
According to the principle of NMR, the signal strength of the hydrogen atoms in the fluid inside the pores of a porous medium is proportional to the size of the pores [37]. This means that the T2 value reflects the pore radius and that the amplitude of the T2 spectrum represents the pore content. Therefore, the NMR T2 spectrum of samples measured under saturated single-phase fluid conditions can reflect the distribution of total pores, including connected and disconnected pores. Figure 5 shows the NMR T2 spectrum of each sample in the saturated-water state. The T2 values mainly ranged from 0.1 ms to 200 ms, 0.1 ms to 100 ms, and 0.1 ms to 1000 ms. All sample T2 spectra showed a bimodal pattern, with the amplitude of the left peak higher than that of the right peak. The inflection point was near T2 = 10 ms, and the peak was near T2 = 1 ms. In general, the collected samples contained mainly small pores (r < 0.05 μm), based on the basic principle of positive correlation between the T2 value and pore size, whereas the PSDs of large pores (r > 0.05 μm) differed. In particular, sample DT44 contained more than 40% large pores. The micro-CT results further illustrate the pore systems of the samples (Figure 6(c)), and the 3D pore distribution shows a certain stratification with a zonal distribution (Figure 6(f)). Some researchers believe that the development of bedding can improve rock permeability to a certain extent, accompanied by the formation of larger pores or microfractures [13,41]. However, the 3D pore distributions of each sample show large numbers of isolated pores, which cannot provide an effective channel for oil and gas migration. Pore connectivity will be discussed further in the following sections.
Discussion
5.1. Pore Connectivity Analysis
5.1.1. Full PSD Calculated by MIP and NMR. The capillary pressure curves and the NMR T2 spectra were both directly related to the pore-structure characteristics of the same sample. The PSD from NMR was calculated from the T2 spectrum with reference to the literature, based on the theory discussed in Section 3.2.5 [40]. Figure 7 shows the conversion between the NMR T2 spectrum and the pore-throat radius. According to the principle of MIP, mercury preferentially enters the larger connected pore throats with increasing displacement pressure. Hence, volume information can be obtained only for pores below the maximum mercury injection pressure, whereas the NMR T2 spectrum reflects the total pore space. For conversion accuracy, pore-throat radii below the maximum mercury saturation were chosen for interpolation calculations with the T2 spectrum. The error-minimizing values of C and n were obtained by fitting r(i) ~ T2(i) according to the least-squares principle. By substituting the result into Equation (3), the PSD can be obtained from NMR. The conversion coefficients of the samples are presented in Table 2. Figure 8 shows the PSDs obtained by NMR and MIP. For the sake of comparison and analysis, a one-to-one correspondence between the pore sizes of each measurement was used. This showed that the maximum pore radii obtained by NMR and MIP were both less than 1 μm and that the PSD from NMR was generally greater than that from MIP.
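The calibration step can be sketched as follows: under the assumption of paired (T2, r) points interpolated from the MIP curve, Equation (3) becomes a straight line in log-log space, so C and n follow from a linear least-squares fit. The data arrays below are illustrative placeholders, not the study's measurements.

```python
# A sketch of calibrating Equation (3), r = C * T2**n: in log-log space the
# relation is a straight line, so C and n follow from linear least squares.
# The paired (T2, r) points below are illustrative placeholders, standing in
# for values interpolated from the MIP curve below maximum mercury saturation.
import numpy as np

t2 = np.array([0.5, 1.0, 5.0, 20.0, 80.0])        # ms, from the NMR spectrum
r_mip = np.array([0.01, 0.02, 0.08, 0.30, 1.00])  # um, matched from MIP

n, log_c = np.polyfit(np.log10(t2), np.log10(r_mip), 1)  # slope = n
c = 10 ** log_c
print(f"C = {c:.4f}, n = {n:.3f}")

r_nmr = c * t2 ** n  # apply the fit to the full T2 spectrum to get the PSD
```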
Comparison of PSD between NMR and MIP.
According to the results from NMR, the pore sizes were concentrated in two ranges: pores less than 100 nm (period I) and pores greater than 100 nm (period II). The nanoscale pores ranging from 10 nm to 100 nm in period I were relatively well developed and contributed about 75.6%-92.1% to the total pore volume of these samples, whereas the pores of period II contributed about 7.9%-24.4% to the total pore volume. This indicated that nanoscale pores provide extremely important reservoir space for tight sandstone reservoirs in the study area. PSD from MIP illustrated that the connected pores mainly ranged from 10 to 500 nm.
Note that the amplitude of the MIP curve is higher than that of the NMR curve around 100 nm for sample DT40. This phenomenon results from the different measurement principles. MIP is primarily sensitive to throats, not pores. The PSD obtained by MIP reflects the total volume of all throats and their connected pores under a certain pressure. However, there is no displacement process during the NMR experiment. The PSD obtained by NMR represents the total volume of all throats and pores with a certain radius, whether connected or not.
Quantitative Characterization of Pore Connectivity.
The PSD from MIP calculated from the volume of mercury at different pressures reflects the connected pores that mercury can invade. In contrast, the PSD obtained from NMR shows the total pore-volume distribution because disconnected pores are also filled with fluid. Therefore, the PSD from NMR is generally greater than that from MIP, and the difference between them is distinct in tight sandstones [13,14]. The authors think that this gap represents the disconnected pores in sandstones (Figure 8).
This paper proposes the pore connectivity percentage (PCP): the ratio of the cumulative pore volume obtained by MIP to the cumulative total pore volume obtained by NMR in a certain pore-throat radius range. On this basis, the PCPs of the total pore space and of pores of different radii (<50 nm, 50 nm-0.1 μm, and 0.1 μm-1 μm) for the three sandstone samples were calculated by MIP and NMR (Table 3). Here, 50 nm is the lowest throat-radius limit for movable fluid, whereas 0.1 μm is the dividing point between periods I and II from NMR. The results showed that the pore volume by NMR of the tight sandstone samples ranged from 0.0011 ml/g to 0.0219 ml/g, with an average value of 0.0101 ml/g (Table 3). The total PCPs of the tight sandstone samples were relatively low, with values of 25.11%, 36.30%, and 24.25% (Table 3). The PCP varied dramatically when the pore radius was greater than 50 nm (the cutoff point). The PCP increased at larger pore radii, varying from 36.60% to 92.00% with an average value of 63.68% in the pore-size ranges of 50 nm < r < 0.1 μm and r > 0.1 μm (Figure 9(b)). However, the connectivity of pores smaller than 50 nm was extremely poor, with values ranging from 5.02% to 18.18% (Figure 9(b)). Nevertheless, pores in this same range accounted for most of the pore volume of each sample, ranging from 0.0099 ml/g to 0.0219 ml/g, with an average value of 0.0173 ml/g (Figure 9(a)). In other words, tight sandstone has a great many disconnected pores. The reason for this may be that the tight sandstones in the study area are delta front subfacies and are mainly composed of well-sorted fine particles, which makes nanoscale and microscale pores dominant in the tight sandstone samples [29,30,34]. In addition, during the densification of sedimentary rock, finer particles were more likely to form dead pores, resulting in low connectivity.
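The PCP calculation itself is simple arithmetic, as the sketch below shows for one hypothetical sample; the per-bin volumes are placeholders chosen only to mimic the pattern reported above (poor connectivity below 50 nm, better above).

```python
# A sketch of the pore connectivity percentage (PCP): cumulative connected
# volume (MIP) over cumulative total volume (NMR) within each radius bin.
# The per-bin volumes (ml/g) are hypothetical, chosen only to mimic the
# reported pattern of poor connectivity below 50 nm and better above it.
import numpy as np

bins = ["r < 50 nm", "50 nm - 0.1 um", "0.1 um - 1 um"]
v_mip = np.array([0.0010, 0.0016, 0.0012])  # connected pore volume per bin
v_nmr = np.array([0.0150, 0.0030, 0.0015])  # total pore volume per bin

pcp = 100 * v_mip / v_nmr
for b, p in zip(bins, pcp):
    print(f"{b}: PCP = {p:.1f}%")
print(f"total PCP = {100 * v_mip.sum() / v_nmr.sum():.1f}%")
```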
T2 Cutoff Values and Movable Fluids
The T2 cutoff was the key parameter for calculating movable fluid saturation in the NMR experiment; the left side of the T2 cutoff represents bound fluid, and the right side represents movable fluid [42]. The T2 cutoff varied according to differences in the specific surface area of each sample [43]. Figure 10 shows the method for calculating the T2 cutoff value. First, the cumulative proportions of the T2 spectrum under saturated-water (CPS) and irreducible-water (CPI) conditions were obtained. Then, a horizontal line was drawn starting from the maximum CPI and intersecting the CPS at a point. The corresponding T2 value at that point was the T2 cutoff value. (Note to Table 1: Poro: porosity; Perm: permeability; Pt: displacement pressure of mercury injection; P50: pressure at median mercury saturation; Smax: maximum mercury intrusion saturation of the sample; Sr: mercury extrusion saturation of the sample; ra: average throat radius of the sample.)
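A numerical version of this graphical construction is sketched below, using synthetic cumulative curves; the key step is locating where the saturated-water curve (CPS) reaches the plateau value of the irreducible-water curve (CPI).

```python
# A numerical sketch of the graphical T2-cutoff construction: find where the
# cumulative saturated-water curve (CPS) reaches the plateau (maximum) of the
# cumulative irreducible-water curve (CPI). The spectra here are synthetic.
import numpy as np

t2 = np.logspace(-1, 3, 200)                     # relaxation times, ms
cps = np.linspace(0, 100, 200)                   # cumulative %, saturated
cpi = np.minimum(np.linspace(0, 140, 200), 60)   # plateaus at 60% bound fluid

idx = np.searchsorted(cps, cpi.max())            # CPS crosses max(CPI) here
t2_cutoff = t2[idx]
movable_sat = 100 - cpi.max()                    # pore volume right of cutoff
print(f"T2 cutoff ~ {t2_cutoff:.1f} ms, movable fluid saturation = {movable_sat:.0f}%")
```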
The calculation results showed that the T 2 cutoff values of the samples ranged from 1.70 ms to 3.18 ms, which were less than the empirical value (13 ms) for low-permeability reservoirs. The movable fluid saturation of the three tight sandstone samples was obtained from the T 2 cutoff values ( Table 4). The movable fluid saturation is mainly used to characterize the fluid movability of reservoirs, which represents the proportion of the volume of movable fluid to the total pore volume. The movable fluid saturations of the samples (DT18, DT40, and DT44) were 43.01%, 10.20%, and 40.09%, respectively. The value of sample DT40 was much lower than that of the other two samples, indicating that the fluid movability of sample DT40 was extremely low.
Effective Movable Fluid.
Movable fluids were calculated through the T2 cutoff values of the samples, and movable fluids occurred in pores greater than 50 nm. These pores included both connected and disconnected pores. The pore volume at scales greater than 50 nm can be divided into three conditions. The first includes pores connected by an adjacent throat (>50 nm), from which fluids can break through during centrifugation (Figure 11(a)). The second condition includes pores connected by an adjacent throat (<50 nm), where fluids cannot break through the throat during centrifugation (Figure 11(b)). The third condition includes isolated pores without a connecting throat, from which fluids cannot be centrifuged out. The fluid in isolated pores was retained during rock deposition and diagenesis and prevented pore collapse at high pressure [14]. Therefore, it is not accurate to use the initial movable fluid saturation calculated from the NMR T2 cutoff value to evaluate fluid movability. The effect of disconnected pores on movable fluid saturation should be excluded.
The pores that mercury could invade were all connected pores under a certain displacement pressure in the MIP experiment [13]. When the displacement pressure corresponding to the cutoff throat radius (Fc) was higher than the capillary pressure (Fi) (Figure 11(c)), the pore volume that mercury could invade was identical to the first condition discussed above (Figure 11(a)). On the contrary, the rest of the connected pore volume conformed to the second condition (Figure 11(d)) when Fc was less than Fi (Figure 11(b)).
Therefore, a new parameter, effective movable fluid saturation (S_e), was proposed on the basis of the physical concepts of movable fluid saturation and pore connectivity. It represents the ratio of the pore-throat volume greater than the cutoff throat radius to the total pore volume in a unit volume, and is equal to the initial movable fluid saturation (S_i) times the pore connectivity percentage (β) of pores greater than the cutoff pore-throat radius:

S_e = S_i × β, (4)

where S_e is the effective movable fluid saturation, %; S_i is the initial movable fluid saturation, %; and β is the connectivity percentage of pores greater than the cutoff pore-throat radius, which is a constant for a given sample. The effective movable fluid saturations of the three tight sandstone samples were calculated using Equation (4). S_e ranged from 8.78% to 24.63%, with an average of 16.94% (Figure 12). These results show that the initial movable fluid saturation (S_i) decreased by 14.16% on average after eliminating disconnected pores. It can be concluded that the low recovery of tight sandstone reservoirs is due to the heterogeneity and weak connectivity of tight sandstones. It is essential to exclude disconnected pores when calculating the recovery of a reservoir. Effective movable fluid saturation is a comprehensive reflection of pore structure and fluid distribution characteristics, which is a positive step toward the exploitation and productivity evaluation of tight sandstone reservoirs.
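A worked example of Equation (4) is given below, using the initial movable fluid saturations reported for the three samples; the β values are back-calculated from the reported S_e range and are therefore approximations rather than published values.

```python
# A worked example of Equation (4), S_e = S_i * beta. S_i values are the
# movable fluid saturations reported for the three samples; the beta values
# (connectivity fraction of pores above the 50 nm cutoff) are back-calculated
# from the reported S_e range, so they are approximations, not published data.
samples = {
    # name: (S_i in %, assumed beta)
    "DT18": (43.01, 0.573),
    "DT40": (10.20, 0.861),
    "DT44": (40.09, 0.434),
}

for name, (s_i, beta) in samples.items():
    s_e = s_i * beta
    print(f"{name}: S_i = {s_i:.2f}% -> S_e = {s_e:.2f}%")  # mean S_e ~ 16.94%
```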
Conclusions
Multiple techniques were used to characterize the pore structure of tight sandstone. Pore connectivity and movable fluid distribution were characterized quantitatively based on MIP and NMR. The main findings of this work can be summarized as follows: (1) SEM observation showed that the main pore types in the tight sandstones are interparticle pores between different minerals and dissolution pores in feldspar, with most pores smaller than 1 μm; the 3D pore distribution from the micro-CT results showed some stratification with a zonal distribution. (2) The PSD from NMR showed that pore sizes were concentrated in two ranges: less than 100 nm (period I) and greater than 100 nm (period II). The PSD from MIP showed that the connected pores mainly ranged from 10 to 500 nm. Comparison of the PSDs from NMR and MIP indicated that the PSD from NMR is generally greater than that from MIP, because MIP characterizes only the volume of connected pores whereas NMR shows the volume of all pores. (3) The tight sandstones have weak connectivity, with a pore connectivity percentage of 28.6% on average. Movable fluids are mainly distributed in pores over 50 nm, and these pores have higher connectivity percentages, ranging from 36.6% to 92.0%, although they have a smaller pore volume. (4) A new parameter, effective movable fluid saturation (S_e), was proposed based on the initial movable fluid saturation (S_i) from NMR and the pore connectivity from MIP and NMR. The effective movable fluid saturation was calculated for three tight sandstone samples, and it was found that the movable fluid saturation decreased by 14.16% on average when disconnected pores were excluded.
Data Availability
The data used to support the findings of this study are available from the corresponding authors upon request.

Figure 10: Illustration of the method to calculate the T2 cutoff value (T2c) using sample DT18. IPS: incremental proportion at saturated-water condition; IPI: incremental proportion at irreducible-water condition; CPS: cumulative proportion at saturated-water condition; CPI: cumulative proportion at irreducible-water condition.

Figure 11: Distribution of movable fluids in pores and mercury injection into connected pores. Valid pore space (first condition) for movable fluid (a); void pore space (second condition) for bound water (b); pores with a throat greater than 50 nm (first condition) that mercury can invade (c); pores with a throat less than 50 nm (second condition) that mercury cannot invade (d). Fi: capillary pressure; Fc: displacement pressure corresponding to the cutoff throat radius.
Design and Synthesis of a Fluorescent Probe Based on Copper Complex for Selective Detection of Hydrogen Sulfide
A novel fluorescence probe, NA-LCX, was rationally designed and synthesized for the sequential recognition of Cu2+ and H2S through the combination of hydroxyl-naphthalene and diformylphenol groups. The response properties of NA-LCX toward Cu2+ ions and H2S in an "on-off-on" manner were investigated by fluorescence emission spectra. A highly selective and sensitive response of the complex NA-LCX-Cu2+ toward H2S over other competing amino acids was observed, with a limit of detection of 2.79 μM. The stoichiometry of NA-LCX toward Cu2+ ions was determined to be 1:1 by UV-Vis absorption spectroscopy, and the coordination configuration was examined by density functional theory (DFT) calculations. Moreover, probe NA-LCX was applied successfully for the recognition of Cu2+ ions and H2S in living cells.
Introduction
Hydrogen sulfide (H2S), the simplest biothiol compound, is not only a gas pollutant with a rotten-egg smell but also the third gasotransmitter and cellular signaling molecule after CO and NO [1,2]. Endogenous H2S can regulate vascular smooth muscle tension and cardiac contractile function, exert anti-inflammatory and antioxidative-stress effects, modulate neurotransmitter transmission, and inhibit insulin signaling, and it plays an important role in the physiological and pathological processes of the cardiovascular, nervous, immune, and digestive systems [3-7]. The concentration of H2S in normal metabolism is maintained in dynamic equilibrium, while abnormal changes in the H2S level can induce serious health problems, such as heart diseases [8,9], chronic obstructive pulmonary disease [10,11], cirrhosis [12,13], and Alzheimer's disease [14,15]. Hence, it is crucial to develop a highly sensitive and selective method for the detection of hydrogen sulfide in living systems.
Many conventional methods for H2S detection have been developed, including colorimetric methods [16,17], electrochemical analysis [18,19], liquid chromatography-mass spectrometry [20,21], and fluorescence analysis [22-24]. Among them, fluorescence analysis is the most desirable owing to its simple operation, high sensitivity, wide dynamic range, high fluorescence quantum yield, good biocompatibility, noninvasiveness, and ability to perform in situ real-time detection in living systems [25]. In recent years, many fluorescent probes for H2S detection have been reported based on different types of strategies, such as reduction reactions [26,27], nucleophilic addition reactions [28,29], dinitrophenyl ether/sulfonyl ester cleavage [30,31], and metal sulfide precipitation reactions [32-40]. However, there are some limitations to these reaction methods as well as to the products obtained via these reactions. For example, the reactions can be insensitive, complex, and time-consuming; moreover, fluorescent probes prepared via these reactions are sometimes not biocompatible and sometimes unstable in the presence of biological thiols (glutathione, cysteine, etc.) [31]. The strategy of using a metal displacement approach is in high demand for its fast response and high sensitivity and selectivity. Sulfide is known to react with copper ions to form very stable CuS with a very low solubility product constant, Ksp = 6.3 × 10⁻³⁶ (for comparison, copper cyanide has Ksp = 3.2 × 10⁻²⁰). Thus, utilizing the high affinity of Cu2+ for sulfide to design a specific Cu2+ sensor that sequentially identifies H2S has received considerable attention, because this approach can effectively eliminate the interference of other analytes in the system.
Naphthalene derivatives with an electron donor-π-acceptor (D-π-A) structure have been widely used because of their good optical properties, such as high fluorescence quantum yield, good biocompatibility, and photostability. Herein, we synthesized a new fluorescent probe NA-LCX based on hydroxyl-naphthalene and diformylphenol groups, which have excellent coordination ability toward metal ions. The probe showed an obvious "on-off" fluorescence quenching response toward Cu2+, and the NA-LCX-Cu2+ complex showed an "off-on" fluorescence enhancement response toward H2S in DMSO/HEPES (3:2 v/v, pH = 7.4). The photophysical capabilities of probe NA-LCX toward Cu2+ and of NA-LCX-Cu2+ toward H2S were studied in detail by fluorescence spectroscopy, absorption spectroscopy, and fluorescence imaging in vivo.
General Method for Cell Imaging
Human liver cancer HepG-2 cells were cultured in a 12-well plate, and when the cell confluence exceeded 80%, ligand NA-LCX and probe NA-LCX-Cu2+ solutions were added. The mixture was then incubated for 3 hours in a CO2 incubator and washed three times with precooled PBS, followed by the addition of 1 mL PBS. The resulting cells were observed under a Leica DMI8 inverted fluorescence microscope.
The binding stoichiometry of ligand NA-LCX to Cu2+ was determined as 1:1 based on the continuous changes in absorbance at 438 nm (Figure S4).
Fluorescence Spectroscopy
Recognition of Probe NA-LCX toward Cu2+. The sensitivity of probe NA-LCX for Cu2+ was investigated by fluorescence titration. Probe NA-LCX showed a strong fluorescence emission peak at 575 nm upon excitation at 312 nm. As shown in Figure 2, the fluorescence intensity of the ligand NA-LCX gradually decreased upon the addition of Cu2+ ions and became constant after about 2 equiv. of Cu2+ ions had been added. The quenching rate was extremely high, indicating that probe NA-LCX was highly sensitive to Cu2+, which could be due to photoinduced electron transfer to Cu2+ ions and/or the d-d electron paramagnetic quenching effect [41-43]. Moreover, the fluorescence emission intensity showed a good linear relationship (R² = 0.99) with the concentration of Cu2+ ions in the range of 1-20 μM. The quenching constant of probe NA-LCX with Cu2+ ions was determined from the titration plots.
The corrected Stern-Volmer fitting indicated a value of 2.6 × 10⁴ L·mol⁻¹ (Figure S5). The fluorescence response of probe NA-LCX to other metal ions in DMSO:HEPES (3:2, v/v) is shown in Figure S6. It was found that many other metal ions, such as Co2+, Fe3+, Fe2+, Ni2+, Zn2+,
Cd2+, and Mn2+, also exhibited a similar fluorescence quenching response.
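For readers who wish to reproduce this kind of analysis, the sketch below fits the Stern-Volmer relation F0/F = 1 + Ksv[Q] by linear least squares; the intensities are synthetic values generated to mimic the reported quenching constant, not the measured titration data.

```python
# A sketch of the Stern-Volmer analysis, F0/F = 1 + Ksv * [Cu2+], fitted by
# linear least squares. The intensities are synthetic values generated to
# mimic a quenching constant of ~2.6e4 L/mol; they are not the measured data.
import numpy as np

cu = np.linspace(1e-6, 20e-6, 8)   # Cu2+ concentration, mol/L (1-20 uM)
f0 = 1000.0                        # emission intensity without quencher
f = f0 / (1 + 2.6e4 * cu)          # synthetic quenched intensities

ksv, intercept = np.polyfit(cu, f0 / f - 1, 1)
print(f"Ksv ~ {ksv:.2e} L/mol")    # recovers ~2.6e4
```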
Fluorescence Spectra Response of Complex NA-LCX-Cu2+ toward H2S
The complex formed by probe NA-LCX and Cu2+ was used as a new sensor, NA-LCX-Cu2+, for the sequential recognition of H2S. Upon the addition of Na2S, as shown in Figure 3, the fluorescence intensity gradually increased and then remained unchanged beyond 10 equiv. The probe NA-LCX-Cu2+ released Cu2+ ions due to the strong reaction between sulfide and copper ions, which restored the original fluorescence of the probe. Furthermore, the detection limit was 2.79 μM according to the formula LOD = 3σ/S (Figure S7). The responses of ligand NA-LCX to other metal ions such as Co2+, Cd2+, Zn2+, Ni2+, Fe2+, Mn2+, and Fe3+, followed by the subsequent addition of 10 equiv. of Na2S, are shown in Figure S8. It was found that the complex NA-LCX-Cu2+ possesses the highest recovery/quenching fluorescence intensity ratio; hence, the probe NA-LCX-Cu2+ was chosen for H2S detection.
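The detection-limit calculation can be sketched as follows, where σ is the standard deviation of repeated blank measurements and S is the slope of the intensity-concentration calibration; the numbers are illustrative placeholders tuned to land near the reported 2.79 μM.

```python
# A sketch of the detection-limit formula LOD = 3*sigma/S, where sigma is the
# standard deviation of repeated blank readings and S the calibration slope.
# All numbers are illustrative placeholders tuned to land near 2.79 uM.
import numpy as np

blank = np.array([100.2, 99.8, 100.5, 99.6, 100.1, 100.3,
                  99.9, 100.0, 100.4, 99.7, 100.2])  # 11 blank readings
sigma = blank.std(ddof=1)

slope = 0.31   # intensity change per uM, from a calibration fit
lod = 3 * sigma / slope
print(f"LOD ~ {lod:.2f} uM")
```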
To further explore whether probe NA-LCX-Cu2+ could be used as a highly selective H2S sensor, the fluorescence response of probe NA-LCX-Cu2+ (20 μM) to different amino acids was tested. As shown in Figure 4, only the addition of H2S instantly caused an obvious fluorescence enhancement. The fluorescence intensity of probe NA-LCX-Cu2+ remained unchanged in the presence of 10 equiv. of different mercapto amino acids, such as glutathione, cysteine, N-acetyl-L-cysteine, and homocysteine, and of non-mercapto amino acids. Other reactive sulfur species (S2O3²⁻, SO4²⁻, and HSO3⁻) also did not cause obvious fluorescence changes. Competitive experiments further showed significant fluorescence enhancement without interference from other amino acids and reactive sulfur species, which further indicated the good selectivity of the probe NA-LCX-Cu2+ for H2S detection.
DFT Calculation.
To gain further insight into the nature of the coordination configuration and optical response of sensor NA-LCX toward Cu2+, the different coordination structures of NA-LCX-Cu2+ were examined by density functional theory calculation. All calculations were performed with the Gaussian 09 program. The geometries were optimized at the B3LYP/6-31G(d)/SDD level, and the interaction energies were calculated from the single-point energies obtained at the B3LYP/6-31+G(d)/SDD level. As shown in Figure S9, the interaction energy of structure C was clearly higher than those of structures A and B, which verified the experimental results and suggested the complexation mode of probe NA-LCX with Cu2+.
Effect of pH on the Performance of Probe NA-LCX and Complex NA-LCX-Cu2+
To investigate the effect of pH, the fluorescence intensities of probe NA-LCX, of complex NA-LCX-Cu2+, and of complex NA-LCX-Cu2+ in the presence of S2− were investigated over a wide range of pH values. No significant changes in fluorescence intensity were found at lower pH (pH ≤ 6) (Figure S10). However, significant fluorescence changes in ligand NA-LCX and in NA-LCX-Cu2+ with Na2S were observed at pH > 6, indicating the potential of probe NA-LCX-Cu2+ to detect H2S in physiological environments.
3.6. Cell Imaging Experiments. Inspired by the excellent selectivity at physiological pH levels, the cell imaging application of sensor NA-LCX for the detection of Cu2+ and H2S was further investigated. Prior to the cell imaging experiment, the MTT cytotoxicity assay for probe NA-LCX-Cu2+ was performed in human liver cancer cells (HepG-2), as shown in Figure S11, and no significant cytotoxicity was found in the range of 0-10 μM, even after incubating for 24 h. As shown in Figure 5, significant intracellular green fluorescence was observed in HepG-2 cells in the presence of probe NA-LCX when excited with blue light (Figure 5(a)), indicating that the sensor NA-LCX was cell permeable. When the complex NA-LCX-Cu2+ was added to the wells, the green fluorescence in HepG-2 cells was quenched to a large degree, as expected (Figure 5(b)). Upon the subsequent addition of 2 and 5 equiv. of Na2S solution, obvious fluorescence recovery was observed (Figures 5(c) and 5(d)). The fluorescence imaging results suggest the potential of probe NA-LCX-Cu2+ for in vivo detection of H2S.
Conclusion
In this study, a novel two-armed naphthalene derivative probe, NA-LCX, was synthesized, and its spectral performance for the sequential recognition of Cu2+ and H2S was studied. The probe NA-LCX showed an obvious "on-off-on" fluorescence response toward Cu2+ and H2S. The probe NA-LCX showed a 1:1 binding stoichiometry to Cu2+ with a complexation constant of 2.6 × 10⁴ M⁻¹. Fluorescence studies indicated the sensitivity and selectivity of the probe NA-LCX-Cu2+ for H2S detection without interference from other amino acids. The detection limit for H2S was calculated to be 2.79 μM. The cell imaging results further showed the potential of the probe for Cu2+ and H2S detection in living cells.
Data Availability
The data used to support the findings of this study are included within the article and supplementary information file(s).
Policy and Management of Medical Devices for the Public Health Care Sector in Benin
Health technology, according to WHO, is the application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures and systems developed to solve a health problem and improve quality of lives [4]. When used in this paper, the term healthcare technology means the different types of devices or equipment used in health facilities. It encompasses: medical equipment for clinical use; hospital furniture; vehicles; service supplies; plant; communication equipment; firefighting equipment; fixtures built into the building; office equipment; office furniture; training equipment; walking aids; and workshop equipment.
Introduction
Health technology, according to WHO, is the application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures and systems developed to solve a health problem and improve quality of lives [4]. When used in this paper, the term healthcare technology means the different types of devices or equipment used in health facilities. It encompasses: medical equipment for clinical use; hospital furniture; vehicles; service supplies; plant; communication equipment; firefighting equipment; fixtures built into the building; office equipment; office furniture; training equipment; walking aids; and workshop equipment.
Healthcare technologies offer many benefits and have greatly enhanced the ability of health professionals to prevent, diagnose and treat diseases [11]. They are one of the essential elements for the delivery of health services. The use of technology in healthcare systems in developing and transition countries faces a great number of difficulties. Since about 95% of the healthcare technology used in these countries is imported [30], mismatches occur because the technology development process has not usually considered the needs and realities of the target environments. These mismatches in the technology transfer process to countries with financial and technical constraints are often of great significance. Thus, in Benin, medical devices and equipment represent a significant proportion of national healthcare expenditure. Each year, more than US$10,600,000, about 20% of the national health budget [20], is spent on the procurement of medical devices and equipment for healthcare facilities. Despite this great amount of money spent each year on an ever-increasing array of medical devices and equipment, not enough attention is paid to equipment use and maintenance. Management of medical devices is not yet recognised as an integral part of public health policy. Planning, follow-up and maintenance of the equipment are inefficient and ineffective [12-21]. This study, supported by the Netherlands Organisation for International Cooperation in Higher Education (NUFFIC) from 2007, was conducted in the Benin Ministry of Health (MoH) and at the University of Abomey-Calavi, in collaboration with the Athena Institute, Vrije Universiteit Amsterdam, from 2006 to 2008. It aimed to identify factors appearing between 1998 and 2008 that adversely affected the healthcare technology management cycle, i.e., the planning, budgeting, selection, procurement, distribution, installation, training, operation, maintenance and disposal of medical devices. The results allow identification of the key factors behind the mismanagement of medical devices and the critical state of the maintenance system in Benin, and the formulation of recommendations to improve the system. The first part of this paper gives background information on the country, its health system and an overview of the state of its healthcare technology management. The second part describes the methods and materials used, and the third part presents the results, followed by discussion, comments and recommendations in the final section.
Background information 2.1 Benin: The country
Located on the West coast of Africa, the Republic of Benin is small (114,763 square kilometers), with a coastline on the Gulf of Guinea nestled between Nigeria, Niger, Burkina Faso, and Togo (Figure 1). The population, estimated at 7,839,914 in 2006, includes a multitude of ethnic and linguistic groups. Benin remains one of the world's least developed countries and has been ranked 163 of 177 on the United Nations Human Development Index (2005). Demographic and health indicators are given below (Table 1).
Healthcare technology management and maintenance
Healthcare technology management and maintenance remains one of the main challenges of developing countries' healthcare systems in general, and of Benin's in particular. Although considerable financial resources are used for the procurement of devices, not enough attention is paid to their subsequent upkeep. While some of the equipment was donated, a significant portion was purchased with loans provided by bilateral and multilateral agencies that will have to be paid back with great sacrifice 26. One of the root causes of equipment idleness is the lack of effective management. It is important to point out that, despite several initiatives undertaken by the Ministry of Health to improve the healthcare technology management cycle, no significant changes have been noticed 13-17.
Many facilities, especially Zone Hospitals, continue to lack the basic technologies they need to provide quality care to patients, because equipment is unavailable, inoperative, misused or inappropriate. The situation is most severe in the Communal and Arrondissement health facilities far from the first-referral hospitals. This has far-reaching implications for the prevention and treatment of disease and disability and often leads to a waste of scarce resources.
Materials and methods
The study was carried out in the MoH, 321 healthcare facilities of the southern part of the country, the Ministry of Economy and Finance, some representatives of external support agencies in Benin and ten accredited suppliers of medical device companies. It consisted of surveys undertaken in 2006 and 2007 and of desk research (content analysis) based on procurement data collected from 1998 to 2008. It aimed to determine the factors that adversely affect the healthcare technology management cycle (planning, budgeting, selection, procurement, distribution, installation, training, operation, maintenance and disposal of medical devices) in Benin.
Desk research and short survey
This study focused on the procurement management of medical devices in the Republic of Benin and aimed to identify the main weak points in the procurement management system for medical devices from 1998 to 2008. It was based on data collected from documents (such as national procurement magazines and health equipment public procurement and bidding contracts from the Ministries of Health and of Economy and Finance), and on interviews and informal discussions with ten local accredited suppliers of medical devices in Benin.
A comparative study was done of the selling prices of ten medical devices procured by the Benin MoH following international tenders. The steps were: i) ten medical devices were selected from the available essential medical device list; ii) their mean reference selling prices (based on their specifications) were determined from ten local accredited medical device suppliers, based on the prices at which the devices were sold to private health facilities; iii) the mean prices at which the same devices were sold to the Ministry of Health following open-tender public procurement were identified for three periods: 1998 to 1999, 2001 to 2004, and 2005 to 2008, when the procurement evaluation process had been changed and improved; iv) the mean prices at which the devices were sold to the MoH were compared to the ad hoc mean reference selling prices provided by the private healthcare facilities and/or taken from the local suppliers' price lists for private facilities.
Surveys
Two surveys were carried out in 321 healthcare facilities of the six southern departments (provinces). The first, entitled "Management and maintenance of healthcare technology", was conducted in 2006 in 11 health centers and hospitals. It aimed to identify the weaknesses in the healthcare technology management and maintenance system in order to make recommendations for its improvement. Data were collected through observational visits, interviews and questionnaires. The second, entitled "Healthcare technology assessment in the southern Benin public healthcare facilities", was carried out in 310 health centers and hospitals in 2006 and 2007. Its first objective was to determine the extent of the disparity between the medical devices/equipment that were planned and what was actually available in each selected health facility, in order to facilitate the procurement of essential medical devices for poorly equipped health facilities. Its second objective was to identify weaknesses in the whole Benin healthcare technology management cycle. Data were collected through observational visits, reading of reports, interviews and questionnaires (inventory sheets). The steps were: i) an equipment inventory was done at all the public healthcare facilities in southern Benin; ii) the healthcare equipment in these facilities was compared to the MoH Essential Medical Device List available for each health facility level; iii) the needs assessment of each healthcare facility was done using a pilot asset-assessment software tool, as sketched below. Finally, interviews were held with a range of stakeholders, including policy makers of the MoH, healthcare facility managers, equipment users (physicians, nurses, midwives, lab technicians, X-ray machine technicians, etc.) and maintenance technicians.
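To make the needs-assessment step concrete, the following is a minimal sketch of the gap computation that such an asset-assessment tool performs. The device names, quantities and the Python implementation are illustrative assumptions, not the actual software used in the survey.

```python
# Hypothetical sketch of the needs-assessment step: compare a facility's
# inventory against the essential medical device list for its level.
# Device names and quantities are invented for illustration.

ESSENTIAL_LIST = {              # required quantities for a given facility level
    "autoclave": 2,
    "X-ray apparatus": 1,
    "blood bank refrigerator": 1,
    "electric suction unit": 3,
}

INVENTORY = {                   # functional devices found during the visit
    "autoclave": 1,
    "electric suction unit": 3,
}

def needs_assessment(required: dict, found: dict) -> dict:
    """Return the procurement gap (missing quantity) per essential device."""
    return {device: qty - found.get(device, 0)
            for device, qty in required.items()
            if qty - found.get(device, 0) > 0}

print(needs_assessment(ESSENTIAL_LIST, INVENTORY))
# {'autoclave': 1, 'X-ray apparatus': 1, 'blood bank refrigerator': 1}
```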
Results
The results of the study are summarised in Tables 2 to 6 and Figure 2. Tables 2, 3 and 4 show the mean ad hoc reference selling prices of selected medical devices in comparison with the prices at which the same devices were sold to the Ministry of Health from 1998 to 1999, 2001 to 2004 and 2005 to 2008. Table 5 and Figure 2 show the trends of the [MoH device acquisition prices/ad hoc device reference selling prices] ratio during the three periods. The ten devices studied were: 1) blood pressure device; 2) spectrophotometer; 3) electric suction unit; 4) electrocardiograph; 5) X-ray apparatus; 6) hot air sterilizer; 7) autoclave; 8) ventilator; 9) anaesthesia system; and 10) blood bank refrigerator.
The letters a to j represent the "ad hoc reference prices" (the private healthcare facilities' device acquisition prices) of each device in local currency, and the letters A to J represent the MoH acquisition prices for the same devices through public procurement. Table 6 presents the findings of the two surveys and shows the factors affecting the healthcare technology management cycle in 321 health centers and hospitals in southern Benin. The factors were grouped (but not ranked) into six categories: maintenance and repair; distribution; use; technology assessment; policy, planning and budgeting; and procurement.
The key factors identified so far include high acquisition costs; the government's lack of insight into medical device market prices; the lack of capacity to monitor whether suppliers' prices are reasonable; the lack of insight into the cost/performance ratio of various brands of medical devices; an unequal distribution of devices among health care facilities; and an unbalanced allocation of resources to the acquisition of devices compared to infrastructure and maintenance. Other key factors identified included insufficient human resources with the appropriate capacity to manage equipment, the unavailability of spare parts, and the lack of an annual maintenance budget. In a nutshell, there is a lack of policy and management tools, such as an up-to-date essential medical device list and a reference price list for essential medical devices, to support the implementation of the existing policy. The latter would allow health sector authorities to monitor financial diversions occurring in public procurement contract awards, while the former would serve as a reference tool to assess the availability of fully operational devices at the different hierarchical levels of healthcare facilities.
Discussion and recommendations
Goods acquisition, especially of healthcare technology, represents an important part of any health budget and needs to be looked at with close attention. The results shown in Tables 2, 3, 4 and 5 and Figure 2 clearly show that, independently of the procurement year, the device acquisition prices paid by the MoH remained higher than the prices paid by private healthcare facilities for the same devices. Although Benin's first Goods and Services Procurement Code was implemented during the years 2001 to 2004, and was amended in 2004 and implemented from 2005 to 2008, no significant improvement was found regarding the higher prices of medical equipment paid by the MoH. One can notice that the MoH pays too much for medical device acquisition through public procurement, and this was at its worst in 2001-2004. When analysing the available data of this period year by year, it was found that acquisition prices were most critical in 2003 and 2004. It is important to understand in depth the real reasons that underlie this phenomenon. Many hypotheses could be advanced to explain this fact, but it would be more informative to increase the sample size of the study (>10 medical devices) for greater reliability. The internal and external validity of the findings could also be improved if a quasi-experimental study were designed. Thus, wider surveys with more representative sample sizes and stronger methods, such as controlled interrupted time series based on segmented regression analysis, will be reported in subsequent papers to confirm or refute the present findings and to understand the true reasons for the ineffective management of healthcare equipment in Benin.
The Ministry of Health still needs a national public procurement policy and a management tool, such as a reference price list of the most widely used devices, to overcome and master increasing and unreasonable medical device prices. It is normal for the device acquisition costs paid by the government to be somewhat higher than the reference prices because of the financial and administrative fees involved when suppliers submit tenders. It is acceptable and reasonable for the average device selling prices to be between 1.1 and 1.2 times higher than the ad hoc reference prices. But when the device selling prices offered by a supplier are more than that, they could be considered as outbidding. It is thus urgent for the Benin government, especially the MoH, to gain insight into this issue and to encourage the development of policies and laws establishing a reference price list for medical devices. The availability of reference prices for the essential medical devices will allow the health sector authorities to monitor the financial diversion that usually occurs during procurement management activities. It is expected that once this document becomes available, the MoH could buy equipment at value-based prices each year and save a great deal of money that could be used to improve the health of the Benin population through other investments.
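The 1.1-1.2 rule above can be expressed as a simple screening computation. The sketch below is hypothetical: the device names and prices are invented, not the study's data; it only illustrates how a procurement auditor could flag offers exceeding the acceptable ratio.

```python
# Hypothetical sketch: flag possible outbidding in device procurement.
# Device names and prices are illustrative, not the study's actual data.

REFERENCE_PRICES = {            # ad hoc reference selling prices (local currency)
    "blood pressure device": 45_000,
    "electric suction unit": 850_000,
}

MOH_PRICES = {                  # prices paid by the MoH through public tenders
    "blood pressure device": 78_000,
    "electric suction unit": 1_100_000,
}

ACCEPTABLE_MAX_RATIO = 1.2      # paper's rule of thumb: 1.1-1.2 x reference

def audit(reference: dict, paid: dict) -> None:
    """Print the acquisition/reference ratio and flag suspected outbidding."""
    for device, ref_price in reference.items():
        ratio = paid[device] / ref_price
        verdict = "OK" if ratio <= ACCEPTABLE_MAX_RATIO else "possible outbidding"
        print(f"{device}: ratio = {ratio:.2f} -> {verdict}")

audit(REFERENCE_PRICES, MOH_PRICES)
```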
The results of the two surveys, i) "Management and maintenance of healthcare technology" and ii) "Healthcare technology assessment in the southern Benin public healthcare facilities", revealed many weaknesses in the Benin health system across its healthcare technology management cycle. The results show failures in each link of the cycle (planning, budgeting, selection, procurement, distribution, installation, training, operation, maintenance and disposal of medical devices), resulting in low overall community health effectiveness. It is necessary to point out that in the findings of the two surveys, the factors affecting healthcare technology management were only grouped (but not ranked) into six categories. The ranking of the factor categories (I, II, III, IV, V and VI) in order to set priority actions will be discussed in the next paper.
As recommendations, twenty actions need to be taken by the government to overcome this situation and achieve its goal of improving the quality of, and access to, health services, taking into account the poor and indigent. It is thus urgent to develop and implement a sound national medical device policy, which could include the following: i) an improved national list of essential medical devices and equipment based on evidence from the studies; ii) a national policy and plan for medical devices; iii) a functional national regulatory authority for medical devices empowered by legislation; iv) a document on the assessment of medical device needs; v) national regulations based on ISO standards or WHO specifications; vi) a national procurement procedure; vii) a national policy for the acceptance of donations; viii) a negotiated price list for each item of equipment; ix) a national guide for the management and use of medical devices; x) an inventory of suppliers and of medical devices in use; xi) the cost of all the equipment at each level of Benin health facility relative to the cost of infrastructure; xii) the service life span of each medical device or item of equipment in use in Benin healthcare facilities or hospitals, in order to plan replacement at a systematic time; xiii) the list of medical devices that carry the highest risk; xiv) the spare parts that have the highest failure rate, in order to plan their procurement; xv) the list of critical equipment and instruments affected by electrical power outages and power anomalies in Benin hospitals; xvi) good software-based planning and management tools for the management and maintenance of medical devices; xvii) a post-market surveillance/vigilance system for alerts, notifications and recalls; xviii) a national budget for devices, based on costing, budgeting and financing; xix) standard operating procedures and best practices covering every stage in the life span of medical devices; xx) the creation of an independent Directorate of Healthcare Technology Management and Maintenance within the Ministry of Health.
The following Healthcare Technology Management Cycle (Figure 3) could be used as a framework for health equipment management in developing countries, providing a guideline for the necessary regulations and systems.
Fig. 3. The Healthcare Technology Management Cycle 11: an example of a framework for health equipment management in developing countries.
Conclusion
Management and maintenance of healthcare technology in developing countries, especially in the poor sub-Saharan African countries, remain a challenge. From the planning to the disposal of devices, many actions need to be undertaken to improve the Healthcare Technology Management Cycle. Achievements in the public healthcare sector depend on the full involvement of each stakeholder, but the main responsibility still lies with governments. They need the political willingness and commitment to recognise the management and maintenance of devices as an integral part of public health policy in order to improve the quality of and access to healthcare in each country.
Fig. 1. Map of Benin (Source: USAID, 2006).
Table 1. Selected demographic and health indicators of Benin.
Findings of the desk research and short survey
Table 2. Comparison of the mean ad hoc reference prices of medical devices with the Ministry of Health acquisition prices for the same devices, 1998 to 1999.
Table 3. Comparison of the mean ad hoc reference prices of medical devices with the Ministry of Health acquisition prices for the same devices, 2001 to 2004.
Table 4. Comparison of the mean ad hoc reference prices of medical devices with the Ministry of Health acquisition prices for the same devices, 2005 to 2008.
Table 5. Trend of the [MoH device acquisition price/ad hoc device reference price] ratio during the three periods: 1998-1999, 2001-2004 and 2005-2008.
Towards an Understanding of Large-Scale Biodiversity Patterns on Land and in the Sea
Simple Summary: Among such questions as the origin of the universe or the biological bases of consciousness, understanding the origin and arrangement of planetary biodiversity is one of the 25 most important scientific enigmas according to the American journal Science (2005). This review presents a recent theory called the 'macroecological theory on the arrangement of life' (METAL). METAL proposes that biodiversity is strongly influenced by the climate and the environment in a deterministic manner. This influence mainly occurs through the interactions between the environment and the ecological niche of species sensu Hutchinson (i.e., the range of a species' tolerance when several factors are considered simultaneously). The use of METAL in the context of global change biology has been presented elsewhere. In this review, I explain how the niche-environment interaction generates a mathematical constraint on the arrangement of biodiversity, a constraint called the great chessboard of life. The theory explains (i) why biodiversity is generally higher toward low-latitude regions, (ii) why biodiversity peaks at the equator in the terrestrial realm and why it peaks at mid-latitudes in the oceans, and finally (iii) why there are more terrestrial than marine species, despite the fact that life first appeared in the marine environment.
Abstract: This review presents a recent theory named 'macroecological theory on the arrangement of life' (METAL). This theory is based on the concept of the ecological niche and shows that the niche-environment (including climate) interaction is fundamental to explaining many phenomena observed in nature, from the individual to the community level (e.g., phenology, biogeographical shifts, and community arrangement and reorganisation, gradual or abrupt). The application of the theory in climate change biology, as well as in individual and species ecology, has been presented elsewhere. In this review, I show how METAL explains why there are more species at low than high latitudes, why the peak of biodiversity is located at mid-latitudes in the oceanic domain and at the equator in the terrestrial domain, and finally why there are more terrestrial than marine species, despite the fact that biodiversity emerged in the oceans. I postulate that the arrangement of planetary biodiversity is mathematically constrained, a constraint we previously called 'the great chessboard of life', which determines the maximum number of species that may colonise a given region or domain. This theory also makes it possible to reconstruct past biodiversity and understand how biodiversity could be reorganised in the context of anthropogenic climate change.
Introduction
The discipline of biology covers all living systems, from the simplest organic molecules (molecular biology) to large biomes (biogeography), crossing many organisational levels, such as cells, tissues, organs, species, biocoenoses and ecosystems [1]. It is essentially a science of complexity (Box 1) [2,3]. Since the origin of life, whether on Earth or elsewhere [4], biological systems have constantly evolved to adapt to their environment [5]. Species have emerged or died out at gradual or sometimes more sudden rates, apparent balances punctuated by periods when changes occur relatively quickly [6]. The variety of species is not only perceptible from a morphological or anatomical point of view but is also reflected in many life history traits (size, growth, lifespan) that influence reproduction and individual survival [7]. The diversity exhibited by the living world is almost inexhaustible, and evolutionary tinkering may obscure a form of intelligibility that researchers aim to discover [8,9]. It is a subtle mix of chance and necessity [10]: chance because diversity finds its origin in the genetic variability maintained by mutations and intra- and inter-chromosomal mixing, and necessity because there are fundamental limits, whether physical, genetic, physiological or ecological.
Box 1. Complexity in ecology and the scientific approach we have adopted to consider it within the framework of METAL.
Complexity in biology
+ Innumerable actors and factors.
+ All elements are interconnected and interdependent.
+ Multiple actions and feedbacks at different organisational levels and spatio-temporal scales.
+ Nonlinearity (threshold effects, hysteresis).
+ Emergence of new properties that are difficult to predict from the properties of the parts.
How to deal with this complexity within the framework of METAL theory?
+ The system is complex but it can be simplified at certain organisational levels (consideration of emergent properties) and at some spatio-temporal scales (i.e., at the largest scales).
+ At some organisational levels and spatio-temporal scales, the laws influencing the arrangement of biodiversity are simple.
+ Non-linearity can be overcome (e.g., the concept of niche elegantly considers the non-linear responses of species to environmental fluctuations).
+ The use of ecological properties at the relevant organisational level and spatio-temporal scales enables one to unify the phenomena, patterns of variability and biological events that govern the arrangement of biodiversity.
+ Their unification gives a high level of coherence to the observed phenomena and events and improves their understanding and predictability.
The origin and evolution of biodiversity are now better known. Charles Darwin, and neo-Darwinism, laid the solid theoretical foundations [11-15]. However, there remains a fundamental question to be resolved: how are biodiversity and the species that compose it organised on our planet, and how do the abundance and number of species change in space and time [16]? These questions are fundamental because biodiversity strongly influences the functioning of ecosystems and thus regulates services such as atmospheric carbon dioxide sequestration, but also provisioning services, i.e., the exploitation of ecosystems [17-20]. Moreover, to understand how anthropogenic climate change will affect individuals, species and biocoenoses, the essential prerequisites are (i) to understand how these biological systems are naturally organised and (ii) to identify the cardinal factors and mechanisms responsible for alterations, in order to anticipate the modifications caused by environmental changes.
In this review, I present the macroecological theory on the arrangement of life (METAL), a theory that proposes that biodiversity is strongly influenced by the climatic and environmental regime in a deterministic manner (https://biodiversite.macroecologie.climat.cnrs.fr; accessed on 1 February 2023). This influence mainly occurs through the interactions between the ecological niche of species sensu Hutchinson (i.e., the range of a species' tolerance when several factors are considered simultaneously) and the climate and environment [17]. The niche-environment interaction is therefore a fundamental interaction in ecology that enables one to predict and unify (i) at a species level, local changes in abundance, species phenology and biogeographic range shifts, and (ii) at a community level, the arrangement of biodiversity in space and time as well as long-term community/ecosystem shifts, including regime shifts [21-30]. This theory offers a way to make testable ecological and biogeographical predictions to understand how life is organised and how it responds to global environmental changes [26]. More specifically, I show how METAL helps in understanding (i) why there are more species at low latitudes than at the poles, (ii) why the peak of biodiversity is located at mid-latitudes in the oceanic domain and at the equator in the terrestrial domain, and (iii) finally, why there are more terrestrial than marine species, despite the fact that biodiversity emerged in the oceans. METAL has not been tested on prokaryotes (Bacteria and Archaea) yet because the species concept is fuzzy in this group, being replaced by the concept of the operational taxonomic unit (i.e., taxa defined by molecular data analysis) [31,32]. Moreover, the ecological niche of prokaryotes can be more diverse and extreme, especially for Archaea [33,34], and their geographical ranges can be wide [35]. Therefore, all ecological principles examined in this review are only relevant for eukaryotes.
Patterns of Variability in Nature
For millennia, humans have detected recurring patterns of variability in nature or cycles [17,36-41]. The multitude of environments that our planet conceals forces clades to adapt to the local conditions, a process that rapidly fills the niche space [42]. Biogeographic studies have provided compelling evidence that some species are present only in tropical environments, while others are exclusively found in temperate or polar regions [41,43-46]. For example, Figure 1 shows that the spatial distribution of marine zooplankton (here copepod crustaceans) exhibits distinct patterns of variability [47]: some species are present in the cold Labrador Current (Calanus glacialis), others essentially along the European continental shelf (Candacia armata), in the waters of the north (Paraeuchaeta norvegica) or the south (Clausocalanus spp.) of the North Atlantic, or finally at the transition between these waters along the North Atlantic Current (Metridia lucens).
Figure 1. Maximum abundance values are in red and zero abundances are in dark blue. The absence of colour corresponds to an absence of sampling. Some copepods are present in the icy or cold waters of the North Atlantic Ocean (Pareuchaeta norvegica or Calanus glacialis). Others occur in subtropical waters (Clausocalanus spp., Neocalanus gracilis and Euchaeta marina). The Para-Pseudocalanus group is present in temperate waters, Metridia lucens at the limit between temperate and cold waters, and Candacia armata mainly south of the European continental slope. These examples show that the distribution of species is not random on a large scale and that there are therefore control mechanisms. Redrawn from Beaugrand and colleagues [47].
Bioclimatologists and ecologists have noted the existence of cycles where periods of high abundance alternate with periods of low abundance or even absence [37,40,48-50]. In temperate ecosystems (e.g., the North Sea), some species flourish in the spring; we speak of spring phenology. Others bloom in the summer; we then speak of summer phenology [49,50]. The presence of these recurrent patterns of variability in space or time suggests the existence of control mechanisms, whether autogenic or allogenic [50].
The Difficult Identification of Patterns in Ecology
It is more difficult than it seems to identify these patterns of variability in nature, and sometimes, especially on a small scale, it may seem that there are no rules governing the arrangement of biodiversity [17]. Take the example of the simulated distribution of individuals from a fictitious species in a hypothetical region (Figure 2).
Figure 2. (a) At a local scale (100 × 100 m), the presence of individuals of the same species (blue squares, 1 × 1 m) seems random. (b) On a more regional scale (19 × 19 km), the number of individuals is counted in each 100 × 100 m square. The density of individuals in the target region still seems random, although this density is between 2.4 and 3.5 (on a decimal logarithmic scale). (c) On a large scale (1000 × 1000 km), a pattern of variability is clearly observed and the abundance of the species is greater towards the centre of the geographical domain. The transition from small to large scale is called scaling.
If we identify the number of individuals in an imaginary geographical square of 100 × 100 m, the distribution of individuals in this square appears random because no pattern of variability is identifiable (Figure 2a; each blue 1 × 1 m square represents an individual). If we then count the number of individuals in each 100 × 100 m square of a broader 19 × 19 km region, the density still appears random (Figure 2b). Now imagine that we can examine the distribution of the number of individuals of this same species in a large region of 1000 × 1000 km (in Figure 2c there are 10,000 × 10,000 = 100 million squares of 100 × 100 m): we now see a pattern of variability emerging. The species is more abundant towards the centre of the region (Figure 2c). Viewing the pattern from higher up, that is to say, moving from a local to a large spatial scale, allows us to identify precisely the contours of the spatial distribution of this fictitious species (Box 1). An ecologist, who often studies biological systems on a small scale, may conclude that there are no detectable patterns of variability and that, therefore, the distribution of individuals is random and does not obey any rules (Box 1). On the other hand, a biogeographer may conclude that there is a structure, which implies the existence of underlying control mechanisms. The problem arises if researchers from these different disciplines extrapolate their results from the small to the large scale or inversely. In such a case, an ecologist may conclude that there are no principles governing the spatial distribution of a species, and a biogeographer may establish predictions that are likely to be challenged on smaller scales. We touch here on the burning problem of scaling, at the origin of so much controversy [51-53]. Referring to the analogy of an ecological theatre made by Hutchinson [54], Wiens [53] said, "to understand the drama, we must view it on the appropriate scale". Note that this phenomenon is also observed along the time dimension. It is therefore essential in the construction of any theory to specify its limits according to the spatio-temporal scales one considers [55].
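The scaling effect described above can be reproduced with a short simulation. The sketch below, written under illustrative assumptions (a Gaussian large-scale abundance pattern with Poisson sampling noise, and grid sizes loosely mirroring Figure 2), shows that a local window looks like noise while coarse-graining reveals the central peak.

```python
# Minimal sketch of the scaling effect: the same spatial pattern looks random
# locally but structured regionally. Grid sizes and the Gaussian "centre of
# the domain" pattern are illustrative assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)

n = 1000                                   # 1000 x 1000 km domain, 1 cell = 1 km
y, x = np.mgrid[0:n, 0:n]
centre = (n - 1) / 2
# Large-scale pattern: expected abundance peaks toward the centre of the domain.
mean_density = 3000 * np.exp(-((x - centre) ** 2 + (y - centre) ** 2) / (2 * 250**2))
counts = rng.poisson(mean_density)         # local (small-scale) stochasticity

local = counts[:10, :10]                   # a 10 x 10 km window: looks like noise
print("local 10 x 10 km window (counts per km^2):")
print(local)

# Coarse-grain into 100 x 100 km squares: the central peak now emerges.
coarse = counts.reshape(10, 100, 10, 100).sum(axis=(1, 3))
print("coarse-grained totals (log10):")
print(np.round(np.log10(coarse + 1), 1))
```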
Towards a Better Understanding of Principles of Biodiversity Organisation and Climate Change Biology
METAL (macroecological theory on the arrangement of life) has recently been proposed to connect a large number of phenomena observed in biogeography (spatial distribution of species, communities and biodiversity), ecology (phenology, gradual or abrupt changes in communities or biodiversity), paleoecology (past distribution of species, communities and biodiversity) and bioclimatology (biogeographic and phenological shifts, temporal changes in abundance and biodiversity at local or regional scales) [17,21,22,24,29,30,49,50,56] (https://biodiversite.macroecologie.climat.cnrs.fr; accessed on 1 February 2023).
The unification of these phenomena is obtained by using the concept of the ecological niche of Hutchinson [57,58], which constitutes the elementary macroscopic brick of the theory, giving meaning and coherence to all the phenomena, patterns of variability or events cited above (Figure 3). METAL considers the fundamental niche (i.e., the niche without the influence of species interactions), and current models do not yet explicitly include the influence of biotic interactions [25,29]. The niche can be divided into five components: (i) climatic, (ii) physico-chemical, (iii) substrate, and trophic, the latter with (iv) dietary and (v) resource concentration components [23,25,49]. It integrates all environmental conditions under which a species' individuals can ensure their homeostasis, grow and reproduce. A species' niche therefore includes phenotypic plasticity, encompassing polyphenism and the reaction norm (i.e., a species niche integrates the niches of all individuals of that species).
Therefore, the niche-environment interaction is considered to be a fundamental interaction in biology that explains and unifies a large number of patterns observed in ecology, biogeography and climate change biology [26]. This occurs because the genome controls many processes at infraspecific organisational levels (e.g., molecular processes) that affect physiological and morphological traits, which in turn influence individual performance and fitness and finally determine the ecological niche of a species [50] (Figure 3). The use of the niche makes it possible (i) to implicitly consider these infraspecific processes without having to model them and (ii) to integrate the emergence of new biological properties that are impossible to anticipate from the properties of the individual parts when crossing one or several organisational levels (here, from the molecular to the specific level) (Box 1) [59,60].
Figure 3. The ecological niche of a species is quantified by simultaneously considering all the ecological factors that influence its abundance. The concept is therefore multidimensional. The ecological optimum represents the values of the ecological parameters for which the maximum abundance is observed. Ecological amplitude is the degree of ecological valence that a species tolerates; put simply, it is the width of the ecological niche. The use of the ecological niche within METAL makes it possible to integrate molecular, physiological, biological and behavioural processes controlled in part by the genome and the environment. Such processes are impossible to model for all living species on our planet using a reductionist approach. Moreover, the concept of niche makes it possible to consider the emergence of new properties at a specific organisational level. The niche-environment (including climatic) interaction makes it possible to explain, unify and predict a large number of patterns observed in ecology, paleoecology, biogeography and climate change biology. The niche-environment interaction affects the species genome through processes involved in natural selection.
Also known as species distribution models (SDMs) or bioclimatic envelope models [61][62][63], METAL integrates ecological niche models (ENMs) in its framework. ENMs primarily focus on the realised niche, which is based on past or contemporary spatial distribution and some key environmental (including climatic) variables. They then use the realised niche to project the likely distribution of a species in the past, present or future. ENMs have been extensively applied to project future species spatial distributions in the context of global climate change [61,[64][65][66][67][68][69]. METAL provides a robust scientific baseline for ENMs and shows that this niche approach can be extended to explain many different phenomena at different organisational levels and spatio-temporal scales [22,70].
The niche-environment interaction is crucial for explaining, unifying and predicting a very large number of phenomena, patterns of variability or biological events observed in nature (Figure 4a) [26]. At an individual organisational level, the niche-environment interaction controls a large number of physiological and behavioural responses, such as the phenomena of thermotaxis and chemotaxis (Figure 4b) [17]. At a population organisational level, the niche-environment interaction controls species' phenology and long-term changes in local abundance, including the arrival or extirpation of individuals of a species in a given area (Figure 4c) [49,50]. At a specific organisational level, the niche-environment interaction controls the distributional range of a species and even its extinction (Figure 4d) [23]. At a community level, the niche-environment interaction helps in understanding how communities are formed and modified, thus providing a theoretical basis for synecology and phytosociology (Figure 4e) [49,50]. The theory explains the seasonal succession observed in the marine planktonic environment, the gradual or abrupt modifications in communities, the biogeographical changes of biocoenoses (or assemblages), their contractions or expansions, and their eventual disappearance (Figure 4e) [23,28]. We can thus explain and anticipate major biological changes but also understand how biodiversity is organised and how it can be altered in the context of climate change [27,56].
Note, however, that human activities now influence a large number of processes that can interfere with the niche-environment interaction. For example, the extinction of a species or its long-term changes can be explained by anthropogenic pressures such as fishing, land use, hunting and pollution, pressures that are not further mentioned here [25,[71][72][73]. A METAL model has recently considered fishing pressure and the niche together to explain the long-term changes in cod spawning stock biomass in the North Sea since the beginning of the 1960s [25].
The use of METAL in the context of climate change biology has been presented elsewhere [26]. In this review, I show how the niche-environment interaction generates a mathematical constraint on the large-scale arrangement of biodiversity and explains why there are more species on land than in the marine realm. To make progress on these questions, the scientific community continues to collect and inventory species and to study their biology [74,75]. A study suggested that the numbers of terrestrial and marine species could be 8,740,000 and 2,210,000, respectively [76]. Because the scientific team estimated that 1,233,500 species had been inventoried in the terrestrial environment and 193,756 in the marine environment (bottom and surface), this suggests that between 9% (marine) and 14% (terrestrial) of species have been named and described so far. (Note, however, that there exist many estimations in the scientific literature [75,77-79].) Meanwhile, ecologists continue to investigate the multiple interactions of these species with the environment, including the climate, but also biotic interactions, an essential prerequisite for understanding their spatial distributions (biogeography), their temporal patterns of reproduction (phenology) and their fluctuations from seasonal to centennial and millennial time scales, as well as changes occurring on a geological time scale (ecology, palaeoecology and bioclimatology) [56,80-86]. With such poor fundamental knowledge of species' biology, how can we understand how factors and processes affect large-scale biodiversity patterns and design models to reconstruct them?
A Brief Overview of the Main Hypotheses or Theories That Have Attempted to Explain Large-Scale Biodiversity Patterns
Why do some regions of the globe have more species than others? Among such scientific questions as the origin of life, the biological basis of consciousness or the composition of the universe, this question was cited as one of the 25 most important enigmas by the American journal Science in 2005 [16,87]. Indeed, for most taxonomic groups, it has been noticed that warm regions contain a higher number of species than polar regions [41,88-92]. Biogeographers generally speak of latitudinal biodiversity gradients to describe the large-scale biodiversity patterns observed in nature. The plural is important because the gradient may differ from one taxonomic group to another and from one domain to another (e.g., terrestrial and marine) [41]. For example, a maximum is obtained at the equator for a large number of terrestrial taxonomic groups, while it is rather subtropical for most oceanic taxonomic groups [41,88]. Although the existence of this biogeographical pattern has been known since Alexander von Humboldt's travels in Central America in 1807 and Charles Darwin's return from the second voyage of HMS Beagle in 1836, and although many hypotheses have been formulated over the decades, no consensus has been reached [93-101].
What causes the latitudinal gradient in biodiversity, whether on land or in the sea, has been a topic of debate for decades, and more than 20 hypotheses or theories have been proposed [41,101-109]. It is beyond the scope of the present paper to review and discuss all of them; below I only briefly review the main ones. While some authors have propounded that biodiversity gradients are related to the larger area of the tropical belts [96,110], others have proposed null models of biodiversity, such as the neutral theory of biodiversity and biogeography [99] and the mid-domain effect (MDE) [111,112]. Moreover, it has been suggested that time is an important factor because speciation needs time to operate [90,113-117]. The tropics may assemble more species over a longer time period because they are more climatically stable than higher latitudes [95], and studies have provided evidence that the tropics are both a species cradle (higher origination rates) and a museum (greater long-term climatic stability) [118,119]; see Vasconcelos and colleagues [120], however. Some studies have suggested that richer taxa have quicker diversification rates [121], and the metabolic theory of ecology predicts that the molecular clock is affected by body mass and temperature through metabolism [122].
Another popular hypothesis invokes the positive role of energy on biodiversity [95,123,124]. The energy hypothesis is frequently divided into two [123]: (i) exosomatic energy, whereby climatic factors such as temperature, precipitation and photosynthetically active radiation positively affect biodiversity, and (ii) endosomatic energy, the level of energy contained in the biomass that affects individuals and therefore the number of species. The latter hypothesis may be tested by using chlorophyll concentration or primary production [17]. Climate stability has also been invoked to explain the higher biodiversity in the tropics [125], along with the magnitude, severity and frequency of environmental perturbations, which are thought to limit species richness in temperate and polar regions [126,127]. In space, environmental heterogeneity promotes higher biodiversity [128,129]; for example, island species richness is positively correlated with habitat diversity [129]. The niche-assembly theory posits that there is more species richness in the tropics because there are more ecological niches, the niche being defined in terms of resources [130]. Some hypotheses have invoked biotic interaction as a cause of speciation and therefore of high species richness [131]. For example, Emerson and Kolm have provided evidence that the proportion of endemic species on an island covaries positively with biodiversity, suggesting that species richness increases speciation [17,131-133]; see, however, [134,135]. The argument seems tautological to some authors in terms of the search for the primary cause of these large-scale biodiversity patterns [17,95].
Perhaps the most compelling hypotheses are those that invoke an environmental control of biodiversity, such as environmental stability or energy availability [88,136,137]. Climatic hypotheses have been frequently proposed because large-scale biodiversity patterns correlate well with environmental parameters [88,137]. Among these hypotheses, it has been suggested that global climate change may have shaped the large-scale patterns of biodiversity prevailing on Earth today because most clades originated in warm habitats, as temperatures have been predominantly warm during Earth's history [138]. This hypothesis is known as tropical niche conservatism (TNC) [139]. Temperature has often been suggested to explain large-scale patterns in the distribution of marine organisms [88,140,141]. However, the exact mechanisms (e.g., the metabolic theory of ecology [98]) by which this parameter may influence large-scale biodiversity patterns remain uncertain [100-102]. Finally, many authors have also suggested that many causes or factors interact to shape large-scale biodiversity patterns [30,142,143].
Modelling Biodiversity in METAL
Understanding the spatio-temporal arrangement of biodiversity on a large scale requires the development of numerical models in which biological, environmental and climatic knowledge is put into equations [27,29,30,100]. In the context of the application of METAL, the fundamental bases of the biodiversity model are simple [100]. A large number of fictitious species is generated. Each fictitious species (hereafter called a pseudospecies) has unique physiological preferences that define its ecological (fundamental) niche, that is to say, its responses to climatic and environmental variability [23]. We can initially consider a simple niche, taking only the bioclimatic dimensions temperature and water availability (here precipitation). Temperature is an essential factor controlling the physiology of all species living on our planet, and precipitation is a proxy for water availability, a variable just as important as temperature for terrestrial species. These climatic dimensions are fundamental, and many studies have underlined their importance [46,88,140]. Figure 5 shows an example with two marine pseudospecies, one being more eurythermal (i.e., tolerating a greater range of thermal variation) and the other more stenothermal (i.e., tolerating a smaller range of thermal variation). In this example, we see that the stenothermal pseudospecies is characterised by a more limited range and lower abundance than the more eurythermal pseudospecies [21,23].
Figure 5. Idealised relationship between the ecological niche of a marine species and its spatial distribution. In this example, the ecological niche is a thermal niche with a Gaussian distribution characterised by two parameters: the optimum temperature and the thermal amplitude (a parameter close to the standard deviation). The optimum temperature (Topt) is 15 °C for the two fictitious niches (a,c). The thermal amplitude (ts) is higher for (a) than for (c). The spatial distribution is wider and the abundance of the species higher when the species has a thermal niche with a large thermal amplitude (b,d). In reality, the niche of a species is multidimensional. From Beaugrand and colleagues [23].
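The Gaussian thermal niche of Figure 5 can be written down directly. The sketch below assumes the common form A(T) = exp(−(T − Topt)²/(2ts²)), with an optimum of 15 °C as in the caption; the two amplitude values and the crude temperature gradient are illustrative, not taken from the paper.

```python
# Minimal sketch of a Gaussian thermal niche scaled to [0, 1].
# Topt = 15 C follows the Figure 5 caption; amplitudes are illustrative.
import numpy as np

def niche_suitability(temperature, t_opt=15.0, t_s=4.0):
    """Expected relative abundance (0-1) at a given temperature (degrees C)."""
    return np.exp(-((temperature - t_opt) ** 2) / (2.0 * t_s**2))

sst = np.linspace(-1.8, 30.0, 9)           # a crude latitudinal SST gradient
eurytherm  = niche_suitability(sst, t_opt=15.0, t_s=6.0)   # wide amplitude
stenotherm = niche_suitability(sst, t_opt=15.0, t_s=2.0)   # narrow amplitude

for t, a_eur, a_sten in zip(sst, eurytherm, stenotherm):
    print(f"SST {t:5.1f} C  eurytherm {a_eur:.2f}  stenotherm {a_sten:.2f}")
# The stenotherm is abundant over a much narrower band of temperatures,
# hence its smaller range and lower mean abundance, as in Figure 5.
```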
We can thus create a multitude of pseudospecies by varying the optimum and the ecological amplitude (i.e., niche breadth) of each niche dimension. Figure 6 shows the creation of marine pseudospecies from a simple Gaussian thermal niche [21,23]. Note that different types of niches can be used: from rectangular to trapezoidal [25,100] and from logistic to beta distributions [25,27], symmetrical or asymmetrical [25], parametric or nonparametric [23]. Moreover, the niche can be multidimensional [144], including nutrients, solar radiation or mixed-layer depth for phytoplankton, bathymetry and sediment types for fish, and soil pH and composition for plants [25,65,66,144-146]. So far, most METAL simulations have been based on niches that vary between 0 (i.e., absence of a species for a given environmental regime) and 1 (i.e., highest abundance, or presence in the case of a rectangular niche). Therefore, all species can reach the same level of maximum abundance. Although this assumption may hold for a clade composed of species of similar size, this is not so for a group that exhibits large size variability (e.g., mammals) [147-149]. Note, however, that this assumption does not affect biodiversity when the selected indicator is species richness (see below).
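Among the alternative niche shapes mentioned above, the trapezoidal niche is easy to sketch: suitability is 1 on a plateau and declines linearly to 0 at the tolerance limits. The four breakpoints below are illustrative assumptions, not values from the paper.

```python
# Sketch of a trapezoidal niche: 0 outside [t_min, t_max], 1 on [t_low, t_high],
# linear ramps in between. Breakpoints are illustrative.
import numpy as np

def trapezoidal_niche(t, t_min=5.0, t_low=10.0, t_high=20.0, t_max=25.0):
    """Suitability in [0, 1] as a function of temperature (degrees C)."""
    t = np.asarray(t, dtype=float)
    rising = np.clip((t - t_min) / (t_low - t_min), 0.0, 1.0)
    falling = np.clip((t_max - t) / (t_max - t_high), 0.0, 1.0)
    return np.minimum(rising, falling)

print(trapezoidal_niche([0, 7.5, 15, 22.5, 30]))   # [0.  0.5 1.  0.5 0. ]
```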
Different thermal optima and amplitudes are used [21,23]. In this example, when distributional ranges originating from one thermal niche are spatially separated, it is considered that they represent different species; therefore, one niche can give several species, in agreement with Buffon's law, also known as the first principle of biogeography [41]. We see that thermal niches with lower thermal amplitudes give more species, although these exhibit smaller distributional ranges (Figure 6, left maps vs. right maps). Figure 5 shows that there is a relationship between the average abundance of a species and its area of distribution, a relationship already demonstrated empirically by Brown [150]. We extended this relationship by indicating that there is a positive link between the ecological amplitude of a species, its average abundance and its distribution area [23] (Figures 5 and 6). These relationships hold for species of the same size [147,148] and trophic guild.
Figure 6. Different types of spatial distribution of marine species generated from thermal niches by varying the thermal optimum and amplitude. The different colours on the map represent different species generated from the same thermal niche. The same niche can give rise to several species if and only if individuals from different species cannot meet (allopatric speciation). Niches with a low thermal amplitude generate more species (e.g., (a,b) and (e,f)). The current location of continents at the equator and in the northern latitudes allows more species to form by allopatric speciation. Methods from Beaugrand and colleagues [29].
Examples from Figure 6 show that a niche can lead to more pseudospecies in the Northern than in the Southern Hemisphere (Figure 6b-d). This is due to the current location of continents, which act as a barrier against gene flow, triggering more allopatric speciation in the Northern than the Southern Hemisphere (towards high latitudes). When the thermal amplitude is larger, the pseudospecies are more eurygraph and a single niche leads to fewer pseudospecies, e.g., only one per hemisphere in Figure 6a. Moreover, the current configuration (i.e., south to north configuration) of the continents also enables more pseudospecies to emerge in the tropics (Figure 6g,h), especially when the pseudospecies are stenoecious and therefore stenograph (Figure 6e,f). Note that parapatric and sympatric speciations are not accounted for in this example. Allopatric speciation is thought to be a widespread mode of speciation in the marine environment, although there is growing evidence that other modes of speciation also play a role [151]. Parapatric speciation is thought to be possible in the ocean [152-154]. Clinal parapatric speciation has been suggested for salps and some benthic species [151,155]. Sympatric speciation might also be frequent for marine invertebrates [156].
To reproduce the large-scale arrangement of biodiversity, we can build a model that first creates millions of niches, which then allow pseudospecies to establish themselves in a given region as long as environmental fluctuations are suitable [17,29,30,100]. The principle of the model is simple. It starts by creating a large number of niches whose optima and amplitudes vary with respect to temperature only in the marine realm, and with respect to both temperature and precipitation in the terrestrial realm. Many niches (i.e., with all possible optima and amplitudes), which can also overlap, are created (i) for temperature between −1.8 °C and 44 °C in both realms and (ii) for precipitation between 0 and 3000 mm in the terrestrial realm only. The full procedure is described in [29]. At the end of the procedure, there are a maximum of 101,397 and 94,299,210 niches in the marine and terrestrial realms, respectively. About 25% and 1% of these niches are chosen randomly to perform the simulations in the marine and terrestrial realms, respectively [29]. The use of fictitious niches and species is especially useful, since we have only inventoried 9% of marine and 14% of terrestrial biodiversity and we know little about the biology of most species (see Section 5.1). A niche can give rise to several pseudospecies if individuals from different regions never come into contact (e.g., Figure 6f) [29]. Pseudospecies then gradually colonise the terrestrial and marine environments (surface and bottom). During the simulations, the species organize themselves into communities, and the biodiversity (more precisely, the number of species in a given region) is reproduced.
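The niche-generation step can be sketched as follows. The grid resolutions are hypothetical, chosen only so that the one-dimensional count lands near the order of magnitude reported above; they are not the values used in [29].

```python
import itertools
import random

# Hypothetical discretisation of thermal optima and breadths
t_optima = [round(-1.8 + 0.1 * i, 1) for i in range(459)]    # -1.8 ... 44.0 deg C
t_breadths = [round(0.1 * j, 1) for j in range(1, 222)]      # 0.1 ... 22.1 deg C

marine_niches = list(itertools.product(t_optima, t_breadths))
print(len(marine_niches))   # 101,439 one-dimensional (thermal) niches

# The terrestrial realm adds a precipitation dimension (0-3000 mm); crossing
# it with the thermal grid multiplies the number of candidate niches.
p_optima = range(0, 3001, 10)      # hypothetical precipitation optima (mm)
p_breadths = range(10, 3001, 10)   # hypothetical precipitation breadths (mm)
n_terrestrial = len(marine_niches) * len(p_optima) * len(p_breadths)

# Only a random subset of niches is used in the simulations (25%/1% in [29])
marine_subset = random.sample(marine_niches, k=len(marine_niches) // 4)
```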
Beaugrand and colleagues [29] used this approach to model the biodiversity of the terrestrial and marine realms, including the surface and the bottom of neritic and oceanic regions (Figure 7). These numerical experiments (or simulations) correctly reconstruct large-scale biodiversity patterns as they are observed nowadays for a large number of taxonomic groups in the terrestrial and marine environment (e.g., crustaceans, fish, cetaceans, plants, birds) [29]. The biodiversity maps for the ocean floors (Figure 7c,f) remain provisional, as few observations have been made to date to confirm these predictions [29]. The model also reproduces past biodiversity patterns well, for the Last Glacial Maximum and the mid-Pliocene (e.g., foraminifera), as well as the Ordovician (e.g., acritarchs) [27,56].
The Great Chessboard of Life

The reconstruction of large-scale biodiversity patterns observed in nature is possible because the niche-climate interaction generates a mathematical constraint on the maximum number of species that can establish in a given region [30]. We have named this constraint the great chessboard of life (Figure 8) [30]. This particular chessboard has a number of geographical squares (i.e., wide squares in Figure 8) that correspond to different regions (marine or terrestrial). Note that these geographical squares are limited on the figure (i.e., 6 × 8 = 48 squares), but their number should be higher to correctly represent the variety of environments, e.g., one for every degree of latitude and longitude (i.e., 180 latitudes × 360 longitudes = 64,800 squares). Each square on the chessboard is composed of sub-squares (i.e., the narrow squares in Figure 8), which represent the number of climatic niches that determines the maximum number of species that can colonise a square (i.e., a region or a wide square). Only one species can establish in a sub-square (i.e., a climatic niche) of the chessboard, according to the competitive exclusion principle of Gause [157]; thereby, the more sub-squares (L) in a given region, the higher the maximum number of species that an area may contain (Figure 8). S is the number of species that a square (i.e., an area) actually contains. Therefore, L represents a fundamental limit (what I call here a mathematical constraint) on species richness, even if the actual number can still vary according to other processes (see below). The different pieces on the chessboard (e.g., king, queen, pawn) symbolize the different biological properties of the species (e.g., their differences in terms of life history traits, such as reproduction). Q represents niche saturation, with Q = (S/L) × 100. A saturation of 100% means that all niches or potential species that a square may contain are occupied. Biological (degree of clade origination) and climatic (repeated Pleistocene glaciations) causes influence the percentage of saturation of the niches in each geographical square, so that there remains a degree of freedom on the number of species present on the great chessboard of life [30].
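Niche saturation then reduces to a one-line computation; the values of S and L below are invented solely to illustrate the polar/tropical contrast discussed in the text.

```python
def niche_saturation(n_species, n_niches):
    """Q = (S / L) * 100: the percentage of a region's climatic niches (L)
    actually occupied by species (S)."""
    return 100.0 * n_species / n_niches

print(niche_saturation(2, 2))       # 100.0 -> a polar square, fully saturated
print(niche_saturation(300, 900))   # ~33.3 -> a tropical square, undersaturated
```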
Figure 8. The great chessboard of life that illustrates the mathematical influence on current large-scale biodiversity patterns in the marine realm. Each square on the chessboard, which represents a region, is composed of sub-squares, which represent the number of climatic niches that determines the maximum number of species that can colonise a square (i.e., a region). The different pieces on the chessboard (e.g., king, queen, pawn) symbolize the different biological properties of the species (e.g., their differences in terms of life history traits, such as reproduction). Note that it is also applicable on land. From Beaugrand and colleagues [30].
The number of maximum niches fixes an upper limit on the number of species that can colonise a given region by speciation or immigration [30]. Few species can colonise areas located towards the minimum (e.g., −1.8 °C) and maximum limits (e.g., 44 °C) of temperature and precipitation (e.g., 0 and 3000 mm for precipitation) [29]. The choice of these minimum and maximum values in the METAL models is therefore important because it affects the results [100]. In marine polar areas, corresponding nowadays to the lowest limit of temperature (close to −1.8 °C), the number of species that can establish is fundamentally limited by L, since two species having the same niche cannot coexist at the same time and in the same place [157]. On the chessboard (Figure 8), the number of sub-squares is two in the wide square between the polar front and North Pole.
At low latitudes, since the theoretical upper limits are not observed nowadays (i.e., the upper limit for temperature is frequently fixed to 44 °C; for a justification of the threshold, see [100]), terrestrial biodiversity is maximum at the equator and marine biodiversity in subtropical regions (Figure 7) [27,29,56,100]. (I will come back to this point in Section 7). The great chessboard of life therefore suggests that there is more species richness in regions where there are more ecological niches (sensu Hutchinson [58]), providing compelling evidence for the niche-assembly hypothesis, although the niche in this hypothesis has been usually defined in terms of resources [130] (see Section 5.1).
The biogeographical constraints (i.e., low and high numbers of niches in high and low latitudes, respectively) imposed by the chessboard on biodiversity may be quickly detectable because clade diversification (not implemented in this METAL model) takes place relatively rapidly on a geological time scale [158]. Moreover, in the marine environment, taxa such as plankton have high dispersal capabilities [44,159,160] and may rapidly conform to the chessboard. Niche saturation (i.e., the number of observed species relative to the theoretical number of available niches) may help measure the degree of conformity of the different taxonomic groups to the chessboard [30]. Although it is commonly assumed that niche saturation increases towards the equator (i) because evolutionary rates are thought to increase from cold to warm regions [122,161] and (ii) because of the presence of strong climate-induced environmental perturbations in extra-tropical regions that limit species richness [116,162], niche saturation is frequently highest towards the poles [30]. These apparently counterintuitive results suggest that the few sub-squares (i.e., the climatic niches) available in each region (i.e., wide square) of the chessboard are frequently occupied in polar regions (Figure 8). This means that low polar biodiversity should not always be attributed to low diversification rates (origination minus extinction) [114,115], but rather to a smaller maximum number of species' niches (i.e., the parameter L on the chessboard in Figure 8) at saturation that locally limits biodiversity. This low number of niches over polar regions, and inversely the high number of niches equatorwards, originating from the niche-environment interaction, represent a mathematical constraint on the arrangement of biodiversity. Although there remains a great degree of freedom on the type and number of species that can establish in a region (e.g., origination and diversification of a clade), this number cannot exceed a threshold set by the niche-climate interaction.
In the marine realm, large-scale patterns of niche saturation differ among taxonomic groups, which suggests the existence of a particular chessboard for each group that might originate from taxon-specific diversification history [163,164]. For example, pinnipeds, which exhibit an inverted latitudinal biodiversity gradient, originated from Arctoid carnivores 25-27 Ma in the cold regions of the North Pacific [165]. Place of origin and time of emergence may therefore blur large-scale biodiversity patterns imposed by the great chessboard of life. Moreover, the life history traits of each group make the great chessboard of life specific to a given taxonomic group, which sometimes explains the lack of universality of large-scale biodiversity patterns (Figure 8) [30].
The rate of diversification remains an important parameter because it determines the degree of niche occupation in a given geographical cell. Indeed, when polar regions are excluded, niche saturation of most groups (e.g., plankton and fish), except mammals, is higher over permanently stratified regions [147]. Moreover, many clades should exhibit latitudinal biodiversity gradients towards the equator because their probability of emergence should be higher in the tropics, where there are more available niches, and palaeontological data have provided compelling evidence of greater rates of origination for tropical clades [166]: the hypothesis of tropical niche conservatism [139].
Beaugrand and colleagues suggest that the total number of species on the chessboard diminishes with organismal complexity [30], which can be explained by basic ecological and evolutionary processes. Endosomatic energy decreases from primary producers to top predators as a consequence of the second law of thermodynamics, decreasing the number of individuals and thereby species richness and niche saturation from producers to higher trophic levels [17,41]. Positive relationships between numbers of individuals and species richness have often been proposed to explain large-scale biodiversity patterns, e.g., the productivity theory [167], the area hypothesis [168], and the unified neutral theory of biodiversity and biogeography [99]. In addition to diminishing the number of individuals [169], a larger body also increases generation time [147], which slows down evolution [122,169]. Therefore, the likelihood that a taxon exhibits a large-scale biodiversity pattern different from the one imposed by the great chessboard of life is greater when its mean niche saturation is lower. This is especially the case for marine mammals. Pinnipeds, which have a low degree of niche saturation (<1% [30]), exhibit a pattern that does not conform to the great chessboard of life [30]. Note that it is possible to run specific simulations to account for the biology of a particular clade or taxonomic group, such as euphausiids, fish, coral reefs, or mangroves [29,145].
Differences in Latitudinal Biodiversity Gradients between the Terrestrial and the Marine Domains
A biodiversity peak is observed at the equator in the terrestrial realm and around the subtropical regions in the marine realm (Figure 7a-d). This distinction is related to the differential influence of atmospheric pressure fields in the marine and terrestrial realms [5,17,170,171]. Indeed, the high-pressure centres (i.e., the large pressure highs linked to the descending branches of the Hadley and Ferrel cells) provide climatic stability and heat, which increase surface biodiversity in the marine environment (Figure 9) [17]. However, above the continents, the high-pressure centres strongly limit precipitation, and biodiversity is therefore very low due to the lack of water availability [5,17,170,171]. Biodiversity reaches its maximum values in regions where precipitation is regular (towards the equator) and decreases when moving away from the influence of the intertropical convergence zone (ITCZ) [5], i.e., where rainfall occurs only a few weeks a year (monsoon areas). Finally, the cold ocean floors do not show a typical biodiversity gradient but a very homogeneous biodiversity pattern, except in high-latitude regions, where biodiversity decreases slightly, and over seamounts, where it is higher (Figure 7e,f).
The chessboard of life reorganizes when climate changes, which makes it dynamic from small to large temporal scales (i.e., geological scales) [30]. Indeed, large-scale biodiversity patterns are not stable over time [172], and METAL suggests that they were sometimes very different from those currently observed [56]. For example, during a cold period at the end of the Ordovician (ca. 445 million years ago), a very significant contrast probably existed between tropical biodiversity and the biodiversity of high-latitude regions (Figure 10a). Conversely, during the warm period of Stage 4 of the Cambrian (510 million years ago), the latitudinal gradient of biodiversity was probably reversed (Figure 10b). The current latitudinal gradient of biodiversity, characterised by a more or less regular increase in biodiversity from the poles to the equator, has therefore probably not always been observed since the appearance of eukaryotes on our planet [56].
Mannion and colleagues [172] also proposed that the current latitudinal gradient of biodiversity has not been a permanent feature through the Phanerozoic. They suggested that a biodiversity peak occurred during cold icehouse climatic regimes, whereas temperate peaks (or flattened gradients) were observed during warmer greenhouse regimes. In the context of current climate change, the use of METAL also suggests that the contrast between regions of low and high biodiversity may diminish towards the end of the century because a rise in temperature over permanently stratified regions (e.g., tropics and subtropics) reduces surface biodiversity, whereas an augmentation in temperature over temperate and polar regions increases biodiversity [27].
Why Are There More Terrestrial Than Marine Species?
At a high taxonomic level, the number of phyla of metazoans is higher in the ocean than on land, and this number is greater for the benthic than the pelagic realm [173]. In sum, 32 phyla are found in the sea and 21 are exclusively marine, whereas 12 are found on land, with only one being endemic to this realm (Onychophora) [173], while 27 phyla inhabit the benthos (with 10 endemic) and only 11 the pelagos (with one endemic, Loricifera). These estimates strongly suggest that diversification began in the sea, and probably in or close to the benthic realm [174]. However, at a species level, there are more terrestrial than marine species. Robert May assessed that ~85% of all species are terrestrial [173,175], Michael Benton ~75% [176], and more recent estimates suggest they may be closer to ~80% [76]. About 77% of animal species live on land, the remainder being found in freshwater (11%) and in the marine environment (12%), and 93% of plants live on land, 5% being freshwater and 2% marine [177]. Why are there more terrestrial than marine species? This conundrum is all the more incomprehensible, since marine biodiversity appeared long before terrestrial species [176,178]. Since more time has passed since the emergence of life in the oceans, the higher number of terrestrial species over marine species seems to be a counterintuitive observation.
METAL reproduces the difference in biodiversity observed between the marine and terrestrial domains well [29]. Although results depended upon the choice of the total number of niches, modelled biodiversity scaled to catalogued (and estimated) species gave 1,111,186 (8,825,091) for the terrestrial domain and 316,069 (2,242,908) for the marine domain. These estimates are close to those given in Section 4 [76]. Moreover, an estimate of the deep-sea benthic biodiversity (894,881 benthic species in areas below 2000 m and 256,278 in areas between 2000 m and 200 m) is close to what has been calculated in some studies [179,180]. Species density is expected to be higher over the shelf (200-2000 m) than the deep sea, but because the latter realm is larger (301 vs. 36 million km²), there are more species in the deep-sea benthic realm [29].
Two mechanisms may explain why METAL reconstructs the difference between land and sea biodiversity well [29]. Firstly, Beaugrand and colleagues [29] suggested that the difference in the number of climatic dimensions between terrestrial and marine environments is fundamental. Water is evidently present everywhere in the ocean, which is not the case in the terrestrial environment. The additional discriminating climatic dimension in the terrestrial environment (water availability) arithmetically increases the number of climatic niches and thus the number of species that can establish in the terrestrial environment [29]. The addition of one climatic dimension increases the number of potential niches by ~100 [29].
Secondly, but to a lesser extent, Beaugrand and colleagues [29] suggested that the addition of a supplementary climatic dimension, combined with more pronounced geographical variations in the terrestrial environment (i.e., an increase in habitat heterogeneity), further fragments the geographical range of a species [181], increasing the possibility of allopatric speciation, i.e., the creation of species by prolonged or permanent geographic isolation of populations [29,41,182].
To conclude this part, there are more terrestrial than marine species because there are more available climatic niches on land. This additional dimension considerably inflates the maximum number of niches, and thereby of species, on the great chessboard of life (i.e., parameter L in Figure 8). The addition of the water availability dimension in the terrestrial realm (in addition to temperature) also fragments species' spatial distributions and increases the effect of landscape heterogeneity and the possibility of allopatric speciation [29]. In the ocean, the seascape is more uniform because there is only a single climatic dimension (temperature); see, however, Ref. [181] for other important environmental dimensions. This influence is more prominent in the pelagic than the benthic environment, which probably explains why there are more benthic than pelagic species [29]. The influence of seascape heterogeneity strongly affects local biodiversity over seamounts and shelves [183]. Therefore, local biodiversity should be higher over these areas, including heterogeneous shallow ones [29].
Conclusions
A central objective of biology and its sub-disciplines (e.g., biogeography, ecology) is to reveal the laws or general principles that govern the arrangement of life, but the sources of variations and exceptions seem inexhaustible. However, simple laws have been discovered in other areas of science, such as physics. Galileo Galilei wrote "Philosophy [nature] is written in that great book which ever is before our eyes-I mean the universe-but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written. The book is written in mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth". We show here that the niche-environment interaction is fundamental because it controls a large number of phenomena, patterns of variability, and biological events. In this review, I only show how METAL can help in understanding the arrangement of biodiversity, but the theory also explains other phenomena, such as spatial range, biogeographical shifts, phenology, annual plankton succession, long-term changes in species abundance, and community composition, gradual or abrupt [26].
Like the Italian scholar of the Renaissance Galileo Galilei, I propose that the great book of life is also written in mathematical language. In particular, the niche-environment interaction, controlled in part by the climatic regime, generates a mathematical constraint on the large-scale arrangement of biodiversity. We have named this constraint the great chessboard of life ( Figure 8). The mathematical effect is probably considerable such that an inverted latitudinal gradient is impossible under present climatic conditions for most taxonomic groups that presently exhibit increasing biodiversity from the poles to the equator. Moreover, a similar mathematical effect explains why there are more terrestrial than marine species, even if the number of phyla is higher in the marine than terrestrial realm. The establishment of a global theory of biodiversity, however, requires taking into account a large number of biological processes that also influence biodiversity (e.g., diversification rate and origination place of a clade), and Theodosius Dobzhansky was greatly inspired when he wrote his article 'Nothing in biology makes sense except in the light of evolution' [12]. In addition to other key ecological factors discussed in Section 5, METAL should therefore consider more explicitly some key evolutionary processes in the future. In the process of developing such a global theory of biodiversity, considering all the complexity of biological systems (Box 1), it is important to recognize that mathematical constraints caused by (i) the number of key dimensions that the niches include in the terrestrial and marine realms and (ii) the niche-environment interaction also control the arrangement of biodiversity.
Funding: This work has been partially financially supported by CNRS, Université du Littoral Côte d'Opale, the IFSEA Graduate School, the regional CPER programme IDEAL and the ANR ECO-BOOST.
Data Availability Statement: The main data used in this paper are available from the corresponding author on reasonable request.
Do All Types of Restorative Environments in the Urban Park Provide the Same Level of Benefits for Young Adults? A Field Experiment in Nanjing, China
Previous research has consistently shown that exposure to natural environments provides a variety of health benefits. The purpose of this study is to investigate the restorative benefits of non-virtual environments in field experiments as well as the differences in physiological and psychological effects between different types of restorative sites for stressed young adults. This controlled study design used the Positive and Negative Affect Schedule (PANAS), electroencephalogram (EEG), and heart rate variability (HRV) as psychophysiological indicators of individual affect and stress. We used a "stress imposition-greenspace recovery" pre- and post-test mode to simulate the most realistic short-term recovery experience in the park (Grassplot, Square, Forest, and Lakeside) under relatively free conditions. The experimental results show that all four natural spaces in the park have some degree of restorativeness. However, there were discernible differences in the restorative effects of the four selected natural sites. Lakeside and Forest demonstrated the most robust restorative properties in terms of both negative emotion reduction and positive emotion enhancement. In contrast, Square showed the weakest facilitation of recovery, while Grassplot promoted moderate resilience. Physiologically, we found that the EEG-α % of the Square was significantly lower than that of the Forest (t = −3.56, p = 0.015). This means that stressed young adults were much more relaxed in the forest than in the paved square. The study answers which types of natural spaces, when considered together, would provide greater restorative benefits to stressed young people participating in natural therapies in urban parks. The study's policy implications include the need to create more green natural spaces, especially forests with multiple plant levels, as well as to improve the restorativeness of urban parks through appropriate landscape space design.
Introduction
With urbanization, populations are growing and cities are becoming denser. High-density cities are crowded with gray land, so access to greenspace is becoming more limited. Environmental pollution and car-dependent lifestyles have become important factors in the changing spectrum of human health and illness [1]. Increases in chronic conditions such as respiratory diseases, cardiovascular diseases, and obesity have occurred [2,3]. In addition, the city dwellers' fast-paced lifestyle has increased stress and mental health problems. The balance between adapting to modern city life and staying healthy has become critical. Especially post-pandemic, people have become more aware of the importance of health. Research has consistently confirmed that exposure to natural environments generates various dimensions of health benefits [4,5]. Specifically, urban greenspaces (UGS), such as parks, are essential public resources for improving human health [6,7]. Researchers have continued to demonstrate the benefits of greenspaces for human wellbeing, including improving psychological state and mental health [8,9], reducing stress [10], increasing positive emotions [11], and eliminating anxiety [12,13]. In addition, greenspace enhances young people's cognitive functioning and wellbeing [14,15]. Therefore, such exposure can be regarded as a nature-based solution for promoting urban resilience and public health. Some studies identified three potential relationships between green space and health: harm reduction, restoring capacities, and capacity building [16] (e.g., encouraging physical activity and promoting social cohesion).
Previous research on the short-term health effects of nature exposure was mostly based on Stress Recovery Theory (SRT) and Attention Restoration Theory (ART). SRT states that the natural environment significantly impacts people's recovery from mental fatigue. Through contact with nature, the release of beneficial neurotransmitters helps diminish harmful thoughts and emotions, reducing the stress response [4,17,18]. ART emphasizes how plants and other natural features restore effective attention, allowing the remaining neurological and cognitive mechanisms to function [5,19]. Based on SRT and ART, researchers have investigated the emotional and state restoration effects of greenspace or environments with natural features under specific conditions. A series of studies have compared the restorative effects of natural and urban environments on psycho-physiological functioning. The research suggests that exposure to natural environments with greenspace and water (instead of urban ones) can improve positive mood by reducing stress and restoring attention [20]. Studies have found that natural environments are more beneficial for physiology than urban ones for reducing stress and improving emotional valence and cognitive abilities [21]. Forest visits have health advantages that lower the risk of stress-related illnesses and diseases linked to a sedentary lifestyle and are linked to better mental and physical health [22]. Physical activity in such green environments is better for physical and mental health than under other conditions [23,24]. Such studies are not limited to "in-person experience". Some studies showed how natural environments are beneficial, even indirectly through observation from windows or rooftops [25,26]. Furthermore, virtual nature contact benefits have been observed. Virtual research has evolved from still photographs to video and, finally, virtual reality (VR). Several studies have compared VR with live nature exposure and found that both had a restorative effect. However, only the outdoor environment measurably increased pleasant emotions [27].
Evaluating the potential of various environmental factors to bring restoration is essential for evidence-based health design [28]. "Natural versus urban" may gloss over meaningful within-category variability regarding the restorative potential of different physical environments [29]. Some scholars have conducted studies on the restorative nature of indoor and outdoor environments [30]. Additionally, the greenspace's type, quality, and context should be considered in assessing its relationship with wellbeing [31]. Studies have investigated the relationship between greenspace characteristics and restorative effects. These characteristic elements include the type of greenspace [32,33], general quality [34,35], biodiversity [36,37], landscape appearance [38,39], and composition of the internal environment [40,41]. Some researchers believe that the greatest influence on psychological recovery is the greenspace's vegetation and biodiversity. Its facilities and topography also have an effect [42]. Additional studies have demonstrated that park type and size, impermeable ground areas, and water bodies have varying degrees of association with favorable visitor effects [43]. In addition to this, the differential effects of forest ages and types on recovery have been demonstrated in other experiments [44]. Overall, several studies examined large-scale correlations between green environmental exposures and population health and their action pathways. Other studies used relatively small-scale physiological-psychological experiments in different natural settings. Most of these studies used psychological measures (e.g., self-report scales) in the recovery of different natural features, which may involve quite a lot of confounding factors. In contrast, it is easier to control variables in virtual indoor experiments. However, natural restoration is a dynamic integrated process with visual and auditory [46,47], tactile [48], olfactory [49], and other multifaceted perceptual pathways. The population directly perceives the restoration benefits of greenspaces through various senses [50]. Types of perception other than audio-visual are difficult to mimic and reproduce in virtual experiments. For example, a correlation exists between psychological recovery and bird diversity and insects (butterflies and bees) [51,52]. Measuring these types of perceptions requires live experiences in nature rather than virtual environments. Research has examined the effects of different landscape types on population health. However, few studies discuss the differing restorative effects of various landscape components, especially in non-virtual settings. We therefore need to design a controlled experiment that can accurately measure visitors' physiological and psychological indicators and that meets the need for visitors to personally interact with nature in terms of perception and experience, while controlling variables as much as possible in a non-laboratory setting.
This study aims to investigate the restorative benefits of non-virtual environments in field experiments as well as the differences in physiological and psychological effects between different types of restorative sites for stressed young adults. We hypothesize that natural spaces made up of various components have a restorative effect on participants and that the benefits of different scenes differ significantly. Furthermore, we presume that the Forest has the best restorative properties in comparison, and that the psychological scale data correlate strongly with each physiological index. This study adopts a controlled experiment design to obtain more accurate physiological-psychological data in real scenarios. It combines the PANAS scale and physiological indicators (EEG and HRV), obtained by wearing wireless devices, to experiment in four similarly sized, different landscape spaces through the pre- and post-test mode of "stress imposition-greenspace recovery". This experimental model, which records the initial state, allows for clarification of the differences in restorative benefits between scenarios. In addition, the physiological equipment uses wireless sensors that guarantee the real-time transmission of outdoor data. This capability collects participants' physiological indicators in a natural setting more accurately.
Participants
According to recent studies, intense social competition and involution have resulted in an increase in stress among today's youth, particularly among new young workers and soon-to-be-employed college students. As a result, there is a critical need for research on this young adult population [53,54]. We pre-computed the required sample size using G*Power software (version 3.1.9.7) and analyzed the effect sizes of existing restorative environmental studies. The test families "ANOVA: repeated measures (within-between interaction)" and "Means: difference between two dependent means (matched pairs)" were selected based on the analytical approach of the experiment. Based on the results of the two tests described above, the total sample size required for one site was 32 individuals. We recruited volunteers for the experiment by means of posters and social media. Participants were required to meet specific conditions, including having no psychiatric disorders (such as depression, schizophrenia, or mood disorders) and no visual, hearing, or cognitive impairments. Furthermore, the age range was restricted to young adults over the age of 18, with no more than two years of work experience if they had any. Finally, 39 young adults (19 males and 20 females) aged 18 to 28 (mean 24.5) were chosen to participate in this study. These included college students, young company workers, and civil servants. All participants underwent a brief training session and were asked to avoid alcohol and psychotropic drug use during the experiment. Before giving written consent, they were fully informed about the experiment.
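For reference, the matched-pairs part of such a power analysis can be reproduced outside G*Power; the effect size, alpha, and power below are conventional placeholder values, not necessarily the settings the authors entered.

```python
import math
from statsmodels.stats.power import TTestPower

# Paired t-test: medium effect size d = 0.5, two-sided alpha = .05, power = .80
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(math.ceil(n))   # 34 with these placeholder inputs; the study reports 32
```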
We used a repeated measures approach. Each participant (numbered 1-39) was involved in four different experimental settings. This method yields a larger sample size for the experiment with fewer participants. To control order effects, the participants were divided into four groups (10, 10, 10, and 9). Each group participated in these four restorative experiments in a different order during the experimental days. The specific experimental arrangement and the order of participation are shown in the Supplementary Materials (Table S1). This study was approved by the Ethics Committee of Nanjing Forestry University.
Experimental Sites
Xuanwu Park in Nanjing was selected as the experimental site. It is the largest comprehensive park in central Nanjing, covering a total area of 5.13 km². It is a well-equipped scenic spot with diverse landscapes and rich biodiversity. Setting the research in large city parks is beneficial. Urban parks, as inclusive, open spaces, can include a variety of greenspace types with varying characteristics while reducing experimental error due to site location, climate, elevation, and inconvenient transportation of equipment and people. The authors selected four natural environments of similar size within the park (see Figure 1 and Table 1): Grassplot (site A), Square (site B), Forest (site C), and Lakeside (site D). Site A is a large lawn with an open, gentle topography where visitors can relax. Site B is a sculpture square with ample seating and prominent markers. Site C is a dense forest with more shadiness and a complex plant hierarchy. Site D is a ribbon corridor consisting of a chain of wooden boardwalks and a few lakefront platforms with wide views of the distant skyline. The experiment was conducted from 11 October to 5 November 2021. With bad weather (heavy rain and wind) and non-working days excluded, there were a total of 16 measured days. The Supplementary Materials (Table S1) provide specific time and location information for the measurements.
Stress Induction
We elicited a suboptimal state before the restorative experiment to ensure that an initially pleasant one did not obscure the recovery difference. This phase was conducted using two simultaneous approaches. Firstly, we used stimulating audio to induce unpleasant emotions and mental fatigue. The stimulating audio, created using Adobe Audition (CS6, Adobe Systems Incorporated, Mountain View, CA, USA), can be found in the Supplementary Documents Audio S1. It consisted of gradually accelerating heartbeats, street traffic noise, and sharp, harsh sounds. Secondly, as the participants were students, we induced mental stress with a timed (3 min) math test. Previous studies have used exams and calculations to induce stress [55,56]. We printed four math test question sets with comparable difficulty coefficients to avoid the stressful effects of repeating questions in other scenarios. Participants were administered a different five-question test in each experimental setting.
Measurements
The assessment used subjective mood scales and objective device measures. The benefits of assessing resilience using self-report combined with objective measures have been previously supported by research [45].
Psychological Measurements
The PANAS Scale was developed by Watson, Clark, and Tellegen (1988). The scale is designed to measure positive and negative feelings in a person's current state. The final score is derived from the sum of ten items for each of the positive and negative dimensions. These twenty items describe emotions, stress, and mental states. They can reflect changes in the subject's affect to some extent [57]. Using the PANAS scale to describe the restorative quality of natural environments has shown high consistency (89%) with physiological indicators in previous analyses [45]. The PANAS has been translated into Chinese so that native Chinese speakers can have a more accurate measure of their psychological state. The content of this version of the PANAS is shown in the Supplementary Materials (Table S2). The PANAS uses a 5-point Likert scale, with scores from 1-5 representing very slightly/not at all, a little, moderately, quite a bit, and extremely. We used a pre- and post-test model to study changes in participants' psychological states. The pre-test asked participants to complete the PANAS based on their current psychological state immediately after stress administration. The post-test asked participants to complete it based on their condition after spending time in the experimental spaces. As Watson stated, "When used with short-term time frame instructions (i.e., moment or today), the PANAS scales are sensitive to changing internal or external circumstances [57]". This sensitivity is well-suited to describing changes in participants' measured mood before and after the restorative experiment. The Positive Mood Scale total ranges from 10 to 50. It is the sum of items 1, 3, 5, 9, 10, 12, 14, 16, 17, and 19, with higher total scores representing higher levels of positive mood. Similarly, the total score of the Negative Emotion Scale ranges from 10 to 50 and is the sum of items 2, 4, 6, 7, 8, 11, 13, 15, 18, and 20, with higher scores representing greater emotional distress. The difference between the post-test and pre-test scores, ∆D, is used to indicate recovery. ∆DP = total post-test positive emotion score − total pre-test positive emotion score. ∆DP > 0 means that the post-test positive emotional level is higher than the pre-test, while ∆DP < 0 signifies that the post-test positive emotion is lower than the pre-test level.
∆DN = total score of post-test negative emotion − total score of pre-test negative emotion. ∆DN > 0 means the post-test negative emotion level is higher than the pre-test level, while ∆DN < 0 means the post-test negative affect score is lower than the pre-test negative affect score; thus, unpleasant emotion was somewhat reduced.
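The scoring rule can be expressed compactly as follows; the item numbers follow the description above, while the example ratings are hypothetical.

```python
POS_ITEMS = [1, 3, 5, 9, 10, 12, 14, 16, 17, 19]   # positive-affect items
NEG_ITEMS = [2, 4, 6, 7, 8, 11, 13, 15, 18, 20]    # negative-affect items

def panas_totals(ratings):
    """ratings: dict mapping item number (1-20) to a 1-5 Likert response.
    Returns (positive total, negative total), each ranging from 10 to 50."""
    return (sum(ratings[i] for i in POS_ITEMS),
            sum(ratings[i] for i in NEG_ITEMS))

# Hypothetical pre- and post-restoration responses for one participant
pre = {i: 3 for i in range(1, 21)}
post = {**{i: 4 for i in POS_ITEMS}, **{i: 2 for i in NEG_ITEMS}}
dDP = panas_totals(post)[0] - panas_totals(pre)[0]   # > 0: positive affect rose
dDN = panas_totals(post)[1] - panas_totals(pre)[1]   # < 0: negative affect fell
print(dDP, dDN)   # 10 -10
```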
Physiological Measurements
Electroencephalogram (EEG) is a powerful tool for assessing the emotional impact of various landscapes and architectural environments [58]. The EEG power spectrum components consist of five common frequency bands (waves): δ (1-4 Hz), θ (4-8 Hz), α (8-13 Hz), β (13-30 Hz), and γ (>30 Hz) [59]. Alpha waves (α) indicate that the person is relaxed and less susceptible to external interference [60]. Earlier studies confirmed that α power is elevated under more restorative and natural environmental conditions [17]. Lower frequency EEG bands might characterize comfortable or restorative environments and associated pleasant emotions [61]. Beta waves (β) are associated with higher arousal and alertness levels. Previous studies have concluded that natural window-scapes produced more beta waves than urban ones [25]. In addition, β waves were associated with greater mental tension and in-depth thinking [62]. Significantly higher β waves are produced in stressful than in calm conditions.
The EEG data were continuously recorded using a semi-dry wireless EEG instrument (Kingfar Technology Co., Ltd., Beijing, China). The instrument records raw EEG signals from parietal (P3, P4), frontal (F3, F4), occipital (O1, O2), and frontal midline (FpZ, Fz) electrodes and transmits them to a wirelessly connected laptop. All electrodes were referenced to the linked earlobe (A1, A2) and grounded at the midpoint between Fpz and Fz [63,64]. Semi-dry electrodes and fewer EEG channels save time and reduce the adverse impact on participants. Pre-processing can reduce EEG channel numbers without reducing accuracy [65]. The experimental sampling rate was 256 Hz, and the bandpass filter was set between 0.5 Hz and 100 Hz. A 50 Hz band-stop filter was used to filter out power-line interference.
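A generic version of this pre-processing chain, together with a relative alpha-power estimate, can be sketched with SciPy; this is not the ErgoLAB software's internal routine, and the random array merely stands in for a real EEG channel.

```python
import numpy as np
from scipy import signal

fs = 256                                  # sampling rate (Hz), as in the study
eeg = np.random.randn(fs * 60)            # placeholder for one 60 s channel

# 0.5-100 Hz band-pass followed by a 50 Hz notch (power-line interference)
b_bp, a_bp = signal.butter(4, [0.5, 100.0], btype="bandpass", fs=fs)
b_nt, a_nt = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
clean = signal.filtfilt(b_nt, a_nt, signal.filtfilt(b_bp, a_bp, eeg))

# Relative alpha power (8-13 Hz) from a Welch power spectral density
f, psd = signal.welch(clean, fs=fs, nperseg=2 * fs)
alpha_pct = 100 * psd[(f >= 8) & (f < 13)].sum() / psd[(f >= 1) & (f < 30)].sum()
print(f"EEG-alpha %: {alpha_pct:.1f}")
```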
HRV has been widely used as a reliable indicator with high temporal resolution in previous recovery experiments [56,66]. The low/high frequency (LF/HF) ratio is an HRV frequency domain parameter indicating sympathetic and parasympathetic activity [67]. It is suitable for objective measures of stress, with higher LF/HF values indicating an overactive sympathetic nervous system associated with increased stress and anxiety [68]. The HRV data were recorded with a wearable Human Factors Logger (Kingfar Technology Co., Ltd., Beijing, China). The data were pre-processed using the HRV module of ErgoLAB V3.0.
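Likewise, the LF/HF ratio can be estimated from a series of RR intervals; the resampling rate and band limits below follow common HRV conventions and are not taken from the ErgoLAB HRV module.

```python
import numpy as np
from scipy import interpolate, signal

def lf_hf_ratio(rr_ms):
    """LF/HF from RR intervals (ms): resample the tachogram to 4 Hz, estimate
    the PSD with Welch, and compare LF (0.04-0.15 Hz) with HF (0.15-0.40 Hz)."""
    beat_times = np.cumsum(rr_ms) / 1000.0                   # beat times (s)
    grid = np.arange(beat_times[0], beat_times[-1], 0.25)    # 4 Hz grid
    tachogram = interpolate.interp1d(beat_times, rr_ms)(grid)
    f, psd = signal.welch(tachogram - tachogram.mean(), fs=4.0, nperseg=256)
    lf = psd[(f >= 0.04) & (f < 0.15)].sum()
    hf = psd[(f >= 0.15) & (f < 0.40)].sum()
    return lf / hf

rr = 800 + 50 * np.random.randn(600)   # hypothetical RR series (~8 min of beats)
print(lf_hf_ratio(rr))                 # higher values: more sympathetic activity
```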
Experimental Procedure
Before the experiment, all participants completed a personal basic information questionnaire, including age and gender, via WeChat and email. Each participant was informed that they could withdraw if they experienced any discomfort during the experiment. Upon arrival at the site, participants were guided through the day's experimental site by our staff and informed of the site boundaries. Figure 2 shows the five experimental steps: preparation (T0), baseline (T1), pressure application (T2), restoration (T3), and post-test interview (T4), over approximately 44 min. Physiological measurements were taken during baseline (T1), pressure application (T2), and restoration (T3). Psychological measurements were taken before and after the restoration stage.
Preparation: The staff reviewed the main procedures and precautions with the participants. Next, they assisted participants with wearing the semi-dry wireless EEG instrument and Human Factors Logger. Ten minutes, on average, were needed to achieve good skin-electrode contact quality.
Baseline: Participants relaxed with their eyes closed while wearing wireless noise-canceling headphones (1MORE ComfoBuds Z); their physiological indices were measured for three minutes without visual and auditory interference [56].
Pressure application: Stimulating audio was played during a three-minute cognitive (math) test, inducing unpleasant emotions and mental stress through auditory stimuli and deep thinking. The math test questions were not scored but induced mental fatigue. At the end of the pressure induction stage, participants completed a modified Positive/Negative Affect Scale (PANAS) as a pre-test.
Restoration: In previous studies, this stage usually lasted from 3-15 min [69,70]. We controlled for the effects of physical activity, social cohesion, and other pathways as much as possible through prior training of participants [14]. Participants were asked to walk slowly or sit within a pre-defined field area for twelve minutes and observe the park scenery casually. They were instructed not to eat, drink, or talk to others. Physiological data were continuously measured and recorded in real-time. Throughout the restoration period, an experimental assistant with a laptop computer monitored the participant closely but non-intrusively to ensure their safety and comfort, while ensuring that the wireless data signal from the physiological sensors was transmitted intact. After the restoration stage, the Positive/Negative Affect Scale (PANAS) was administered again.
Post-test interview: Some participants agreed to a semi-structured in-depth interview to better explain and understand the experiment. They were asked if contingencies such as weather and other visitors impacted their affective changes. All field interviews were audio-recorded and later transcribed for statistical analysis.
Statistical Analysis
The paired-samples t-test analyzed whether each landscape space had a restorative psychological effect. The mood scores before and after each site's recovery stage were paired to verify whether the difference between positive/negative mood recovery before and after was significant. The independent variable type was a within-subjects variable. Each participant underwent the restorative trials of the four sites in their entirety to investigate further whether the difference in the restorative effect of each site was significant. Therefore, the difference before and after the positive/negative mood restoration (∆D) was used as the independent variable, and a repeated-measures ANOVA tested whether the four scenes' restorative effects differed. Then, multiple comparisons compared the difference between each site's ∆D. Normality tests were conducted on the before-and-after ∆DP and ∆DN. The Shapiro-Wilk test showed that the differences between the before-and-after positive and negative sentiment scores for each site followed a normal distribution, allowing for the repeated-measures ANOVA. First, we performed Mauchly's test of sphericity. If the sphericity hypothesis was satisfied, the dependent variable data for repeated measures exhibited equal variance-covariance matrices, and the Greenhouse-Geisser analysis results for within-subject effects could be used. Otherwise, the results of Roy's Largest Root in the multivariate tests were used.
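A minimal sketch of this analysis pipeline in Python is given below, using scipy for the paired t-tests and the pingouin package for the normality, sphericity, and repeated-measures ANOVA steps. The file name and column names (subject, site, pre_P, post_P) are hypothetical placeholders, not the study's actual data layout.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

# Hypothetical long-format table: one row per participant x site.
df = pd.read_csv("panas_scores.csv")     # assumed columns: subject, site, pre_P, post_P
df["dDP"] = df["post_P"] - df["pre_P"]   # change in positive affect (∆DP)

# Paired t-test per site: did the restoration stage change positive affect?
for site, g in df.groupby("site"):
    t, p = stats.ttest_rel(g["post_P"], g["pre_P"])
    print(f"{site}: t = {t:.2f}, p = {p:.4f}")

# Shapiro-Wilk normality of the change scores, then Mauchly's sphericity,
# then the one-way repeated-measures ANOVA on ∆DP across the four sites.
print(pg.normality(df, dv="dDP", group="site"))
print(pg.sphericity(df, dv="dDP", within="site", subject="subject"))
aov = pg.rm_anova(df, dv="dDP", within="site", subject="subject",
                  correction="auto")     # applies a correction if sphericity fails
print(aov)

# Post hoc multiple comparisons of ∆DP between sites
# (pairwise_ttests in older pingouin versions)
print(pg.pairwise_tests(df, dv="dDP", within="site", subject="subject"))
```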
The total percentages of alpha and beta wave bands (EEG-α, EEG-β) and the low-/high-frequency ratio (LF/HF) were used as physiological indicators of participants' stress and mental status. Because the change in EEG is a process, we chose the proportions of α and β and the LF/HF ratio throughout each stage rather than at a single point or segment. Two-way repeated-measures ANOVAs verified each landscape space's physiological restoration effects and whether they significantly differed. Additional algorithms used to analyze the interaction effects of site*time can be found in the Supplementary Materials (Equation S1). Pearson correlation was used to analyze the associations between various parameters.
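As an illustration of how such band proportions can be computed, the sketch below estimates the power spectrum of a single EEG channel with Welch's method and integrates the alpha (8-13 Hz) and beta (13-30 Hz) bands as fractions of total 1-30 Hz power. The sampling rate, band edges, and normalization are conventional assumptions, not necessarily those of ErgoLAB.

```python
import numpy as np
from scipy.signal import welch

def band_proportions(eeg, fs=250.0):
    """Alpha and beta shares of total 1-30 Hz power for one channel (sketch)."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))

    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])

    total = band_power(1, 30)
    return band_power(8, 13) / total, band_power(13, 30) / total

# Example on synthetic data: white noise plus a 10 Hz (alpha) oscillation
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / 250.0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
alpha_prop, beta_prop = band_proportions(signal)
print(f"EEG-alpha = {alpha_prop:.2f}, EEG-beta = {beta_prop:.2f}")
```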
Descriptive Statistics
Table 2 shows the mean total scores and standard deviations of our calculated psychological and physiological measures. The Cronbach's alpha for the Chinese version of the full PANAS was 0.863. The positive and negative PANAS scales were 0.855 and 0.881, respectively, indicating good reliability. The Shapiro-Wilk normality test showed each landscape space's positive and negative psychological scores were normally distributed.
Table 3 indicates that the p-values for the PANAS-P and PANAS-N paired-sample t-tests were below 0.001 (***), the post- to pre-test positive affect difference was ∆DP > 0, and the negative difference was ∆DN < 0. This indicates that the PANAS-P score was significantly elevated, while the PANAS-N score was reduced for all four scenarios after the restoration stage. Figure 3 displays trends in psychological indicator changes.
Physiological Restorative Effects
Two-way repeated-measures ANOVAs were performed for each of EEG-α, EEG-β, and LF/HF at the baseline (T1), stress (T2), and restoration (T3) stages for the four sites. Table S3 in the Supplementary Materials presents the results. EEG-α (F = 340.27, p < 0.001 ***, Partial η² = 0.98), EEG-β (F = 53.15, p < 0.001 ***, Partial η² = 0.76), and LF/HF (F = 140.54, p < 0.001 ***, Partial η² = 0.88) were highly significant for the time effect. However, neither EEG-α (F = 2.12, p = 0.13, Partial η² = 0.52), EEG-β (F = 1.62, p = 0.175, Partial η² = 0.09), nor LF/HF (F = 0.21, p = 0.943, Partial η² = 0.01) was significant in terms of interaction effects, so further pairwise comparisons were needed. Further pairwise comparisons were made between the baseline (T1), stress (T2), and restoration (T3) stages. The results showed extremely significant differences between the baseline stage (T1) and the pressure application (T2) for all data, indicating that applying pressure had a marked intervention effect. Comparing the pressure application (T2) to the restoration stage (T3) showed significant differences in EEG for all sites except B: Restoration-α was distinctly greater than Stress-α, and Restoration-β was distinctly less than Stress-β for sites A, C, and D, as shown in Table 4. Figure 4 shows the EEG trends from the pressure application to restoration. Only the LF/HF of Site A changed significantly during the pressure application (T2) compared to restoration (T3), while no significant differences were seen for the remaining sites.
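The two-way repeated-measures ANOVA with a site × time interaction can be sketched with pingouin as below; the file and column names are hypothetical, and the same call would be repeated for EEG-β and LF/HF.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long table: one row per subject x site x stage (T1/T2/T3).
phys = pd.read_csv("physio_long.csv")   # assumed columns: subject, site, stage, eeg_alpha

aov = pg.rm_anova(phys, dv="eeg_alpha", within=["site", "stage"],
                  subject="subject")
print(aov)   # main effects of site and stage, plus the site*stage interaction
```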
Differences in Psychological Recovery Benefits
We analyzed the differences in the change of PANAS scores (∆D) for each site with a repeated-measures ANOVA. Table 5 shows the results of the 2 × 2 comparison.
Variability of Physiological Indicators during the Restoration
The multivariate simple effects analysis results for the sites showed that the EEG-α and EEG-β of the groups differed significantly during the restoration stage: EEG-α (F = 4.82, p = 0.015 < 0.05, Partial η² = 0.49); EEG-β (F = 4.80, p = 0.015 < 0.05, Partial η² = 0.49). Otherwise, no significant differences in EEG-α and EEG-β were detected between the groups during the baseline stage and pressure application. This suggests that the groups' initial states were uniform and the restorative experiments' outcomes were comparable. For LF/HF, the results of the multivariate tests showed no significant differences among scenarios during the restoration stage (F = 0.19, p = 0.903 > 0.05, Partial η² = 0.03) (Table S4 in Supplementary Materials).
The changes of EEG in the restoration stage were further explored through pairwise comparisons for each site. Square's EEG-α was significantly lower than Forest's EEG-α (t = −3.56, p = 0.015 < 0.05). Square's EEG-β was significantly higher than Lakeside's EEG-β (t = 3.79, p = 0.009 < 0.01). No significant differences were detected between the other sites; Table S5 in the Supplementary Materials shows these results. Figure 6 illustrates the differences in EEG indicators between sites in the restoration stage.
Index Correlation
Pearson correlation was used to analyze the associations between self-reported and objective measures of recovery effects. The results showed a significant (p < 0.05) inverse relationship between EEG-β and the amount of positive mood change during the restoration stage, with a strength of r = −0.223. This is consistent with our hypothesis. No significant correlations were detected between the PANAS indicators and the other physiological indicators. In addition, we found a significant inverse correlation (p < 0.01) between EEG-α and the heart rate variability index LF/HF in the restoration stage, with a correlation strength of r = −0.342. As in previous studies, a highly significant negative correlation was found between EEG-α and EEG-β. However, we found a non-significant association between the amounts of positive and negative affect change. Figure 7 shows the correlation coefficients among the restorative indicators.
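A simple way to reproduce such a correlation matrix is sketched below; the table of per-participant restoration-stage indicators and its column names are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-participant summary of restoration-stage indicators.
ind = pd.read_csv("indicators.csv")   # assumed columns: dDP, dDN, eeg_alpha, eeg_beta, lf_hf
cols = ["dDP", "dDN", "eeg_alpha", "eeg_beta", "lf_hf"]
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        r, p = pearsonr(ind[a], ind[b])
        print(f"{a} vs {b}: r = {r:+.3f}, p = {p:.3f}")
```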
Discussion

Psychologically, the pre- and post-test data of the restoration stage in all types of selected environments showed significant differences, with a considerable increase in positive mood scores and a substantial reduction in negative mood scores. All four sites were restorative for participants' emotions, consistent with the results of previous relevant studies [71,72]. Some researchers found that changes in positive emotions before and after the restoration stage were not significant [73]; their outcome may have been due to the participants' initial emotional state. In contrast, the stress administration before the restoration stage of the present experiment was crucial.

Regarding physiological indicators, EEG measurements showed generally high beta in the stress stage. These measurements were associated with brain activity induced by the cognitive test and harsh noises. The baseline alpha was significantly higher at all four sites than in the stress or restoration stages, probably because the participants were asked to relax with their eyes closed. Alpha waves were more pronounced when the participants were in this state. Barry, Clarke, Johnstone, Magee, and Rushby noted that the eyes-closed and eyes-open conditions produce differing EEG measurements [74]. Therefore, the baseline stage only served to calm and standardize the participants' emotions and was not used as baseline data in the restorative trials. The interaction effect analysis results showed that the Square's alpha and beta did not change significantly between the restoration and pressure application stages, whereas all other sites' restoration stages differed significantly from their stress stages. This finding suggests that the Grassplot, Forest, and Lakeside spaces also promoted physiological recovery, with participants in the restoration stage being significantly more relaxed and showing significantly decreased tension and stress. The Square's recovery effect was not significant; this may be because the square has relatively few natural factors and more artificial factors [17].
Compared to the EEG measurement results, only the LF/HF data from Grassplot showed significant differences compared to the pre-restoration phase. HRV data are more sensitive to environmental changes at the physical level than EEG metrics, and it has been suggested that greater fluctuations in LF/HF may occur during non-stationary conditions [75]. During the experiment, the baseline and pressure application phases were stationary conditions, while participants had more freedom of movement during the restoration phase. We specified that participants could only walk slowly and sit still during recovery; however, errors in ear-tip pulse measurements may have occurred due to excessive or rapid head movements. Participants spent more time sitting quietly in the Grassplot than in other sites, which may explain the significant reduction in LF/HF for only Grassplot. Some studies suggested that a trend toward higher LF/HF than before exercise was found only after moderate- and high-intensity exercise [76]. In conclusion, for outdoor experiments, EEG may be more reliable than HRV.
Variability in Restorative Benefits
Sections 3.2 and 3.3 showed no significant differences in LF/HF across sites. Therefore, this section discusses the trend characteristics of the PANAS and EEG indicators. A repeated-measures ANOVA was conducted on the change in mood to explore whether the difference between the four sites was statistically significant. The results showed that the change in positive emotions was significantly greater in Sites 3 and 4 than in Site 2, indicating that Square was significantly less effective than Forest and Lakeside in enhancing them. Researchers have shown that environments with natural components have a better restorative effect on PANAS-POS than concrete ones [33]. Additionally, the difference between the pre- and post-measures of negative emotion in Site 4 differed significantly from Sites 1 and 2, indicating that Lakeside, an interwoven green and blue space, has a stronger ability to moderate unpleasant emotions than the Grassplot and Square sites. This finding is consistent with that of White et al. [77], who state that "adding some water to a green scene leads to significantly higher resilience" while arguing that the optimal restorative environment may be the interface between land and large bodies of water.
For indicators with a large degree of subjectivity and significant differences, the difference in change is a more accurate choice [78,79]. Since Square's EEG metrics did not change substantially between the restoration and stress stages, the difference was not significant when comparing them. The EEG index analysis of the different scenes across the three phases by two-way repeated-measures ANOVA showed that the interaction effect was non-significant; however, that analysis included the baseline and pressure application phases.
The results of the multivariate simple effects analysis of the baseline and pressure application phases showed that all p-values for these two phases were greater than 0.05. This means that the four sites had the same initial state before the restoration phase, confirming the comparability of the data. The restoration stage results showed significant differences in EEG data between the four groups, demonstrating differences in the physiological effects of the various restorative environments in the park. The results showed that Site 2's α was significantly lower than Site 3's, indicating that stressed young adults were much more relaxed in the forest than in the paved square. This is consistent with the results of some previous studies related to forest therapy [80,81]. The study also confirmed that Square's ability to reduce mental fatigue was significantly less than Lakeside's. The Square condition showed weaker recovery on the EEG compared to the other sites, perhaps because it has a more mixed environment and less greenness. Brain waves can be relatively "calmed" whenever the human body is stimulated by green visual scenes [82]. In contrast, Square, a landscape space with more artificial elements and less green environment, showed weaker restorative performance. No other significant differences in physiological restorative properties were found between the experimental sites.
In general, the natural park habitats of Forest, Grassplot, and Lakeside have strong psychological and physiological restorative effects, while Square has significant psychological but only minor physiological restorative effects. When comparing the restorative effects of the various scenes, no statistically significant differences were found between Forest and Lakeside, and both induced better recovery, which aligns with previous studies [16]. The higher resilience induced by the waterfront space may relate to its higher biodiversity. Many participants reported more birds and insects in the waterfront area due to the Lakeside's more pronounced edge effect. The edge effect is the increased population density and diversity of species where two adjacent ecosystems overlap [83]. The forest's restorative nature may be due to its greater plant variety and density [84]. More surrounding vegetation can create a visually stimulating environment, enhancing recovery [14]. This suggests that not only natural forests in the wild but also forest scenes in urban parks can provide restorative benefits for stressed young people, bringing the convenience of forest therapy to city dwellers. Grassplot's resilience was intermediate, while Square's was relatively poor.
Furthermore, in exploring the degree of consistency between psychological and physiological indicators, we found a significant inverse correlation between EEG-β and the enhancement of positive affect. Other psycho-physiological indicators were somewhat correlated with each other, but no significant correlation was found. This may be explained by the fact that PANAS, EEG, and HRV data are all composite indicators that may be correlated at some sub-levels, while the correlation is not significant in a combined statistical analysis; further investigation is required. We also found a correlation between the two physiological indicators, with EEG-α and LF/HF being significantly negatively correlated.
Limitations and Future Research
In outdoor field experiments, seasonal changes at the same site at different times of the year can affect mood. For example, changes in plant leaf color may increase feelings of wellbeing, calmness, and recovery potential [53]. Some researchers found that a lake area did not have a significant restorative effect [38]; they hypothesized that this was partly due to a lack of shady places for rest. By contrast, the present experiment was conducted in autumn on a day with a suitable temperature, and the Lakeside was highly restorative. In addition to seasonal and temperature limitations, some limitations exist concerning the study region: we only explored larger urban parks, and smaller sites such as community and street gardens were not included. In this study, to control for variables, participants were prohibited from engaging in free-flowing physiological activities such as eating and talking, and they were also asked to walk slowly and sit quietly to control for physiological effects due to exercise pathways. However, improvement of health problems in high-stress populations requires not only restorative mechanisms but also other active pathways, which calls for a larger number of subjects and more complex and comprehensive studies. Furthermore, this study only conducted a restorative experiment on young adults and did not make a consistent comparison with other groups such as children and older adults. More groups will be included in future studies, which will help to investigate differences in the restorative effects across different types of groups. Individual differences may also bias the results; for example, differences in an individual's ability and sensitivity to resist stress can make it difficult to achieve consistent standards during the pressure application phase. Similarly, differences in measurement time can affect the results. The total experimental time could be shortened by simultaneous measurements at multiple sites, but this would increase the need for laboratory staff and equipment, which we will address in future research.
Through semi-structured, in-depth interviews, we found that unexpected events affected participants' emotions, for example, flying birds, streaming clouds, and sudden music. These elements were not included in the planned scenario; however, they affected the participants' emotions through attention shifting. In future work, we could explore how "attractors" affect recovery and how they work. We will also consider the strength of attractors and construct a structural equation model of restorative influences.
Conclusions
This study analyzed the relationship between different environments and their restorative effects using a field experiment. We used pre- and post-measured PANAS and real-time transmitted EEG data as psychological and physiological indicators to confirm the restorative effect of different types of natural immersion for stressed young adults. The experimental results showed that exposure to various types of restorative environments in the park can significantly reduce negative emotions and increase positive emotions in stressed young adults. Physiologically, we found that all sites except the Square had a significant positive effect on relaxation. When comparing different types of spaces, the Forest and Lakeside sites showed the most robust restorative properties: Forest can make stressed people feel more relaxed and less susceptible to external disturbances, while Lakeside tends to reduce brain fatigue and deep thinking. Grassplot shows a lower level of restorative capacity than the Forest and Lakeside but a significantly higher level than the Square. Furthermore, this study demonstrated correlations between various physiological and psychological indicators: EEG-β in the restoration stage was significantly and inversely correlated with positive affect enhancement.
The study meets the practical need for "hands-on" outdoor experience while reducing the uncertainty of field experiments through a rigorous experimental design and scientific statistical approach. These measures included (1) rigorous training of participants; (2) rationalization of the experimental procedure for participants; (3) use of appropriate measurement equipment; (4) joint reasoning from physiological and psychological data; (5) establishment of a baseline measurement phase to unify the initial data; (6) imposing stress and mental strain in two ways; (7) discussing the correlation between psychological and physiological indicators; and (8) using repeated-measures analysis of variance. The research methodology adopted in this study can provide a reference for similar field experiments. For designers and decision-makers, the composition of each park landscape space should be reasonably balanced. The artificial square space could be appropriately reduced, provided that enough activity space remains available. Creating more natural greenspaces, especially forests, can help enhance a park's restorative nature. Creating appropriate landscape space along the water's edge can also help enhance restorative quality, especially in areas where blue and green spaces are interwoven and biodiversity is high. Moreover, short-term restorative effects may be influenced by attractors; therefore, this element should be considered in future recovery mechanism studies.
Figure 2. Steps of the restorative experiment.

Figure 3. Trends in psychological indicators changes. (a) Pre- and post-affect score changes in positive scale. (b) Pre- and post-affect score changes in negative scale.

Figure 5. Differences in the amount of ∆D in the experimental sites. * p < 0.05, ** p < 0.01.

Table 1. Average temperature and humidity on experiment days.

Table 2. Descriptive statistics for relevant variables at the experimental sites.

Table 3. Pairwise comparisons of post- to pre-tests at the experimental sites.

Table 5. The 2 × 2 comparison of ∆D results for the experimental sites.
Is Diagnostic Arthroscopy at the Time of Medial Patellofemoral Ligament Reconstruction Necessary?
Background: Although medial patellofemoral ligament (MPFL) reconstruction is well described for patellar instability, the utility of arthroscopy at the time of stabilization has not been fully defined. Purpose: To determine whether diagnostic arthroscopy in conjunction with MPFL reconstruction is associated with improvement in functional outcome, pain, and stability or a decrease in perioperative complications. Study Design: Cohort study; Level of evidence, 3. Methods: Patients who underwent primary MPFL reconstruction without tibial tubercle osteotomy were reviewed (96 patients, 101 knees). Knees were divided into MPFL reconstruction without arthroscopy (n = 37), MPFL reconstruction with diagnostic arthroscopy (n = 41), and MPFL reconstruction with a targeted arthroscopic procedure (n = 23). Postoperative pain, motion, imaging, operative findings, perioperative complications, need for revision procedure, and postoperative Kujala scores were recorded. Results: Pain at 2 weeks and 3 months postoperatively was similar between groups. Significantly improved knee flexion at 2 weeks was seen after MPFL reconstruction without arthroscopy versus reconstruction with diagnostic and reconstruction with targeted arthroscopic procedures (58° vs 42° and 48°, respectively; P = .02). Significantly longer tourniquet times were seen for targeted arthroscopic procedures versus the diagnostic and no arthroscopic procedures (73 vs 57 and 58 min, respectively; P = .0002), and significantly higher Kujala scores at follow-up were recorded after MPFL reconstruction without arthroscopy versus reconstruction with diagnostic and targeted arthroscopic procedures (87.8 vs 80.2 and 70.1, respectively; P = .05; 42% response rate). There was no difference between groups in knee flexion, recurrent instability, or perioperative complications at 3 months. Diagnostic arthroscopy yielded findings not previously appreciated on magnetic resonance imaging (MRI) in 35% of patients, usually resulting in partial meniscectomy. Conclusion: Diagnostic arthroscopy with MPFL reconstruction may result in findings not previously appreciated on MRI. Postoperative pain, range of motion, and risk of complications were equal at 3 months postoperatively with or without arthroscopy. Despite higher Kujala scores in MPFL reconstruction without arthroscopy, the relationship between arthroscopy and patient-reported outcomes remains unclear. Surgeons can consider diagnostic arthroscopy but should be aware of no clear benefits in patient outcomes.
Patellar dislocations are estimated to account for 2% to 3% of all traumatic knee injuries. 1,11,16 Several risk factors for recurrent patellar dislocation have been identified. These risk factors include patella alta, trochlear dysplasia, increased tibial tubercle-trochlear groove (TT-TG) distance, lateral patellar tilt, patellar hypermobility, variations of medial patellofemoral ligament (MPFL) anatomy, hypoplasia of the vastus medialis, increased Q angle, increased femoral anteversion, valgus alignment, and generalized ligament laxity. 2,9,17 Historically, patellar dislocations were treated nonoperatively, with operative treatment reserved for unsuccessful nonoperative measures; however, nonoperative management may lead to redislocation rates as high as 44%. 7 Some authors 19 have advocated for more prompt surgical treatment, which may provide lower redislocation rates and better short- and medium-term clinical outcomes. Others prefer to defer surgical treatment until recurrent patellar instability occurs.
In either case, the aim of the operative treatment is to address anatomic pathological features contributing to recurrent instability. This may include medial soft tissue procedures, distal and/or medial tibial tubercle transfer, distal femur osteotomy for valgus malalignment, and in rare circumstances, trochleoplasty. Because the MPFL is the primary soft tissue restraint to lateral displacement, 13 it is always injured to some extent in recurrent instability and thus nearly always treated surgically with either repair or reconstruction. Several studies 1,3,4,6,10,18,21,26,28 have shown the superiority of MPFL reconstruction over repair; as such, MPFL reconstruction has become a mainstay for treatment of recurrent patellar instability, and various surgical techniques have been described.
There is a paucity of data regarding whether arthroscopy at the time of MPFL reconstruction provides any added diagnostic value or influences treatment outcomes. Although the risks and complications of arthroscopy have been well described, 25 these risks in association with MPFL reconstruction are poorly understood. The purpose of this study was to determine whether diagnostic arthroscopy during MPFL reconstruction provides any supplementary clinical information not previously appreciated on physical examination or imaging, improves outcomes, or increases the risk of complications.
We hypothesized that there would be no difference in postoperative pain, range of motion, recurrent instability, complications, or patient-reported outcomes when performing MPFL reconstruction with versus without arthroscopy.
METHODS
This retrospective cohort study was performed after obtaining approval from our ethics committee. Between 2012 and 2017, patients undergoing primary MPFL reconstruction at our institution were queried in our billing database by Current Procedural Terminology (CPT) codes 27420, 27422, 27425, 27427, and 27429 (n = 139). CPT codes 27420, 27422, 27425, and 27429 are nonprimary MPFL reconstruction procedures but were included in the query to account for any error in CPT coding. A total of 139 patient charts were reviewed; patients undergoing concomitant tibial tubercle osteotomy (TTO), associated multiple ligament reconstruction (ie, anterior cruciate, posterior cruciate, and medial collateral ligaments), revision MPFL reconstruction, or those without at least 3 months of clinical follow-up were excluded. In total, 96 patients (101 knees) met the inclusion criteria.
Medical records were reviewed for characteristic information, radiographic parameters including Caton-Deschamps (CD) ratio and TT-TG distance, preoperative imaging findings, intraoperative findings, postoperative pain and range of motion, perioperative complications, and recurrent instability at 3 months postoperatively. The CD ratios were retrospectively measured on weightbearing lateral knee radiographs taken with the knee flexed to approximately 30°. The TT-TG distances were measured retrospectively on magnetic resonance imaging (MRI) axial T2 sequences. Patients were contacted by telephone to complete a postoperative Kujala score assessment 15 (also known as Anterior Knee Pain Scale) during the fall of 2018. The average follow-up time from surgical procedure to telephone interview for Kujala score assessment was 40 months, and there was a 42% response rate. Chi-square analysis of telephone follow-up response rates demonstrated no statistically significant differences between study groups (P = .12). Reconstructions were performed in a standard fashion using hamstring tendon autograft with suture anchor fixation on the patella and interference screw fixation on the femur. A standardized MPFL reconstruction rehabilitation protocol was used for all patients.
Knees were divided into 3 groups based on the intervention performed: MPFL reconstruction without arthroscopy, MPFL reconstruction with diagnostic arthroscopy, or MPFL reconstruction with a preoperatively planned targeted arthroscopic procedure. Targeted procedures included partial meniscectomy, chondroplasty, loose body removal, microfracture, arthroscopic lateral retinacular release, and arthroscopic synovectomy. It is the practice of some of our surgeons (G.P.T. and D.L.R.) to choose to perform arthroscopy at the time of MPFL reconstruction only in those patients with identifiable pathology on MRI, while other surgeons routinely complete a diagnostic arthroscopy regardless of imaging findings. Patients received their particular intervention based on the standard treatment strategy of their treating physician.
"Diagnostic arthroscopy" was defined as arthroscopy performed without the intent of addressing a specific intraarticular pathological feature visualized on MRI. A "targeted arthroscopic procedure" was defined as a planned arthroscopic procedure to address chondral pathology, meniscal pathology, or loose bodies identified on preoperative MRI by the attending surgeon (G.P.T., D.L.R., or D.C.W.). Modified Outerbridge scores 23 on preoperative MRI were retrospectively scored by a senior surgeon (G.P.T.) blinded to patient identifiers. Complications were defined as wound dehiscence, wound infection defined by the Centers for Disease Control and Prevention guidelines, 12 persistent pain requiring revision procedure, deep-vein thrombosis, nerve palsy, and arthrofibrosis.
Homogeneity among the 3 intervention groups was assessed. Age, CD ratio, and TT-TG distance were analyzed using a 1-way analysis of variance (ANOVA). Sex and laterality were analyzed using the Fisher exact test. Patellar and trochlear modified Outerbridge scores were analyzed using the chi-square test. Continuous outcome variables, including tourniquet time, range of motion, and Kujala scores, were analyzed using 1-way ANOVA. Ordinal outcome data, including visual analog scale scores, occurrence of complications, return to operating room (OR), and recurrent instability, were statistically analyzed with the chi-square test. Statistical significance was set at α = .05.
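A sketch of these group comparisons in Python is shown below; the data file and column names are hypothetical stand-ins for the study database. Note that scipy's fisher_exact handles only 2 × 2 tables, so the chi-square contingency test is used here for the 3-group categorical comparisons.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-knee table; group is one of "none", "diagnostic", "targeted".
knees = pd.read_csv("mpfl_cohort.csv")
groups = [g for _, g in knees.groupby("group")]

# Continuous variables: 1-way ANOVA across the 3 intervention groups
for var in ["age", "cd_ratio", "tt_tg", "tourniquet_min", "kujala"]:
    f, p = stats.f_oneway(*[g[var].dropna() for g in groups])
    print(f"{var}: F = {f:.2f}, P = {p:.3f}")

# Categorical variables: chi-square test on the group x category table
for var in ["sex", "laterality", "complication"]:
    table = pd.crosstab(knees["group"], knees[var])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"{var}: chi2 = {chi2:.2f}, P = {p:.3f}")
```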
RESULTS
There were 37 knees in the group without arthroscopy, 41 in the group with diagnostic arthroscopy, and 23 in the group with targeted arthroscopic procedure. The average clinical follow-up time was 6.9 months. There was no statistical difference between groups with respect to sex, laterality, or CD ratio. There was a statistically significant difference in age and TT-TG distance among the 3 groups (Table 1). Table 2 shows treatment outcomes between the groups at 3-month follow-up. Among the 3 groups, there were no differences in postoperative pain, knee extension, or postoperative complications. Significantly longer tourniquet times were seen for MPFL reconstruction with targeted arthroscopic procedures versus diagnostic arthroscopy and no arthroscopy (73 vs 57 and 58 min, respectively; P = .0002); there was no difference in tourniquet times between the no arthroscopy and diagnostic arthroscopy groups. Significantly improved knee flexion at 2 weeks was seen after reconstruction without arthroscopy versus reconstruction with diagnostic and reconstruction with targeted arthroscopic procedures (58° vs 42° and 48°, respectively; P = .02). No difference in pain was observed at 12 weeks. At 40 ± 20 months postoperatively, significantly higher Kujala scores at follow-up were recorded for MPFL reconstruction without arthroscopy versus reconstruction with diagnostic and targeted arthroscopic procedures (87.8 vs 80.2 and 70.1, respectively; P = .05).
Arthroscopic Procedures Performed
A total of 13 knees (31.7%) in the diagnostic arthroscopy group underwent an additional arthroscopic procedure at the time of their MPFL reconstruction (Table 4). Meniscal injury was the most common pathological feature not previously identified on MRI. The most common targeted arthroscopic procedures performed were loose body removal and patellar chondroplasty. All meniscectomies were performed for small radial tears.
Complications
The overall complication rate in our cohort was 4.9%. Complication rates among each group are listed in Table 2. Two patients in the no arthroscopy group developed recurrent instability at 9 months postoperatively, 1 secondary to trauma and the other after returning to basketball. Both underwent revision MPFL reconstructions with uneventful postoperative courses. Another patient in the no arthroscopy group returned to the OR after a postoperative wound dehiscence at the 3-week mark.
In the diagnostic arthroscopy group, 1 patient sustained recurrent instability and returned to the OR for revision MPFL reconstruction with TTO at 3 years postoperatively. His CD ratio was 1.5 and TT-TG distance was 21 mm. One patient's postoperative course was complicated by wound dehiscence at the 4-week mark, which granulated and healed without need for surgical intervention. One patient in this group returned to the OR because of persistent pain and underwent lateral facet chondroplasty and lateral retinacular release. One patient developed arthrofibrosis and returned to full motion after arthroscopic lysis of adhesions. One patient sustained a common peroneal nerve palsy, which has gradually improved with observation.
In the targeted arthroscopic procedure group, 3 patients developed recurrent instability. Only 1 returned to the OR and was treated with repeat MPFL reconstruction and TTO. Revision MPFL reconstruction was offered to the other 2 patients, but they elected to continue nonoperative treatment of their instability.
DISCUSSION
Currently, few studies support or refute whether diagnostic arthroscopy with MPFL reconstruction provides any supplementary information, improvements in pain or motion, increased tourniquet time or complication rate, or differences in postoperative Kujala score or recurrent patellar instability. Proponents of routine arthroscopy cite the ability to address concurrent intra-articular pathological features, remove loose bodies, accurately evaluate the patellofemoral articular surface, and assess patellar tracking. 22 Physicians in support of only targeted arthroscopy question the ability to change outcomes by performing diagnostic arthroscopy in order to identify additional pathological features, as well as the efficacy of assessing dynamic tracking in a patient under anesthesia with an insufflated joint and possibly a tourniquet in place. Additional costs to the patient and the health care system also remain a concern.
In our study, diagnostic arthroscopy identified pathological features not previously noted on MRI in 31.7% of cases; these features were primarily meniscal in nature. This is in line with previous findings of the variability of MRI in diagnosing meniscal pathology, 5,20 which has shown sensitivities and specificities ranging from 50% to 90% and 66% to 84%, respectively. Furthermore, MRI has been shown to have between 76% and 78% interobserver reliability when compared with arthroscopy (the gold standard) for treatment of intraarticular knee pathology. 29 Despite diagnostic arthroscopy addressing intra-articular pathological features in 31.7% of cases, there were no statistically significant differences in pain scores between patients undergoing MPFL reconstruction with versus without arthroscopy. This leads us to question whether the pathological features addressed after diagnostic arthroscopy, mainly partial meniscectomy of small radial tears in conjunction with MPFL reconstruction, result in any clinically meaningful benefit. We acknowledge that associated chondral injury and resultant loose bodies from patellar instability can affect knee pain and function, and this was accordingly addressed in our targeted arthroscopic procedure group. This may highlight the usefulness of preoperative MRI in identifying clinically notable chondral injury and loose bodies over meniscal pathology. This finding is congruent with the study by Kita et al, 14 which found 97% of patellar dislocations to have some chondral lesion at time of MPFL reconstruction, although the lesions did not provide considerable discomfort, were not addressed with a procedure, and did not result in notable deterioration.
In the 42% of patients responding to final telephone interview follow-up, postoperative Kujala scores were significantly higher in the group that underwent MPFL reconstruction without arthroscopy. The average score in this group was 87.8, compared with a score of 70.1 observed in the group that underwent reconstruction with targeted arthroscopic procedure, which exceeded the minimal clinically important difference previously described as 10. 8 The causality of this finding is likely multifactorial and may be attributable to preoperative intra-articular pathology, duration of instability, patient expectations, recall bias, and/or iatrogenic injury related to arthroscopy. Slightly older age and greater TT-TG distance in the arthroscopy groups may have also contributed to this finding. It is unclear if there is a direct link between arthroscopy in conjunction with MPFL reconstruction and patient-reported outcomes.
There were no statistically significant differences between the arthroscopy and no arthroscopy groups with respect to complications, wound infections, or recurrent instability. This finding was expected, given the brief nature and relatively low risk of knee arthroscopy. Although there was significantly increased knee flexion at 2 weeks in the no arthroscopy group, there were no between-group differences observed at the 3-month mark. We also observed significantly longer tourniquet times for targeted arthroscopic procedure, which was expected given the preoperative planning to address intra-articular pathology. Interestingly, we did not observe a difference in tourniquet time between the no arthroscopy and diagnostic arthroscopy groups, suggesting that diagnostic arthroscopy may be performed without the adverse effects associated with longer tourniquet times. 24 The overall complication rate in our cohort was 4.9%, which is similar to the previously reported overall complication rates of 4.7% for arthroscopy 25 and 26.1% for MPFL reconstruction. 27 Although there were no significant differences in complications of MPFL reconstruction with and without arthroscopy, there remain considerable costs associated with additional procedures. Although absolute cost varies with procedure type, number of procedures performed, and payer-specific agreements, these added procedures increase the cost to the patient and payer. Adding an arthroscopic setup increases the overall procedure cost as well. This is a noteworthy factor in light of increasing health care costs and the transition to bundled payments. Further studies, including prospective randomized controlled trials, would be helpful to determine when surgeons should perform diagnostic arthroscopy at the time of MPFL reconstruction.
Our study has several limitations. The decision to proceed with diagnostic arthroscopy at time of MPFL reconstruction was not based on any particular algorithm or randomization and was a shared decision-making process between patient and surgeon. This may have imparted selection bias into the different arthroscopy groups. These data may also be confounded by the older age and greater TT-TG distances observed in the MPFL reconstruction groups undergoing arthroscopy. Furthermore, we did not account for the time between MRI and surgical procedure, which may have affected the pathological features appreciated on arthroscopy but not MRI. Despite similar procedure technique and rehabilitation protocol, we did not account for individual variations in rehabilitation or bracing protocols, which may have confounded pain and motion results. In addition, no specific protocol was utilized to track postoperative complications, which may have imparted recording bias. Performance bias may also have been imparted into the data, as each group was variable with regard to senior surgeon case mixture (G.P.T., D.L.R., and D.C.W.).
Because this is the first study to compare MPFL reconstruction with and without arthroscopy, we were unable to identify an appropriately similar study to perform a pre hoc power analysis. A post hoc analysis revealed that we would need to include 688 patients for 80% power in detecting postoperative complications. Therefore, despite observing statistically significant differences in tourniquet time, postoperative motion, and Kujala scores, this study may have been underpowered, with sample sizes too small to find true differences in complication rates. Our study also did not quantify percentage of meniscectomy, which would aid in determining the magnitude of an intervention performed. The telephone follow-up response rate of 42% may also have imparted transfer bias into the Kujala score analysis. Last, our comparison groups were not homogeneous with respect to patellar and trochlear cartilage injury. The targeted arthroscopy group had significantly higher preoperative modified Outerbridge scores, likely contributing to the lower patientreported outcome scores. However, this disparity in cartilage injury is rational, considering the most common procedure performed for the targeted arthroscopy group was loose body removal. Interestingly, the diagnostic arthroscopy group had lower Kujala scores despite lower preoperative patellar and trochlear Outerbridge scores, suggesting that diagnostic arthroscopy even in the setting of more preserved cartilage does not improve patient-reported outcomes.
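For context, a post hoc sample-size calculation of this kind can be sketched with statsmodels as below. The two complication proportions are illustrative placeholders (the paper reports only the pooled 4.9% rate), so the resulting n will not reproduce the 688 figure exactly.

```python
import math
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h effect size for two assumed complication proportions
es = proportion_effectsize(0.049, 0.12)   # placeholder rates for illustration

n = NormalIndPower().solve_power(effect_size=es, power=0.80, alpha=0.05,
                                 ratio=1.0, alternative="two-sided")
print(f"required sample size per group: {math.ceil(n)}")
```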
CONCLUSION
Diagnostic arthroscopy at the time of MPFL reconstruction may result in findings not previously appreciated on MRI. The clinical benefit of addressing these findings remains unclear despite the increased cost. Postoperative pain, range of motion, and risk of complications were not significantly different among the study groups at 3 months postoperatively. Patient-reported outcomes at 40 months postoperatively were higher in patients undergoing MPFL reconstruction without arthroscopy. Surgeons performing MPFL reconstruction should consider diagnostic arthroscopy a safe adjunct but one without clear benefits in improving pain or patellar stability.
Inversion of oxygen potential transitions at grain boundaries of SOFC/SOEC electrolytes
Solid oxide fuel/electrolyzer cell (SOFC/SOEC) converts energy between chemical and electrical forms inversely. Yet electrolyte degradation takes place much more severely for SOEC than SOFC during long-term operations. By solving transport equations, we found very large oxygen potential gradients and sharp oxygen potential transitions at grain boundaries of polycrystalline SOFC/SOEC electrolyte. Surprisingly, an inversion of oxygen potential transitions was identified, suggesting a fundamentally different transport mechanism for minor electronic charge carriers. Such findings could be critical to understanding and eliminating SOFC/SOEC degradation in practical applications.
Introduction
Solid oxide electrochemical cell (SOC) [1-3] converts energy between fuel and electricity and has been considered a key component of the future energy economy.
It is typically operated between 600 and 1000 °C, which enables flexible fuel selection and high efficiency. On the other hand, due to the high operation temperatures and slow heat-up cycles, continuous operation over thousands of hours is preferred, which places a strict requirement on its long-term stability. Therefore, understanding and eliminating the degradation are of great interest [4-13], and we shall focus on the electrolyte part in this work, which has been argued to be the main cause of continuous impedance increase during cell operation 4 . The electrolyte of SOC is a dense ceramic layer that conducts oxygen ions, with yttria-stabilized zirconia (YSZ) being the most popular one. While most efforts have been taken to improve ionic conductivity and minimize ohmic loss across the electrolyte, it has been pointed out that the minor electron/hole conduction cannot be neglected as long as local equilibrium is considered, so it is of equal practical importance [4-8]. Specifically, electron/hole conduction is a necessity to determine the chemical potential of molecular oxygen (oxygen potential in short hereafter) as well as its spatial distribution inside the electrolyte. The local oxygen potential then defines the thermodynamics and affects various material properties, including concentrations and conductivities of electrons and holes 14 , phase stability 15 , mechanical properties 16 , chemical expansion and stress 17,18 , microstructural evolution 19-21 , pore/oxygen bubble formation 4 , and ultimately degradation/stability of the electrolyte.
The solution of the oxygen potential distribution is based on transport equations 7,8,22,23 , and previous treatments all assumed a homogeneous "effective medium" whose transport properties solely depend on the local oxygen potential, analogous in some sense to a single-crystalline electrolyte. Yet all electrolyte layers are polycrystalline, sintered from ceramic powders, and it is well known that grain boundaries (or, more strictly speaking, space charge layers extending several nanometers from the grain boundary cores) are blocking to oxygen ions 24,25 and hence have transport properties distinct from the grain interiors. Their effect on the oxygen potential distribution inside the electrolyte is not known to any extent, which shall be the theme of the present study.
In practical applications, SOC can be operated reversibly, either as a solid oxide fuel cell (SOFC) utilizing chemical fuels for power generation or as a solid oxide electrolyzer cell (SOEC) using electricity to produce fuels. A faster degradation rate has been identified for SOEC than SOFC, leading to observable pore/oxygen bubble generation and line-up along grain boundaries inside the electrolyte 4,9-11 . The cause has been attributed to larger operational current densities and higher oxygen partial pressures on the anode side of SOEC. However, it cannot be the whole story because the above reasons fail to explain (i) why reversible operations between SOFC and SOEC modes hugely eliminate degradation 4 and (ii) why pores/bubbles preferentially form at the grain boundaries perpendicular to the electric field direction 4,9-11 (oxygen over-pressure equilibrated with the local oxygen potential would indicate isotropic pore/bubble formation at any grain boundaries, irrelevant to the field direction).
Strikingly, Graves et al. 4 demonstrated that such degradation can be eliminated by reversible cycling between the two operation modes. Although the present study is primarily conducted on YSZ, whose ionic and electronic conductivities are best known 14 , the phenomenon is general to any mixed ionic and electronic conductor, for both electrolyte and electrode materials.
II. Formulation of the problem

At steady state, we assume no internal chemical reactions between ionic and electronic species (no generation/consumption of molecular oxygen inside the dense electrolyte), so both the ionic current density $I_{\mathrm{O}^{2-}}$ and the electronic current density $I_{eh}$ remain constant throughout the electrolyte.
While the above two methods are mathematically equivalent, the second one turns out to be numerically much simpler for solving the considered polycrystalline problem (a multilayer problem) and will be used to obtain numerical results in Section III.
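As an illustration of why the constant-current formulation is convenient for the multilayer problem, the sketch below integrates the oxygen-potential profile layer by layer and "shoots" on the electronic leakage current so that the profile matches the oxygen potentials imposed at the two electrodes. The governing relation used here, $d\mu_{O_2}/dx = 4e\,(I_{\mathrm{O}^{2-}}/\sigma_{ion} - I_{eh}/\sigma_{eh})$, follows from the constant-current condition together with local equilibrium among O2−, electrons, and holes (up to sign conventions); all material parameters, boundary conditions, and the $p_{O_2}^{\pm 1/4}$ dependence of the electron/hole conductivities are illustrative assumptions, not the values behind the figures in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

kT = 8.617e-5 * 1073            # eV at ~800 C
L = 1.0e-3                      # electrolyte thickness, cm (10 um)
d_g, d_gb = 2.5e-4, 1.0e-6      # grain size 2.5 um, grain-boundary width 10 nm

sig_i_bulk = 2e-2               # bulk ionic conductivity, S/cm (assumed)
sig_i_gb = sig_i_bulk / 100     # grain boundaries 100x more blocking to O2-
sig_e0, sig_h0 = 1e-7, 1e-8     # e/h conductivities at pO2 = 1 atm (assumed)

def sigma_eh(mu):               # mu = kT*ln(pO2); sigma_e ~ pO2^(-1/4), sigma_h ~ pO2^(1/4)
    return sig_e0 * np.exp(-mu / (4 * kT)) + sig_h0 * np.exp(mu / (4 * kT))

def sigma_i(x):                 # periodic grain / grain-boundary layering
    return sig_i_gb if (x % (d_g + d_gb)) > d_g else sig_i_bulk

def dmu_dx(x, mu, I_i, I_eh):   # constant currents; e = 1 with mu in eV, sigma in S/cm
    return [4.0 * (I_i / sigma_i(x) - I_eh / sigma_eh(mu[0]))]

mu_fuel = kT * np.log(1e-18)    # fuel-side boundary, pO2 ~ 1e-18 atm
mu_air = kT * np.log(0.21)      # air-side boundary
I_total = 0.5                   # total current density, A/cm^2 (flip sign for the other mode)

def mu_end(I_eh):               # integrate across the multilayer stack
    sol = solve_ivp(dmu_dx, (0.0, L), [mu_fuel], args=(I_total - I_eh, I_eh),
                    max_step=d_gb / 4, rtol=1e-6)
    return sol.y[0, -1]

# Shoot on the electronic current so the profile hits the air-side value;
# the bracket may need widening for other parameter choices.
I_eh = brentq(lambda I: mu_end(I) - mu_air, -1e-3, 1e-3)
print(f"electronic leakage current: {I_eh:.3e} A/cm^2")
```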
III. Results
Numerical results were obtained to illustrate the effect of grain boundaries for two cases; since no information about grain-boundary electronic conductivity is available, it was taken to be the same as in the bulk. The conductivity data are plotted in Fig. 1. The calculated oxygen potential distributions and gradients for Cases (1) and (2) are shown in Figs. 2 and 3, respectively. Several features should be noticed. First, all the curves in Figs. 2a and 3a have a sigmoid shape, with the largest oxygen potential gradient at the oxygen potential corresponding to the electronic conductivity minimum ($\sigma_{eh}$) in Fig. 1. Under the same magnitude of current density, the oxygen potential distribution is steeper under SOEC mode than under SOFC mode (by comparing red and blue curves in Figs. 2 and 3), and steeper for a thinner electrolyte than a thicker one (by comparing Figs. 2 and 3). These trends have been recognized and are consistent with previous theoretical studies 7,8,22,23 . Second, although grain boundaries are 100 times more blocking to O2− than the bulk, the oxygen potential distributions inside polycrystals do not differ much from the corresponding references inside single crystals. This is understandable since the grain boundary is much thinner than the grain size (10 nm vs. 2.5 μm in the present cases) and constitutes only a small portion of the total thickness. Third, the existence of grain boundaries slightly sharpens the oxygen potential distribution under SOEC mode and slightly smoothens it under SOFC mode. Fourth, there is an oxygen potential transition at each grain boundary, with a large oxygen potential gradient. For Case (1) in Fig. 2, the oxygen potential gradients at grain boundaries are about 10 times smaller than the largest one at the electronic conductivity minimum in the bulk; for Case (2) in Fig. 3, the ones at grain boundaries are much larger than the largest one in the bulk.
Lastly, and most interestingly, the oxygen potential transitions at grain boundaries are inverted between the two modes: the gradients have opposite signs under SOFC vs. SOEC operation.
IV. Discussions
Obviously, the oxygen potential transitions at grain boundaries come from the corresponding jump in the electrostatic potential gradient there: a large chemical potential gradient of electrons and holes is required to suppress the over-flow of electrons and holes. Under the assumption of local equilibrium, the chemical potential of electrons and holes is always equilibrated with the oxygen potential. Therefore, the large chemical potential gradient of electrons and holes is reflected in the oxygen potential transitions plotted in Figs. 2a and 3a. Conceptually, this is in the same spirit as the overpotential across the electrode/electrolyte interface: avoiding a discontinuity of fluxes at heterogeneous interfaces. So the oxygen potential transition at a grain boundary can be integrated over its thickness to define a "grain-boundary overpotential". The difference is that the electrode overpotential drives chemical reactions involving oxygen gas, while the grain-boundary overpotential drives ion/electron/hole fluxes without reactions. To weaken such oxygen potential transitions at grain boundaries, i.e., to lower the grain-boundary overpotential, one should make the bulk and grain-boundary conductivities more alike. This is along the same route by which people have been trying to decrease the space-charge potential and increase the grain-boundary conductivity of O 2− via grain-boundary engineering.
Now we come to the question: why is there an inversion of the oxygen potential transition at grain boundaries between SOFC and SOEC? To address this, one should first notice that the ionic and electronic currents flow in opposite directions in SOFC but in the same direction in SOEC 5 . In a YSZ electrolyte, the chemical potential of O 2− is fixed because extensive aliovalent doping pins the oxygen vacancy concentration 7 . Therefore, the ionic current is driven by the electrostatic potential alone. For electrons and holes, the concentrations can differ by many orders of magnitude between the two electrodes, so in addition to the electrostatic potential, the chemical potential of electrons and holes can also drive the electronic current. SOFC operates under an oxygen potential difference, or a Nernst voltage, to produce electricity. By definition, the Nernst voltage is then larger than the integral of the electrostatic potential gradient. In the electrochemical potential of electrons and holes, the electrical and chemical parts are opposite in sign, and the latter, being larger in magnitude, determines the direction of the electronic current. Therefore, the ionic current, controlled by the electrostatic potential, flows opposite to the electronic current.
In comparison, SOEC operates under an applied voltage across an oxygen potential difference. By definition, the integral of electrostatic potential gradient should be larger than the Nernst voltage. Therefore, the electrical part is larger than the chemical part in the electrochemical potential of electrons and holes, and ionic and electronic currents flow in the same direction.
Back to the polycrystal problem: under both SOFC and SOEC modes, we have a positive oxygen potential gradient in the bulk (Figs. 2 and 3). As discussed earlier, the electrostatic potential gradients are larger at the more O 2− blocking grain boundaries. As a result, the chemical part of the electrochemical potential of electrons and holes needs to cancel the electrical part to prevent the over-flow of electrons and holes that would otherwise occur. In SOFC, the electrical and chemical parts are opposite in sign, so the chemical part only needs to become larger without changing sign. Therefore, the oxygen potential gradients are positive both at grain boundaries and in the bulk. In SOEC, the electrical and chemical parts have the same sign, so in order to cancel the electrical part, the chemical part must change its sign. Therefore, the oxygen potential gradient becomes negative at grain boundaries. This clarifies the origin of the inverse oxygen potential transitions under the two operation modes.
At this point, it is interesting to note that such oxygen potential transitions share the characteristics of the electrode overpotential. In SOFC, the oxygen potentials at the electrode/electrolyte interfaces are bounded by the two gaseous atmospheres. At both the hydrogen electrode (left electrode in our definition)/electrolyte interface and the oxygen electrode (right electrode in our definition)/electrolyte interface, the oxygen potential is lower on the left-hand side than on the right-hand side. The same trend applies to grain boundaries: the oxygen potential is lower on the left-hand side than on the right-hand side, as shown in Figs. 2a and 3a. Similarly, in SOEC, the oxygen potentials at the electrode/electrolyte interfaces lie outside the range set by the two gaseous atmospheres. That is to say, at both electrode/electrolyte interfaces, the oxygen potential is higher on the left-hand side than on the right-hand side. Again, the same trend applies to grain boundaries, as shown in Figs. 2a and 3a. Therefore, the electrode overpotentials have the same signs as the oxygen potential transitions, or grain-boundary overpotentials, and it is well known that electrode overpotentials are inverted between SOFC and SOEC! More interestingly, this raises the following question. Jacobsen and Mogensen 7 wrote that the "oxygen pressure inside the electrolyte will never become higher than the pressure corresponding to the electrode potential of the oxygen electrode and never lower than corresponding to the electrode potential of the hydrogen electrode, irrespective of which mode or condition for the cell operation." While the statement still holds under SOFC mode, it may break down, and the oxygen potential could become un-bounded, in a polycrystalline electrolyte with O 2− blocking grain boundaries under SOEC mode, and the more O 2− blocking the boundaries, the more so. That is to say, the highest oxygen pressure could occur at the grain boundary of the electrolyte next to the oxygen electrode, which provides the highest driving force as well as preferential nucleation sites for oxygen bubble formation. Such a possibility has also been discussed by Chatzichristodoulou et al. 23

Lastly, one should note that the present study is carried out at the continuum level. In reality, grain boundaries and space-charge layers are only a few nanometers thick, and charge neutrality does not hold there either. The influence of atomic discreteness would be interesting yet difficult to consider. We also assume that all grain boundaries have the same transport properties, in the same way AC impedance measurements do. However, grain boundaries with distinct misorientations and structures could behave differently, which may lead to degradation preferentially at some special grain boundaries, for example the more O 2− blocking ones. These complexities could smear the phenomena, but we believe the general trend still holds.
V. Conclusions
(1) By solving the transport equations in a polycrystalline electrolyte, we identified sharp oxygen potential transitions at grain boundaries whose directions are inverted between SOFC and SOEC operation modes.
(2) The ionic and electronic currents flow in opposite directions in SOFC and in the same direction in SOEC, which is rooted in the different (electro-)chemical forces that drive the ionic and electronic currents under SOFC/SOEC operation. The inversion of the oxygen potential transitions has the same origin.
(3) It is suspected that the inversion of the oxygen potential transitions leads to different stress states at grain boundaries under SOFC/SOEC modes, which is related to their contrasting degradation kinetics.
(4) The oxygen potential is no longer bounded by the two terminal values in a polycrystalline electrolyte operated under SOEC mode, and the internal oxygen pressure can be highest at the grain boundary of the electrolyte next to the oxygen electrode.
International Migrant Remittances in the Context of Economic and Social Sustainable Development. A Comparative Study of Romania-Bulgaria
The economic stability is the main goal of every country's administration, contributing to the decrease of uncertainty, creating an attractive business environment, attracting foreign direct investment and fostering economic growth, which raises the standard of living, reduces income inequalities, supports the sustainable development of the country and curbs the migration process. Migration flows lower the demographic resources of the states going through this process and consequently compromise the ability of future generations to support sustainable economic growth. Migration is a process with an aggressive and alarming manifestation in Romania and Bulgaria, raising the problem of the future capacity of these countries to ensure long-term economic and social sustainability and requiring an analysis framework from a scientific perspective. The current study proposes a comparative analysis to identify the important determinants of international migration in the EU28 and examines the impact of remittances on economic growth/stability and income inequality in Romania and Bulgaria, two Central and Eastern European countries, for the period between 1990 and 2015. The main contribution of the present study consists in emphasising the determinants common to the two countries regarding the migration process and, at the same time, in providing solutions to improve government policies so as to contribute to economic and social sustainability. The authors employed a multiple regression model and correlation analysis, and tested 8 hypotheses for Romania and Bulgaria. The results indicated that the main determinants of the migration process in Romania and Bulgaria are the inflation rate, income inequality and household consumption expenditure. Furthermore, the results indicated that there is not a direct relationship between the remittances received/capita and the GDP/capita growth rate in Romania and Bulgaria. In addition, there is a direct relationship (negative and of average intensity) between the remittances received/capita and the price inflation rate in Romania but not in Bulgaria. For both Romania and Bulgaria, the results indicate that there is a direct relationship of similar intensity between the remittances received/capita and the unemployment rate, household final consumption and income inequality.
Introduction
Sometimes, migration is perceived as a threat to the labour market, to the security of incomes, to employment and to local culture. According to Giddens [1], people accept paying taxes to show solidarity with people like them, who share the same values and principles and who are not immigrants. The problem of migrants is more delicate. Migration flows affect the demography of both the sending country and the host country. The decrease of the active population of a state means a decrease in human resources, births, people who contribute to the state budget by paying taxes and fees, and qualified workers, which in the long term weakens the economic basis of a country and its capacity to support development. Migration has ambivalent effects for the countries of origin and for the host countries. In time, the countries of origin are the most affected, because migrants tend to head for the developed countries, which through migration solve many of their own demographic problems. Any economy characterised by massive migration outflows tends to compensate for the losses in human resources by exploiting the land and the underground and by energy-intensive production, raising questions about the possibility of future sustainable development. We start from the premise that there is a negative relationship between migration outflows from developing countries and their sustainability.
Migration is considered a wealth-generating phenomenon for the country of origin and the host country, in the context of over 200 million people living in a country other than their country of origin [2]. In the paper Migration and Remittances: Recent Developments and Outlook, the causes of migration include income differences and political and demographic factors. The same paper states that minimising the negative effects globally and amplifying the positive ones are the main policy concerns associated with migration [3]. Migration has both positive and negative effects. According to Rosenzweig [4], for example, the positive effects of migration consist, among others, in the increase of the price of qualified work in the emigrants' market of origin, an increase of income through remittances and an increase of investments, especially in education. The negative effects are correlated with demographic decline and labour market imbalances and, according to Antman [5], with psychological effects on the families who stay in the country of origin. The effect of remittances is considerable for emerging and less developed states. If remittances accentuate the inequalities in the sending country, they will stimulate further emigration [6]. Emigration is a self-feeding process: migrants established in other territories help other people to migrate, offering at the same time an example of success and of beneficial opportunities away from the country of origin, where the living conditions are unsatisfactory for the remaining population. In the case of emerging states (countries with high development potential like Brazil, Russia, India, China, Mexico, Turkey, Poland, Romania, Bulgaria and many others), in 2009 global remittances represented 1.9% of GDP, and in the case of less developed states they represented 5.4% of GDP. The difficulties for these countries appear when migration takes the form of brain drain [2]. Sometimes, migration contributes to the demographic balance of the destination countries and to the economic balance of the countries of origin. According to Skeldon [7], the demographic balance is reflected in the composition of the population from the point of view of births, deaths and net migration (the difference between immigrants and emigrants). Births and immigration contribute to population growth, while deaths and emigration contribute to population decline. According to the demographic balance, emigration determines the decrease in the number of individuals in a country, especially of the active ones. Remittances are money earned by immigrants in the host country, which goes into circulation in the market of the country of origin. The volume of remittances is underestimated, because it does not always reach the immigrants' country of origin through official channels. If in the developed countries foreigners represent cheap labour used in some sectors for the purpose of remaining competitive on the market, remittances represent important sources of income for the migrant-sending countries [8].
Their impact is generally positive, with a multiplier effect. Remittances directly and indirectly raise the national income, the rates of investment and consumption (a great part of remittances is used to purchase land and housing) and demand. In addition, they stimulate production and the creation of jobs, and implicitly the income of families who do not receive remittances, and they are channelled into the educational and health systems. However, they also have negative effects, such as an increase in income inequality, a decrease of the recipients' interest in being active in the labour market and the creation of dependence on these amounts, because the beneficiaries of remittances lose their motivation to work, relying exclusively on the sums sent regularly by family members settled abroad, as well as inflationary pressures, because remittances are money without corresponding coverage in domestic production. The withdrawal of the population from the labour market and the inflationary pressures lead to an imbalance which, over the medium and long term, puts pressure on the economic safety of future generations and implicitly on sustainability. We must state that remittances are not totally reinvested in the formal economy. The distribution of these incomes depends on the situation of the beneficiary families, whether they are poor or rich [9]. The degree of wealth of a family benefitting from remittances dictates how they will be used: poor families channel their remittances towards expenses for subsistence products, while rich families invest them especially in real estate, education and health.
Remittances are used to purchase real estate as a safe, long-term investment. This is the context in which migration is seen as a positive externality. Romanians and Bulgarians try, through migration, to secure their material status. Remittances are directed towards the purchase of land, buildings or constructions. Often, the decision to emigrate is directly proportional to the possibility of purchasing a house. As the income in the country of origin is too low to reach such an objective, people choose to migrate. Many Romanians and Bulgarians send home a part of their income earned abroad with the precise purpose of investing in real estate, intending to go back to their country of origin and stay there.
In the speciality literature regarding remittances, there are studies on: the relationship between remittances and the development of the financial sector [10], the effects of remittances on children's education and school attendance [11], the relationship between remittances and legislation [12], the net effects of migration and remittances on income distribution [13], the effect of remittances on the residents of an economy [14], the relationship between immigrants' behaviour, their motivation to migrate and their attitude towards remittances [15][16][17], the relationship between remittances and economic growth [18][19][20] and GDP growth [21], the influence of remittances on fiscal sustainability in dependent economies [22] and on sustainable development [23].
This research aims to contribute to a less studied field: the investigation of the determining factors of the migration process and of the impact of international migrants' remittances on the sustainable economic and social development of Romania and Bulgaria, countries geographically located in Central and Eastern Europe. According to Daianu et al. [24], Romania and Bulgaria are among the countries with a relatively high dependence on remittances, which offer security especially to poor and unemployed people, contribute to the increase of wealth and of national income, finance imports and decrease the current account deficit. In agreement with Haller [25,26], we make the distinction between sustainable economic development and sustainable social development: the economic one is synonymous with growth, representing an increase of macro-indicators. When growth (economic development) influences a society through the increase of wealth, we are speaking about sustainable social development. Not every economic advance influences society positively; therefore it is necessary to make the distinction between the two concepts, one quantitative and the other qualitative.
The investigation of the determinants of the migration process was performed using economic factors such as the unemployment rate, the inflation rate and the level of expenses per capita to analyse the macroeconomic balance. The analysis of economic growth was performed by measuring the impact of remittances on the growth of GDP/capita and on income inequality in Romania and Bulgaria. The Gini coefficient used here captures the social dimension of sustainable development. As presented above, there are few papers explaining the impact of remittances on sustainability. The current study intends to contribute with its results to the literature in the field, analysing the impact of remittances on sustainable economic and social development in Romania and Bulgaria, countries for which no such comparative studies have been performed. The theoretical model tested is obtained as a result of the review of the speciality literature, including relevant studies from the field of migration.
2.1. Conceptual Approach
The topic of migration is neither new nor little discussed; on the contrary, migration is analysed from multi- and transdisciplinary perspectives: geographical, economic, sociological, psychological and anthropological, as well as from the perspective of causalities, theories and effects (Table 1).

Table 1. Migration: perspectives of analysis.
Migration theories have been the object of much research. Budnik [54] analyses temporary migration (stays of up to a year in another territory) at several levels of analysis: the reasons for the decision to emigrate, the choice of destination, the markets, the utility and the mechanisms. Galbraith [36] associates the causality of migration with economic inequity. The population living illegally on the territory of a country forms a sub-class of second-hand residents, with no political representation and no civil rights. This sub-class is blamed for worsening conditions on the labour market, the natives being affected by unemployment. Providing migrants with civil rights results in bigger migration flows, as networks are formed and start functioning, while the entire phenomenon takes on the nature of a political issue.
De Hein [60][61][62] analyses the myths of migration, reaching a series of conclusions: the period spanning the end of the last century and the beginning of the present one is not characterised by massive migration waves; the cases of migration are not exclusively stories of poverty and misery but, among other things, the manifestation of knowledge and the existence of networks; and the relationship between migration and development is neither linear nor directly proportional. Furthermore, the policies of development and trade liberalisation are not the most effective remedies against this phenomenon, as development stimulates rather than inhibits migration. Coyle [63] also refers to the myths of migration, among them the belief that migrants reduce welfare and weaken the health systems. However, first-generation immigrants bring a net contribution to the state through the taxes they pay, which is higher than the value of the medical care and of the benefits they receive. There is a difference between the economic effects of old migration and those of new migration. Coyle agrees that the expansion of migration requires an adaptation of infrastructure in the host country (more dwellings, more hospitals, more schools) and that the migration waves are related to the income inequities in the countries of origin. The aggravation of poverty puts much pressure on migration, the destination being the rich countries.
According to the World Bank [3], migration brings advantages to the countries of origin if they are states with low incomes. The remittances represent for poor countries the main source of money for currency exchanges, the main possibility to reduce utter destitution, the main source of investments and capital accumulation. There are situations when migration becomes a mechanism for economic and demographic balance.
Studying migration in the EU, Kerr and Kerr [43] reached the conclusion that migration flows are asymmetrical and heterogeneous: Sweden is preferred by refugees, Germany has a highly represented Turkish minority, while people of Moroccan origin prefer the Netherlands. The immigrants' participation rates in the labour market are lower than those of the natives, especially in the countries where the benefits offered are abundant. The immigrants with a high level of education come from European states (a third are from developing European countries), but their level of education is poorer than the locals'. From the standpoint of causality, the authors consider that the mobility is due to income differences, as well as to the living conditions and the oppressive regimes in the countries of origin. The choice of destination depends on financial factors, on personal security, on the distance from the country of origin and on the existence of migration networks.
Starting Point-Initial Migration Theories and Models
Free circulation across wide territories enables the mobility of the people looking for conditions able to guarantee or improve their wealth. The theories of mobility are not recent. One of the most famous classical theories was formulated by Zelinsky in 1971 [29]. The theory of territorial mobility considers migration a synonym of circulation, seen as a change of residence for a determined or undetermined period of time. The migration process is divided into five stages, starting with the Middle Ages and up to a future left at the discretion of our imagination or of reality, as events occur in time. The five stages go from a relatively low mobility, specific to the pre-modern period, to a relatively high one, specific to the modern period.
Migration is an intensely theorised topic, developed in micro- and macro-approaches (theories) [64], as well as mixed approaches combining the first two categories.
The micro-theories emphasise aspects of the system of values such as migrants' wishes, expectations and resources, analysing the factors that individually influence the decision to migrate. This group of theories includes the expectancy-value model, the stress-threshold model and the cost-benefit model. The expectancy-value model, developed by Crawford [65], is based on the hypothesis that behaviour depends on expectations and values. It is a cognitive model, according to which migrants make decisions according to economic factors and education [66]. The stress-threshold model, developed by Wolpert [67], describes the rational migrant's behaviour before making the decision but not necessarily after it. According to the cost-benefit model, any decision generates positive effects (benefits) higher than the costs [68]. The potential migrant compares the costs (the available financial resources and the psychological resources invested to reach their purpose) with the benefits obtained (financial gains higher than those earned in the country of origin, in addition to personal security). The micro-theories explain how the macro- and mezzo-factors are reflected in the individual decision to migrate.
The macro theories (the pioneering gravity model, the push and pull model) are focused on economic, demographic and political aspects and on characteristics specific to certain regions and countries, for instance legislation or global changes. The pioneering gravity model, or Ravenstein's law (1885) [27], considers that migration has economic causes (well-paid jobs) and that the number of migrants diminishes with the increase of the distance from the country of origin [68]. The push and pull model, formulated by Lee in 1966 [64], considers that migration depends on factors specific to the place of origin, which push people to leave, and on factors that pull individuals towards destinations with higher potential [56]. The push factors are economic (high unemployment rate, low level of payment, small income per capita). The pull factors are correlated with the regulations regarding migration and with the situation of the labour market in the host country.
The macro theories provide the best understanding of the factors promoting the voluntary migration phenomenon (the result of the personal decision to emigrate, based on several causes analysed by macro-theories) and the best explanation for involuntary migration [40,41].
The combination of micro and macro approaches formed the so-called mixed theories, of which the most relevant is the cost-usage theory, developed by Bogue. This theory combines specific elements of the push and pull model with elements of the cost-benefit model, by analysing the advantages and disadvantages offered by the countries of origin and of destination [69,70].
The mezzo theories analyse the person-community relations, which, in their turn, influence the decision to leave the place of origin to settle down elsewhere. These theories actually fill the gaps left by the other two approaches. The mezzo theories also explain why voluntary migration is a long-term process and why some regions are more susceptible to the phenomenon, regardless of their position of migrant senders or migrant receivers.
The micro, macro and mezzo theories combine, forming mixed theories with mutual components. The micro, macro, mezzo and mixed theories are divided into two other categories, one explaining the causes of migration (the neo-classical theory, the new economics of migration theory, the dual labour market theory, the world systems theory), and one explaining the persistence of the phenomenon in time (the network theory of migration, the migration systems theory, the cumulative causation theory).
The neo-classical theory considers that migration is the result of the differences of income among markets and countries, that is, it is the result of the heterogeneity of the labour market [34]. The new economics of migration theory starts from the neo-classical theory but it moves the decision of migration from the individual to the small community, that is, the decision to leave one's country of origin is not taken by each person apart but by their family [37,38]. Dual labour market theory correlates migration with structural changes, an important part being played by the demand. Developed by Piore in 1979 [32], this theory refers to the duality of the occupational structure and of the economic organisation in the developed states, where the capital-intensive branches provide good, safe and well-paid jobs, while the labour-intensive branches provide small wages, the capital being underused [70]. The world systems theory, developed by Collins et al. [35], considers migration the result of structural changes on the world markets induced by globalisation, by the increasing interdependence among countries and by the appearance of new forms of production.
The network theory of migration analyses the factors maintaining migration in time and in space, such as the existence of communities with a similar culture in the host country (networks). The people's behaviour is not determined only by their own culture, by individual attitudes and by demographic features. An important role is played by social relations, stimulating or constraining the behaviour of the people involved [71]. According to the migration systems theory, migration influences the social, cultural, economic and institutional conditions in the host countries, and in the countries of origin [34]. Cumulative causation theory, initially developed by Myrdal [72], considers migration a self-generated and self-fed process, due to the existence of networks and of a migration culture. In a vicious circle, migration is presented as a divergent and never a convergent process [60][61][62].
Based on the analysis of these migration theories, we propose in Table 2 a theoretical model of migration organised by types of migration theory, dimensions and indicators.

Table 2. A theoretical model of migration: theories, dimensions and indicators.

Theories explaining the causes of migration (Theory/Concept; Dimension; Indicators):
- Neoclassical [34]; heterogeneity of the labour market; difference of income among markets and countries.
- The new economics of migration [37,38]; decision aspects; the decision to emigrate is made by the family.
- Dual labour market [32]; structural changes; demand, and offer: better, safer and well-paid jobs in capital-intensive branches vs. work-intensive jobs and low salaries in branches where the capital is under-used.
- World systems [35]; structural changes; new forms of production.

Theories explaining the resistance of the phenomenon in time (Theory/Concept; Dimension; Indicators):
- The network of migration [50,54]; cultural; the factors maintaining migration in time and space.
- Migration systems [1,32,34]; aspects in the host countries and in the countries of origin; social, cultural, economic and institutional aspects.
The literature presents various theories emphasising several variables which explain the level and the causes of the migratory process. Based on the assumption that the macro theories provide the best understanding of the factors promoting the voluntary migration phenomenon and the best explanation for involuntary migration [40,41], the authors of this study decided to emphasise the economic characteristics.
Economic stability is the main goal of every country's administration, and it is measured by the degree of achievement of its economic goals, including the increase of individual welfare, mirrored in measures such as economic growth (GDP/capita growth rate), low unemployment, price stability and growing consumption [75].
The promotion of economic stability contributes to the decrease of uncertainty, creates an attractive business environment and attracts foreign direct investment, contributing to economic growth, which increases the standard of living, reduces income inequalities, represents a sustainable development path for the country and, among other things, stops the migration process.
Remittances remain one of the primary sources of financing for economic growth in dependent economies.
As we can see, the literature does not present comparative studies on Romania and Bulgaria regarding the impact of remittances on economic growth and income inequality and on the evolution of the migratory process within EU28. The authors propose in this comparative study to identify the important determinants of the international migration in the EU28 and analyse the impact of remittances on economic growth/stability and income inequality in Romania and Bulgaria for the period 1990-2015.
This study will enable the authors to identify the common elements of the two countries regarding the migration process and at the same time will be able to provide solutions to improve government policies in reducing inequalities in society.
To examine and highlight the main causes of the migration process and to illustrate the main economic impacts of remittances in Romania and Bulgaria within the EU28, in Figure 1 we suggest the research model to be tested.
Analysis of Remittances in the Case of Romania and Bulgaria
Lubambu [76] highlights the multiple roles of remittances: social insurance, when they are destined for consumption expenses, thus contributing to the decrease of severe poverty; and investment, especially in the medical and educational sectors and in purchasing goods, albeit less sustainable ones in the long term.
In the case of Romania, remittances increased until 2008, then slightly decreased towards 2009, the peak year of the financial crisis. Between 2009 and 2012, remittances slightly increased compared to 2009, without reaching the level of 2008. After 2012, remittances had a relatively constant trend, with a slight decline in 2015. Starting with 2000, the volume of remittances increased, Romania being among the main beneficiaries of remittances in the world. In 2008, remittances represented 3.3% of the Romanian GDP (the fourth place in the world). However, their volume is difficult to estimate because, according to Andrén and Roman [77], only 40% are officially sent into the country. The same authors show that between 2001 and 2003 the value of remittances was 2 billion dollars per year, higher than foreign direct investment, and that in 2009 remittances reached 9.4 billion dollars. In 2012, remittances represented 2.2% of the Romanian GDP. Analysing the migration process in the case of Romania, Hărău [78] characterises remittances as financial transfers compensating for the 'brain-drain' phenomenon and for the losses of human capital through migration outflows. Remittances increase the income of the country from external sources, with effects on the standard of living of the beneficiaries and on local development through consumption and investments, though without a consensus regarding their contribution to economic growth and job creation. De Sousa and Duval [79] analyse the relationship between geographical distance and remittances in the case of Romania for the period 2005-2009. Their conclusion is that remittances grow proportionally with the geographical distance, with a descending tendency specific to a small group of countries depending on the size of the country and the state of the financial and labour markets. Silaşi and Simina [80] show that, after 2002, remittances supported the economic development of Romania. The remittance flows became higher than foreign direct investment, acting as compensating measures that help the beneficiaries protect themselves in conditions of economic regression, without acting as a capital source for economic development. Consequently, Romania benefits from remittances only in the short term, and if it wishes to maintain the current development trend, it will need to import labour in the future.
In the case of Bulgaria, remittances represent a positive aspect of migration. In general, the positive effects of remittances are felt over the short term. In the case of Bulgaria, they also have negative effects on the labour market, the demographic structure and the motivation to work, which manifest mostly over the medium and long term. After 2004, the volume of remittances increased considerably in Bulgaria: in 2004, remittances represented 4.2% of GDP and in 2006 they reached 5.4% of GDP [6]. In 2008 and 2009, years of financial crisis, the volume of remittances decreased considerably in Bulgaria, and until 2012 it remained relatively constant. In 2013 remittances strongly increased, and subsequently their volume slightly decreased until 2015, when, after a slight decline, they started to increase again. According to Markova [81], the Bulgarians benefitting from remittances used them to cover basic needs and to purchase goods for long-term use, especially buildings and land, raising the standard of living and supporting economic growth through consumption and investments. Mintchev and Boshnakov [82] consider that remittances of 4-5% of GDP do not make the economy dependent on them; however, they are enough to cover a substantial share of the commercial deficit, with a positive impact on economic growth and macro-stability. In 2013, remittances represented 2.7% of GDP. They are on an ascending trend even though migration is on a descending one, which means that migrants send more money to their country of origin. The volume of remittances increased proportionally with the degradation of the economic situation in the countries of origin. The decrease of the currency's purchasing power, inflation surges, small incomes and the increase of inequality in the country of origin lead migrants to send more money to help their families. In addition, the improvement of the migrants' situation in the host country, especially the opportunity to find better paid jobs as they adapt (they learn the language and the customs of the host community and enlarge their social circle), increases the emigrants' income, from which they send more money to their families at home. When migrants settle legally in the host country, the volume of remittances decreases.
Mansoor and Quillin [83] quantify the remittances sent by Bulgarians and Romanians at 80% and 62% of their income, respectively, considering them a factor contributing to poverty reduction, savings and investments, but also to a decrease of the competitiveness of exports and of the motivation to work.
Migration does not affect only the sustainable development of Romania and Bulgaria. Migration flows are specific to all the former communist states of Central and Eastern Europe. The change of system increased the number of opportunities for the Central and Eastern European population to look for better living conditions in the developed states, especially in Europe. The case of Romania and Bulgaria is special due to the high flows of emigrants, which intensified after the 1990s and especially in 2007, when the accession of the two countries to the EU brought the hope of a better life for the inhabitants of these states, not in their own countries, however, but on the territory of the developed European states, to which access became free. In 2009, the peak year of the crisis, remittances increased as a result of the worsening of the economic situation in Romania and Bulgaria. The intensification of migration outflows in recent years shows a tendency of degradation of life in Romania and Bulgaria, as in the rest of Central and Eastern Europe, even if the macroeconomic indicators do not entirely reflect it. GDP growth or decline does not mean anything as long as it is not analysed in relation to other indicators, and such analyses certify that the Central and Eastern European states do not manage to close the development gaps. In their case, sustainability will in time become an acute problem if the tendency to leave the national territory continues. As long as the migration outflows persist or even increase, the human resource will decrease without being replaced by immigration, because the Central and Eastern European countries are traditionally emigrant-sending countries, without a corresponding interest from immigrants.
Perspectives of Migration of Romanians and Bulgarians inside EU28-Romanian Migrant Profile versus Bulgarian Migrant Profile
Romania and Bulgaria have many points in common, and one of them is that after 1989 both countries ceased to be communist states. The Romanian and Bulgarian governments started to be open to the international mobility of the labour force.
According to UN reports [84][85][86], the migration profile was built based on the country of origin and the period of reference. This profile is synthesised in Appendix A, Table A1, and Figure 2 presents the migration flows in the EU28 countries, comparing Romania and Bulgaria. An analysis of the UN data available for the period 1990-2013 enables us to present a picture of the migration phenomenon in Romania and Bulgaria. According to the data included in Appendix A, the UN [84][85][86] shows a decrease in the number of emigrants in the case of Romania by 8% in 2013 compared to 1990. The number of women who emigrated is higher than the number of men: if in 1990 352,000 women emigrated from Romania, in 2013 their number increased to 561,000. These data reflect the psychological and mental profile of the population. Women are more inclined to work than men, which can be seen on the labour market: the number of working women is higher than the number of working men, and the number of unemployed women is lower than the number of unemployed men. More people from the urban environment left the country than from the rural environment. The prevalence of emigrants originating from the urban environment is determined by the degree of education: the urban population adapts more easily to the external environment due to professional and linguistic knowledge that is higher than in the case of the rural population. The analysis by periods of time shows that migration increased by 5.15% between 1985 and 1990, by −2.18%

In the case of Bulgaria, the number of emigrants decreased by 22% in the same period. The number of women is higher than the number of men and growing: if in 1990 there were 12,300 more women than men who emigrated from Bulgaria, in 2013 there were 20,300 more. Most of the migrants are fit for work. The emigrants from the urban environment considerably outnumber those from the rural environment. The estimates for the period 2010-2015 showed a tendency of improvement of the phenomenon, and the analysis until 2050 confirms the UN estimates. The number of Bulgarian emigrants is estimated at 5,077,000 in 2050, compared to 6,827,000 in 2020.

As we can see, immediately after the fall of the communist regime in both countries, the emigrant population chose their first destination based on geographic proximity and convenience (accessibility, advantages, etc.). The World Bank, as compared to the UN, offers data for 2015, when Italy, Spain and Germany were the most attractive destinations for Romanian and Bulgarian emigrants. Real GDP growth in 2015 of 1.7% for Germany, 3.2% for Spain and 0.8% for Italy, against an average EU28 growth of 2.2%, shows that both Romanians and Bulgarians chose developed countries within the EU28 as destination countries [2].
One of the destinations preferred by Romanian and Bulgarian emigrants is the UK, a country which decided to exit the EU and where the future of the emigrants, an important economic link, is questionable. Numerically, the Romanian emigrants outnumber the Bulgarians: 7500 Romanians and 5350 Bulgarians officially entered the UK in 2001, the Bulgarians' emigration being gradual [87]. The flows of emigrants from the A2 countries (Romania and Bulgaria) towards the UK increased after 2004, especially towards London and the East and South-East regions of England. After the accession of the two countries to the EU, approximately 22,000 Romanians and 14,000 Bulgarians entered the UK annually, especially London and the South-East regions [88]. Approximately 90% of the Romanian and Bulgarian emigrants who lived in the UK in 2007 were between 16 and 64 years old, and they worked in construction, real estate, commerce, hotels and restaurants [86], especially in small and medium private companies [89]. According to Glennie and Pennington [88], the emigrants from the two countries are young and qualified. In the case of Romanians, 82% of the emigrants are 20-65 years old and 69% are 20-39 years old; 52% are men and 48% are women. In the case of Bulgarian emigrants, 44% are under 24 years old and 81% are under 34 years old. 60% of the Romanian and Bulgarian emigrants obtain relatively quickly the certification of their qualifications, 18% having higher qualifications obtained in their country of origin. According to Glennie and Pennington [88], 82% of the Romanians prove a good knowledge of English.
Analysing the remittances for Romania and Bulgaria (Figure 3), we notice that after 2000 and 2007 the volume of remittances increased surprisingly for both countries. During 2000-2004, the increase of remittances was significant in the case of Romania; only after 2004 did the volume of remittances follow an ascending trend in the case of Bulgaria, lower than in Romania. The graph shows two inflexion points for both countries: one corresponds to the accession to the EU, which offered the possibility of free circulation of people on EU territory, and the other marks the peak years of the crisis, which acutely influenced the Romanian and Bulgarian economies and societies. A more detailed analysis of the evolution of Romanian and Bulgarian remittances based on Figure 3 was performed in Section 2.2 of the paper.
Data
In order to accomplish this comparative study of Romanians and Bulgarians within the EU28, the authors will use several databases such as: the annual databases for 1990-2015 presented by UN [84][85][86] and processed data presented by Eurostat [87] and the World Bank [2,3,86].
The purpose of this study is to: (1) define the determinants of international migration in Romania and Bulgaria (see Table 3); and (2) analyse the impact of remittances on economic growth and income inequality in Romania and Bulgaria (see Table 4).

In Tables 3 and 4 we present the variables considered in the study of the determinants of international migration:

1. GDP/capita growth rate.Ro(Bu): GDP per capita growth (annual %) represents the annual percentage growth rate of GDP per capita based on constant local currency. Aggregates are based on constant 2010 U.S. dollars. GDP per capita is gross domestic product divided by midyear population. GDP at purchaser's prices is the sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. It is calculated without making deductions for depreciation of manufactured assets or for depletion and degradation of natural resources [91].
2. Price inflation rate Ro(Bu): inflation rate (CPI, annual variation in %). The World Bank presents inflation as measured by the consumer price index, which reflects the annual percentage change in the cost to the average consumer of acquiring a basket of goods and services that may be fixed or changed at specified intervals, such as yearly. The Laspeyres formula is generally used [92].
3. Unemployment rate Ro(Bu): the unemployment rate (%) covers unemployed workers, who are those currently not working but willing and able to work for pay, currently available to work and actively searching for work [93].
4. Household final consumption expenditure Ro(Bu) (current US$): the World Bank defines household final consumption expenditure (formerly private consumption) as the market value of all goods and services, including durable products, purchased by households. It excludes purchases of dwellings but includes imputed rent for owner-occupied dwellings. It also includes payments and fees to governments to obtain permits and licenses [94].
5. GINI.Ro(Bu), income inequality: regarding the income inequality level, the authors considered the Gini coefficient presented in the GINI index, a World Bank estimate [95].
6. Total.Migrants.Ro(Bu), the number of definitive emigrants: we considered the number of migrants in the EU28. Net migration is the net total of migrants during the period, that is, the total number of immigrants less the annual number of emigrants, including both citizens and noncitizens [96].
7. Remittances received/capita Ro(Bu): personal remittances received (current US$); we considered the definition provided by the World Bank. Personal remittances comprise personal transfers and compensation of employees. Personal transfers consist of all current transfers in cash or in kind made or received by resident households to or from non-resident households; they thus include all current transfers between resident and non-resident individuals. Compensation of employees refers to the income of border, seasonal and other short-term workers who are employed in an economy where they are not resident, and of residents employed by non-resident entities. Data are the sum of two items defined in the sixth edition of the IMF's Balance of Payments Manual: personal transfers and compensation of employees. Data are in current U.S. dollars [97].
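As an aside on variable 5: the study takes the Gini coefficient directly from the World Bank estimate, but, purely to illustrate the underlying measure, a minimal computation sketch is shown below (Python; the income vector would be hypothetical survey data, not the data used in this study).

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of an income distribution.
    Uses the sorted-index identity, equivalent to
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    x = np.sort(np.asarray(incomes, dtype=float))  # ascending order
    n = x.size
    index = np.arange(1, n + 1)                    # ranks 1..n
    return (2.0 * np.sum(index * x)) / (n * np.sum(x)) - (n + 1.0) / n

# Example: perfectly equal incomes give G = 0; concentration raises G.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([1, 1, 1, 97]))      # close to 1
```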
In order to clarify the determinants of international migration in Romania and Bulgaria, our research hypotheses take into consideration the economic characteristics (the push factors measuring international migration from a macro-theory perspective) and emphasise the influence of these variables in the case of Romania and Bulgaria. Consequently, we formulated the following hypotheses:

Hypothesis 1 (H1). The inflation and unemployment rate explain the number of definitive migrants in EU28 from Romania and Bulgaria.

Hypothesis 2 (H2). The unemployment rate and income inequality (Gini index) explain the number of definitive migrants in EU28 from Romania and Bulgaria.

Hypothesis 3 (H3). The household consumption expenditure and GDP growth rate/capita explain the number of definitive migrants in EU28 from Romania and Bulgaria.
In order to analyse the impact of remittances on economic stability and income inequality in Romania and Bulgaria, our research hypotheses take into consideration the economic characteristics measuring the economic stability such as: GDP/capita growth rate, price inflation rate, unemployment rate, household final consumption, income inequality, emphasizing the influence of these variables in the case of Romania and Bulgaria.
Consequently, we formulated the following hypotheses:
Hypothesis 4 (H4). There is a direct relationship between the remittances received/capita and the GDP/capita growth rate.

Hypothesis 5 (H5). There is a direct relationship between the remittances received/capita and the price inflation rate.

Hypothesis 6 (H6). There is a direct relationship between the remittances received/capita and the unemployment rate.

Hypothesis 7 (H7). There is a direct relationship between the remittances received/capita and household final consumption.

Hypothesis 8 (H8). There is a direct relationship between the remittances received/capita and the Gini index (income inequality).
Methodology-Regression Model with SPSS
In order to define the determinants of the international migration and empirically investigate the impact of remittances on economic growth and income inequality in Romania and Bulgaria, the authors employed the multiple regression model and Pearson correlation analysis. Regression models are constructed to explain (or predict) the variance of a phenomenon (dependent variable) using a combination of explanatory factors (minimum two independent variables) [98].
The mathematical form of the multiple regression model is represented as follows:

y_i = b_0 + b_1*x_1i + b_2*x_2i + ... + b_n*x_ni + ε_i

where: y_i = dependent variable (to be explained); x_i = independent variables (explanatory); b_0 = a constant which corresponds to the value of the dependent variable when all the independent variables are equal to zero; b_n = the beta coefficient, a standardized form which corresponds to each independent variable and represents its relative contribution to the model; ε_i = the residual, the difference between the observed value of the dependent variable and the predicted value.
Closely associated with the evaluation of the model, the multiple correlation index R2 represents the percentage of variance explained by the model (the combination of the independent variables).
The design of a regression model rests on the choice of independent variables and the choice of the regression method. In our case we used the backward elimination method, with the elimination criterion: probability of F-to-remove ≥ 0.10.
With this method, the initial model includes all the variables, as in forced regression; then the variable with the smallest contribution to the model is removed if the resulting variation of R2 is not significant. The procedure is repeated until all the retained variables contribute significantly to the improvement of R2. This method simplifies the regression model and retains only the variables contributing significantly.
To be able to apply the multiple regression model, the most important premises should be observed: weak exogeneity, linearity, homoscedasticity, independence, absence of multicollinearity, and so forth. Multicollinearity can be measured by calculating variance inflation factors (VIFs). VIF values higher than 10 indicate that multicollinearity may be a problem; ideally, one would obtain a VIF value of 1 [98].
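For illustration, the same procedure can be reproduced outside SPSS. The sketch below is a minimal Python implementation, assuming pandas and statsmodels; the data frame and column names are hypothetical placeholders for the UN/Eurostat/World Bank series described above. For the removal of a single variable, the p-value of its t statistic is equivalent to the probability of F-to-remove, so a p-value threshold of 0.10 mirrors the SPSS criterion.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def backward_eliminate(X: pd.DataFrame, y: pd.Series, p_remove: float = 0.10):
    """Backward elimination: start from the full OLS model and repeatedly
    drop the predictor with the highest p-value until every remaining
    predictor is significant at the p_remove threshold."""
    kept = list(X.columns)
    while kept:
        model = sm.OLS(y, sm.add_constant(X[kept])).fit()
        pvals = model.pvalues.drop("const")     # ignore the intercept
        worst = pvals.idxmax()                  # weakest contributor
        if pvals[worst] < p_remove:             # all predictors significant
            return model, kept
        kept.remove(worst)
    return None, []                             # nothing survived elimination

def vif_table(X: pd.DataFrame) -> pd.Series:
    """Variance inflation factors; values above 10 flag multicollinearity."""
    Xc = sm.add_constant(X)
    return pd.Series(
        [variance_inflation_factor(Xc.values, i + 1) for i in range(X.shape[1])],
        index=X.columns,
    )

# Hypothetical usage with the 26 annual observations for 1990-2015;
# the column names mirror the variables in Tables 3 and 4:
# df = pd.read_csv("romania_1990_2015.csv")
# model, kept = backward_eliminate(
#     df[["inflation", "unemployment", "gdp_growth", "consumption", "gini"]],
#     df["total_migrants"],
# )
# print(model.summary())          # includes the F test and adjusted R2
# print(vif_table(df[kept]))
```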
The null hypothesis (H0) is that there is no linear relationship between the combination of the independent variables (X 1 , X 2 , X 3 ... X n ) and the dependent variable (Y).
The research hypothesis (H1) is the opposite, that is, the combination of the independent variables is significantly associated with the dependent variable.
The first step is to evaluate the quality of the regression model-analysis of variance. The ANOVA test enables us to determine whether we reject the null hypothesis (H0) or not.
We verify whether the model explains significantly more variability than a model without predictors. Then, we ensure that all the variables introduced contribute to significantly improving the variability explained by the final model. We test the null hypothesis that there is no relationship between the dependent variable and the independent variables by interpreting the results of the ANOVA test. We analyse the relevance of the model and perform the F-value test using the SPSS software. If the value of F is significant at p < 0.001, we must reject the null hypothesis and conclude that there is a statistically significant relationship between the dependent variable and the independent variables. On the other hand, if the value of F were not accompanied by a significant p-value, the interpretation would stop here.
Subsequently, we examine the contribution of each block of variables. The R2 value indicates the proportion of the variability of the dependent variable (y) explained by the regression model, while the adjusted R2 value, which penalises for the number of predictors, is an estimate of the robustness of the model.
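These quantities can be read directly off a fitted model in R; the following is a hedged sketch with placeholder names (the study itself used SPSS), showing where the overall F test, R2 and adjusted R2 come from.

```r
# Fit a multiple regression; `dat`, `migrants` and the predictors are
# hypothetical names, not the paper's actual dataset.
fit <- lm(migrants ~ inflation + unemployment, data = dat)
s <- summary(fit)

s$fstatistic     # overall F value with its numerator/denominator df
s$r.squared      # R2: proportion of variance explained by the model
s$adj.r.squared  # adjusted R2: penalised for the number of predictors

# p-value of the overall F test (reject H0 when it is small):
pf(s$fstatistic[1], s$fstatistic[2], s$fstatistic[3], lower.tail = FALSE)
```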
In order to analyse the impact of remittances on economic stability and income inequality in Romania and Bulgaria, the authors employed correlation analysis, intended to determine the possible relationship between two variables, the intensity of that relationship and the direction of influence of one variable on the other. Correlation analysis is a bivariate statistical analysis of an ensemble of units distributed according to the values of two variables, X1 and X2 [98]; its objective is to identify the influence of one variable on another, and the direction and intensity of the connection between the two.
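The correlation results in this paper are SPSS output; as an equivalent, hypothetical sketch in R (placeholder data frame and column names), a two-tailed Pearson test looks like this:

```r
# Two-tailed Pearson correlation, reporting r and its significance
cor.test(dat$remittances_per_capita, dat$gini,
         method = "pearson", alternative = "two.sided")
```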
Model of Migration-The Determinants of the International Migration in Romania and Bulgaria
As mentioned in the methodology, we employed the multiple regression model with the backward elimination method. As previously noted, this analysis covers the period between 1990 and 2015 (26 years). Before presenting the results for Romania and Bulgaria, Table 5 shows descriptive statistics for the independent variables considered to explain the number of definitive migrants from these two countries in the EU28 (the dependent variable). The mean GDP growth rate/capita is similar for the two countries considered in our study (2.12% for Romania and 2.15% for Bulgaria). The price inflation rate is higher in Bulgaria, which can be explained by the hyperinflation episodes of 1991 and 1997, when the inflation rate was 333.5% and 1061.20%, respectively. In the case of Romania, the maximum value of price inflation was 256.10% in 1993. The unemployment rate is higher in Bulgaria, where the highest value (18.10%) was recorded in 2000; at that time, the unemployment rate in Romania was 7.6%. The average income inequality score is similar for these two countries (0.8 in Romania and 0.79 in Bulgaria) but very high compared to the average of 0.3 for the EU28 countries.

Table 6 presents the correlations between the studied variables and their Pearson correlation tests; correlations significant at the 0.05 level (2-tailed) are marked with (*) and those significant at the 0.01 level (2-tailed) with (**). If the correlation between two of these variables were significant, there would be a significant risk of multicollinearity, a situation we want to avoid. There is a very high and significant correlation between the variables GDP/capita growth rate, price inflation rate and unemployment rate for Romania, but not for Bulgaria. GDP/capita growth is not correlated with household consumption expenditure or the Gini index (income inequality) for either country. Data analysis shows that the price inflation rate in Romania is significantly correlated with GDP/capita, household consumption expenditure and the Gini index, whereas in Bulgaria the same variable is not correlated with the other variables. This can be explained by the Bulgarian hyperinflation episodes mentioned above; if we isolate these two values, the correlation follows the same trend as in the case of Romania.
Hypothesis 1 (H1). The inflation and unemployment rates explain the number of definitive migrants in EU28 from Romania and Bulgaria.

Hypothesis 1.0 (H1.0). The inflation and unemployment rates cannot explain the number of definitive migrants in EU28 from Romania and Bulgaria.
In Table 7, we can see that, according to the F values obtained for the two models, the null hypothesis can be rejected in the case of Romania. Indeed, the values of 11.525 and 21.487 are significant at p < 0.001, which indicates that we have less than a 0.1% chance of being wrong in stating that the models contribute to predicting the number of definitive migrants in EU28. For Bulgaria, the F values obtained for the two models were not accompanied by significant p values (p = 0.114 and 0.091). Assessing the relevance of the model, we conclude that the null hypothesis cannot be rejected.
We employed the multiple regression model with the backward elimination method. The initial model includes all the variables considered (model 1), namely the price inflation rate and the unemployment rate, while model 2 retains only one variable, the price inflation rate (applying the backward elimination criterion of a probability of F to remove ≥ 0.10). We can see in Table 8 that the unemployment rate is the variable with an insignificant contribution to the model, and it was removed from the final model (model 2): its beta coefficient of −0.168 is not statistically significant (the significance of the t-test is 0.266 ≥ 0.10), so the variable does not significantly predict the outcome. The price inflation rate presents a beta value of −0.697, indicating that for each one standard-deviation increase in the predictor variable, the outcome variable decreases by 0.697 standard deviations. The VIF values (1.003 for model 1 and 1.000 for model 2) indicate that we do not have a multicollinearity problem.
The adjusted R Square value for model 1, which includes all the variables, shows that 45.7% of the variance in the number of Romanian migrants in EU28 was explained by the combination of the two variables (price inflation rate and unemployment rate). Model 2 shows that 45% of the variance in the number of Romanian migrants in EU28 was explained by the price inflation rate alone.

Hypothesis 2 (H2). The unemployment rate and income inequality (Gini index) explain the number of definitive migrants in EU28 from Romania and Bulgaria.

Hypothesis 2.0 (H2.0). The unemployment rate and income inequality (Gini index) cannot explain the number of definitive migrants in EU28 from Romania and Bulgaria.
Table 9 shows the relevance of the regression model. According to the F values obtained (p < 0.001) for the two models, the null hypothesis can be rejected for Romania and Bulgaria. Table 10 shows the initial model including all the variables considered (model 1), namely the unemployment rate and the Gini index, and model 2, which retains only the Gini index (applying the backward elimination criterion of a probability of F to remove ≥ 0.10). The unemployment rate is the variable with an insignificant contribution to the first model for both Romania and Bulgaria. Its standardised Beta coefficient is 0.05 for Romania and −0.009 for Bulgaria, which is not statistically significant (the significance of the t-test is 0.209 for Romania and 0.601 for Bulgaria, both ≥ 0.10), representing an insignificant contribution to explaining the dependent variable (the number of migrants in EU28), and it was removed from the final model (model 2). The VIF value for model 2 (1.000 for both Romania and Bulgaria) indicates that we do not have a multicollinearity problem.
Furthermore, the Gini index presents a beta value of 0.991 for Romania and 0.995 for Bulgaria (model 1), indicating that for each one standard-deviation increase in the predictor variable, the outcome variable increases by 0.991 standard deviations for Romania and 0.995 for Bulgaria. These values indicate that income inequality explains most of the variance in the number of definitive migrants in EU28 for Romania and Bulgaria.
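To make the reading of standardised betas concrete, a small hypothetical sketch: z-scoring every variable before fitting makes each coefficient the expected change in the outcome, in standard deviations, per standard-deviation change in the predictor.

```r
# Standardised (beta) coefficients via z-scored variables; names are
# placeholders, not the paper's dataset.
vars <- c("migrants", "gini", "unemployment")
z <- as.data.frame(scale(dat[, vars]))  # every column: mean 0, sd 1
coef(lm(migrants ~ gini + unemployment, data = z))
```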
We may conclude that the Gini index (income inequality) explains 96.3% of the variance in the number of definitive migrants in EU28 in the case of Romania and 99.4% in the case of Bulgaria (model 2).
Hypothesis 3 (H3). The household consumption expenditure and GDP growth rate/capita explain the number of definitive migrants in EU28 from Romania and Bulgaria.

Hypothesis 3.0 (H3.0). The household consumption expenditure and GDP growth rate/capita cannot explain the number of definitive migrants in EU28 from Romania and Bulgaria.

Table 11 shows that, according to the F value obtained (p < 0.001), the null hypothesis can be rejected for Romania and Bulgaria. Table 12 shows the initial model for Romania, including all the variables considered (model 1), namely household consumption expenditure and GDP.Growth.Rate/capita, and model 2, which retains only household consumption expenditure (applying the backward elimination criterion of a probability of F to remove ≥ 0.10). GDP.Growth.Rate/capita is a variable with an insignificant contribution in Romania: its standardized Beta coefficient of 0.005 is not statistically significant (the significance of the t-test is 0.940 ≥ 0.10), representing an insignificant contribution to explaining the dependent variable (the number of Romanian migrants in EU28), and it was removed from the final model (model 2).
In the case of Bulgaria we have only one model, but we can see that the variance in definitive migrants is explained mostly by the same variable, household consumption expenditure: its standardised Beta coefficient of 0.941 indicates that each one standard-deviation increase in the predictor raises the outcome by 0.941 standard deviations, compared to the other variable, whose standardised Beta coefficient of 0.123 indicates that each one standard-deviation increase in the predictor raises the outcome by only 0.123 standard deviations.
In both cases, the VIF values (1.000 for Romania and 1.048 for Bulgaria) indicate that we do not have a multicollinearity problem.
We can conclude that household consumption expenditure explains 91.9% of the variance in the number of definitive migrants in EU28 in the case of Romania and 94.7% in the case of Bulgaria.
Data analysis confirmed that the inflation rate explains the number of definitive migrants in EU28 from Romania. In addition, income inequality (the Gini index) explains the number of definitive migrants in EU28 from Romania and Bulgaria, and household consumption expenditure explains the number of definitive migrants in EU28 from both countries. The contribution of each of these variables is very similar for Romania and Bulgaria.
With respect to the first objective of the present study, we can conclude that the main determinants of the migration process in Romania and Bulgaria are the inflation rate, income inequality and household consumption expenditure. However, because we are working with economic indicators, many of them are highly correlated and influence each other; for example, the GDP growth rate (economic growth) is linearly correlated with the inflation rate and the unemployment rate.
The Impact of Remittances in Romania and Bulgaria
Hypotheses H4-H8 were tested by running correlation analyses, intended to determine the possible relationship between two variables, the intensity of the relationship and the direction of influence of one variable on the other. Table 13 reports the Pearson correlations between the variable remittances received/capita and the five variables considered in the study: GDP/capita growth rate, price inflation rate, unemployment rate, household final consumption and income inequality, for Romania and Bulgaria.
Hypothesis 4 (H4).
There is a direct relationship between the remittances received/capita and GDP/capita growth rate in Romania and Bulgaria.
The study of the relationship between the remittances received/capita and the GDP/capita growth rate was based on Pearson correlation analysis. According to Table 13, the relationship between the two variables is positive but weak (the correlation coefficient is 0.151 for Romania and 0.308 for Bulgaria) and not statistically significant. The Sig. value (0.504 for Romania and 0.187 for Bulgaria), higher than the accepted level of 0.05, statistically indicates that there is no direct connection between the remittances received/capita and the GDP/capita growth rate.
The hypothesis H4 is not confirmed.
Hypothesis 5 (H5).
There is a direct relationship between the remittances received/capita and price inflation rate in Romania and Bulgaria.
The study of the relationship between the remittances received/capita and the price inflation rate was based on Pearson correlation analysis. According to Table 13, for Romania the relationship between the two variables is direct, negative and of average intensity (the correlation coefficient is −0.470) and statistically significant at the 95% confidence level. The Sig. value (0.027), lower than the accepted level of 0.05, confirms the initial hypothesis and statistically indicates that there is a direct connection between the remittances received/capita and the price inflation rate in Romania.
For Bulgaria, the correlation coefficient is −0.422 and is not statistically significant at the 95% confidence level. The Sig. value (0.064), higher than the accepted level of 0.05, statistically indicates that there is no direct connection between the remittances received/capita and the price inflation rate. We recall the hyperinflation episodes in Bulgaria in 1991 and 1997, which influenced the statistical data.
H5 is confirmed for the case of Romania.
Hypothesis 6 (H6).
There is a direct relationship between the remittances received/capita and unemployment rate in Romania and Bulgaria.
The study of the relationship between the remittances received/capita and the unemployment rate was based on Pearson correlation analysis. According to Table 13, the relationship between the two variables is direct, negative and of average intensity (the correlation coefficient is −0.477 for Romania and −0.539 for Bulgaria) and statistically significant at the 95% confidence level. The Sig. value (0.025 for Romania and 0.014 for Bulgaria), lower than the accepted level of 0.05, confirms the initial hypothesis and statistically indicates that there is a direct connection between the remittances received/capita and the unemployment rate in Romania and Bulgaria.
H6 is confirmed for Romania and Bulgaria.
Hypothesis 7 (H7).
There is a direct relationship between the remittances received/capita and household final consumption in Romania and Bulgaria.
The study of the relationship between the remittances received/capita and household final consumption was based on Pearson correlation analysis. According to Table 13, the relationship between the two variables is direct, positive and of high intensity (the correlation coefficient is 0.759 for Romania and 0.799 for Bulgaria) and statistically significant at the 99% confidence level. The Sig. value (0.000 for both countries), lower than the accepted level of 0.01, confirms the initial hypothesis and statistically indicates that there is a direct connection between the remittances received/capita and household final consumption in Romania and Bulgaria.
H7 is confirmed for Romania and Bulgaria.
Hypothesis 8 (H8).
There is a direct relationship between the remittances received/capita and Gini index (income inequality) in Romania and Bulgaria.
The study of the relationship between the remittances received/capita and the Gini index (income inequality) was based on Pearson correlation analysis. According to Table 13, the relationship between the two variables is direct, positive and of high intensity (the correlation coefficient is 0.718 for Romania and 0.851 for Bulgaria) and statistically significant at the 99% confidence level. The Sig. value (0.000 for both countries), lower than the accepted level of 0.01, confirms the initial hypothesis and statistically indicates that there is a direct connection between the remittances received/capita and the Gini index (income inequality) in Romania and Bulgaria.
H8 is confirmed for Romania and Bulgaria.

Data analysis shows that there is no direct relationship between the remittances received/capita and the GDP/capita growth rate in Romania and Bulgaria.
In addition, there is a direct relationship (negative and of average intensity) between the remittances received/capita and the price inflation rate in Romania, but not in Bulgaria.
For both Romania and Bulgaria, we find direct relationships of similar intensity between the remittances received/capita and the unemployment rate, household final consumption and, finally, income inequality.
Discussion
Romania and Bulgaria are, as previously mentioned, two very similar countries. Both belong to the Central and Eastern European bloc, characterised by massive migration flows towards the developed states of the EU28. Migration flows have positive and negative effects on the countries of origin and on the host countries, and these effects differ in the short and long term. The focus of the present analysis is on remittances, income sent by emigrants to their country of origin, which in the short term positively influences the economy and society. Remittances are used for subsistence expenditure and also for investments, especially in real estate and education. In the long term, however, remittances associated with an increasing number of emigrants undermine economic sustainability, because the deficit of active labour will be compensated by activities involving the almost abusive use of other resources and environmental destruction.
Economic stability-sustainability is the main goal of every country's administration and, at the same time, a common goal across the EU28.
In the case of Romania, migration effects are not entirely negative. Romania is currently the second-largest emigrant-sending country after Syria, and the Romanian exodus has been high for years [99,100]. The social categories of Romanian migrants are extremely varied, from people who are highly educated and professionally well trained [101] to people with extremely limited formal education [102]. Regardless of the Romanian migrants' professional status, the causes of their decision to migrate are generally economic and social deprivation: Romanian migrants seek opportunities to raise their income and standard of living, as well as job security, and these objectives are pursued both legally and illegally.
Most of the highly educated Romanian migrants leave the country legally. They find a place to work before leaving and, when they leave their place of origin, they are certain that the activity they are going to perform matches their education and professional abilities. Migrants with a high degree of formal education expect that the position they occupy abroad will offer them the expected income, the desired working conditions, possibilities for professional development and security, all of which are important factors in the decision to migrate. The exodus of people with good formal education constitutes the so-called brain-drain flow, which according to Haller [100,103-105] is an economic and social loss for any country that invests in the education of its people without recovering that investment; the more years of formal education, the greater the loss. The state will indirectly recover some of this investment through the money the migrants send back to the country, that is, through remittances. The probability that the value of remittances exceeds the value of the investment in education is low, so the brain-drain phenomenon becomes a loss for the economy and society, especially as only a small part of the migrants in this category return to their country of origin. It is usually a form of definitive migration, because well-trained people find in the destination countries what they lack in their country of origin: good working conditions, professional development opportunities and a high degree of civilisation. Professionally well-trained migrants constitute the social category that has no problem finding a job in their own country; their problem is finding an adequate place of work, including from a financial perspective.
Most of the Romanian migrants with little training and education leave the country illegally. They assume major risks, because they have no certainty of getting a job, so they do not have great expectations and accept almost any activity as long as it is paid. Their objective is to earn higher incomes than they used to earn at home and to send them back as remittances in order to consolidate their material position, in the belief that they might return. This category of migrants frequently changes workplaces in the destination country as they adapt and gain the possibility to earn increasingly higher incomes. Migrants with average and low formal education send home the highest volume of remittances, driven by the certainty of a definitive return and the objective of consolidating their material position. People who emigrate illegally have difficulties in finding a secure job, in re-qualifying and even in adapting to the conditions in the destination countries, so they become aware that at a certain point they will need to return to their country.

In the long term, migration brings negative effects for Romania. The longer the phenomenon persists, the more it erodes the economy and society. The decrease in the active population will chronically unbalance the labour market. The demographic pyramid will be inverted not as a result of the aging process associated with the growth effects specific to developed countries, but as a result of a higher and more accelerated exodus of the young population fit for work, against the background of demographic decline. After graduation, many young people intend to find a job in one of the countries where their diplomas are recognised. According to our analysis, the volume of remittances in both countries largely explains the degree of income inequality. The volume of remittances in Romania and Bulgaria is alarming, and it should be one of the main concerns of their governments in order to align these two countries with the requirements of the European Union.
In Romania there is no prospect of migration stopping, only of slowing, provided that the state intervenes quickly and efficiently through complex economic policy measures. Low incomes and living standards will sustain the migration phenomenon in Romania, and the effects will prove increasingly complex through their multiplier effect (low income, low investment, labour market imbalance, unemployment, etc.), which will maintain migration, especially the definitive kind, reducing its positive impact by contracting the value of remittances. In the long term, the migration phenomenon will have negative effects on the Romanian economy and society, and the problems associated with it will require complex structural measures in almost all fields and sectors, including in the behaviour of political decision-makers, towards whom the population shows a deficit of trust.
Bulgarian migration has positive economic and demographic consequences: it reduces pressure on the labour market and on poverty, and it stimulates entrepreneurship through the increasing number of small enterprises created with remittances. It also has negative consequences, because brain-drain migration involves highly qualified people leaving the country, the depopulation of peripheral regions and family division [81]. Like Romanian migrants, Bulgarians strive to attain material security. Remittances are mainly destined for consumption expenditure but also for investments, especially in real estate. The difficult economic situation stimulates Bulgarians' migration; however, the volume of remittances is lower than the Romanian one, which may also be explained by the demographic differences. We must mention that Bulgaria has made significant economic progress, which will be reflected in future migration flows. At present, the effects of migration are similar to those on the Romanian economy. Migration also unbalances the Bulgarian labour market and modifies the demographic balance, in the long term straining the Bulgarian economy through a lower capacity to support sustainability. Young Bulgarians seek development opportunities outside their country of origin, which means a loss on the segment of qualified people, with a negative impact on the economy and society in the medium and especially the long term.
In both countries, remittances represent income from external sources with positive short-term and negative long-term effects. Considering the economic and environmental consequences, that is, the fact that sustainable development will be difficult to support in the future, the advantages offered by remittances in the short term do not compensate for the long-term disadvantages. The microeconomic objectives, income growth and financial security, do not align with the macroeconomic ones, which converge towards sustainable development.
For the receiving states, emigrants from the Eastern European states of Ukraine, Romania, Bulgaria and Moldova compensate for the loss of workforce in developed markets caused by population aging, and offer access to a cheaper workforce, whether less qualified or highly qualified [106], willing to perform work that the native population is not willing to do. However, the problem of illegal stay and work remains relevant in the case of Romanian and Bulgarian migrants [107].
Conclusions
This study aimed to identify (1) the factors determining migration and (2) the impact of remittances, the income sent by emigrants to their country of origin, on economic growth and income inequality in Romania and Bulgaria for the period 1990-2015.
For the empirical analysis, we used indicators from three different sources, the UN, Eurostat and the World Bank, with the purpose of obtaining a full picture of the situations studied.
The authors proposed eight hypotheses to be tested in order to highlight the main aspects of the singularity of Romanian and Bulgarian migration and its effects on economic and social stability-sustainability.
Hypotheses H1-H3 were tested with the multiple regression model, and hypotheses H4-H8 with bivariate correlation based on Pearson correlation analysis.
Data analysis confirmed that the inflation rate explains the number of definitive migrants in EU28 from Romania. In addition, income inequality (the Gini index) explains the number of definitive migrants in EU28 from Romania and Bulgaria, and household consumption expenditure explains the number of definitive migrants in EU28 from both countries. The contribution of these variables is very similar for Romania and Bulgaria.
With respect to our first objective, we conclude that the main determinants of the migration process in Romania and Bulgaria are the inflation rate, income inequality and household consumption expenditure. However, because we are working with economic indicators, many of them are highly correlated and influence each other; for example, the GDP growth rate (economic growth) is linearly correlated with the inflation rate and the unemployment rate.
With respect to our second objective, we conclude that there is no direct relationship between the remittances received/capita and the GDP/capita growth rate in Romania and Bulgaria.
In addition, there is a direct (negative, average-intensity) relationship between the remittances received/capita and the price inflation rate in Romania; the coefficient is of similar magnitude for Bulgaria but not statistically significant there.
In Romania and Bulgaria there are direct relationships of similar intensity between the remittances received/capita and the unemployment rate (a negative relation of average intensity, stronger in the case of Bulgaria), household final consumption and, finally, income inequality (positive, of high intensity and very similar for both countries).
The analysis of the relationship between remittances and economic growth, and between remittances and inflation, unemployment and income inequality, points to some of the factors determining Romanians' and Bulgarians' migration. The absence of a direct relationship between remittances and economic growth highlights that in Romania and Bulgaria migration is not necessarily a factor of economic stimulation but rather a consequence of deficiencies in the economy, such as inflation, unemployment and income inequality. This paper may constitute a starting point for future studies of this phenomenon, including comparisons with other developing states.
It may also serve decision-makers in economic policy in establishing the measures necessary to reduce the flow of migrants from Romania and Bulgaria towards the EU28, because it highlights three of the main factors determining the phenomenon and demonstrates that the value of remittances does not have a direct impact on economic growth. According to the conclusions of this analysis, countries like Romania and Bulgaria may redirect their attention towards solving the internal problems mostly related to phenomena with major implications for the economy, such as inflation, unemployment and income inequality, because these are the main causes of migration. The demographic contraction in Romania and Bulgaria will not be compensated by the positive effect of remittances, but it might be halted by implementing measures to balance the labour and monetary markets and to improve the standard of living.
Acknowledgments: The authors would like to thank the anonymous reviewers and the editors for their valuable comments and suggestions to improve the quality of the paper.
Author Contributions: All authors contributed equally to all aspects of the research reported in this paper.
Conflicts of Interest:
The authors declare no conflict of interest.
The availability of research data declines rapidly with article age
Policies ensuring that research data are available on public archives are increasingly being implemented at the government [1], funding agency [2-4], and journal [5,6] level. These policies are predicated on the idea that authors are poor stewards of their data, particularly over the long term [7], and indeed many studies have found that authors are often unable or unwilling to share their data [8-11]. However, there are no systematic estimates of how the availability of research data changes with time since publication. We therefore requested datasets from a relatively homogeneous set of 516 articles published between 2 and 22 years ago, and found that availability of the data was strongly affected by article age. For papers where the authors gave the status of their data, the odds of a dataset being extant fell by 17% per year. In addition, the odds that we could find a working email address for the first, last or corresponding author fell by 7% per year. Our results reinforce the notion that, in the long term, research data cannot be reliably preserved by individual researchers, and further demonstrate the urgent need for policies mandating data sharing via public archives.
Results
We investigated how research data availability changes with article age. To avoid potential confounding effects of data type and different research community practices, we focused on recovering data from articles containing morphological data from plants or animals that made use of a Discriminant Function Analysis (DFA). Our final dataset consisted of 516 articles published between 1991 and 2011. We found at least one apparently working email for 385 papers (74%), either in the article itself or by searching online. We received 101 datasets (19%), and were told that another 20 (4%) were still in use and could not be shared, such that a total of 121 datasets (23%) were confirmed as extant. Table 1 provides a breakdown of the data by year.
We used logistic regression to formally investigate the relationships between the age of the paper and 1) the probability that at least one email appeared to work (i.e. did not generate an error message); 2) the conditional probability of a response given that at least one email appeared to work; 3) the conditional probability of getting a response that indicated the status of the data (data lost, exist but unwilling to share, or data shared) given that a response was received; and finally 4) the conditional probability the data was extant (either 'shared' or 'exists but unwilling to share') given that an informative response was received.
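The Experimental Procedures note that these analyses were run in R; as a sketch of how such a logistic regression yields per-year odds ratios (the data frame `papers` and its columns are illustrative, not the authors' code):

```r
# Logistic regression of a binary outcome (e.g. dataset extant, 0/1)
# on article age in years.
fit <- glm(extant ~ age, data = papers, family = binomial)

exp(coef(fit)["age"])               # odds ratio per year of article age
exp(confint.default(fit))["age", ]  # Wald 95% CI for that odds ratio
# An odds ratio of 0.83 corresponds to a 17% drop in the odds per year.
```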
There was a negative relationship between the age of the paper and the probability of finding at least one apparently working email either in the paper or by searching online (OR = 0.93 [0.90-0.96, 95% CI], p-value < 0.00001). The odds ratio suggests that for every year since publication, the odds of finding at least one apparently working email decreased by 7% (Figure 1A). Since we searched for emails in both the paper and online, four factors contribute to the probability of finding a working email: i) the number of emails in the paper and ii) the chance that any of those worked, iii) the number of emails we could find by searching online and iv) the chance that any of those worked. The total number of email addresses we found in the paper did decrease with age (Poisson regression coefficient = -0.07, SE = 0.01, p-value < 0.0001) from an average of 1.17 for the youngest papers (Figure 2A), whereas there was a slight increase with age in the number of emails we found online (Poisson regression coefficient = 0.015, SE = 0.007, p-value < 0.05, Figure 2C). Moreover, the chance that an email found in the paper or online appeared to work also showed a relationship with article age (OR = 0.96 [0.926-0.998, 95% CI], p-value < 0.05, and OR = 0.97 [0.936-0.997, 95% CI], p-value < 0.05, respectively), such that the odds that an email appeared to work declined by 4% and 3% per year since publication, respectively (Figures 2B and 2D).
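Similarly, a sketch of the Poisson regressions used for the email counts (again with illustrative names):

```r
# Poisson regression of the number of emails listed in a paper on its age
fit <- glm(n_emails_in_paper ~ age, data = papers, family = poisson)
coef(summary(fit))  # a negative 'age' coefficient means older papers list fewer emails
```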
We note that eight email addresses generated an error message but did lead to a response from the authors. It also seems likely that some addresses failed but did not generate an error message, leading us to record a 'no response' rather than 'email not working', although unfortunately the frequency of these cannot be estimated from our data.
There was no relationship between age of the paper and the probability of a response given that there was an apparently working email (50% response rate, OR = 1.00 [0.97-1.04, 95% CI], Figure 1B). There was also no relationship between article age and the probability that the response indicated the status of the data, given a response was received (83% useful responses, OR = 1.00 [0.95-1.07, 95% CI], Figure 1C).
Finally, there was a strong negative relationship between the age of the paper and the probability that the dataset was still extant (either 'shared' or 'exists but unwilling to share'), given that a response indicating the status of the data was received (OR = 0.83 [0.79-0.90, 95% CI], p-value < 0.0001, Figure 1D). The odds ratio suggests that for every yearly increase in article age, the odds of the dataset being extant decreased by 17%.
Discussion
We found a strong effect of article age on the availability of data from these 516 studies. The decline in data availability could arise because the authors of older papers were less likely to respond, but this was not supported by the data. Instead, researchers were equally likely to respond (Figure 1B) and to indicate the status of their data (Figure 1C) across the entire range of article ages.
The major cause of the reduced data availability for older papers was the rapid increase in the proportion of datasets reported as either lost or on inaccessible storage media. For papers where authors reported the status of their data, the odds of the data being extant decreased by 17% per year (Figure 1D). There was a continuum of author responses between the data being reported lost and being stored on inaccessible media, which seemed to vary with the amount of time and effort involved in retrieving the data. Responses ranged between authors being sure that the data were lost (e.g. on a stolen computer), thinking they might be stored in some distant location (e.g. their parents' attic), or having some degree of certainty that the data are on a Zip or floppy disk in their possession but no longer having the appropriate hardware to access it. In the latter two cases, the authors would have to devote hours or days to retrieving the data. Our reason for needing the data (a reproducibility study) was not especially compelling for authors, and we may have received more of these inaccessible datasets if we had offered authorship on the subsequent paper, or said that the data were needed for an important medical or conservation project.
The odds that we were able to find an apparently working email address (either in the paper or by searching online) for any of the contacted authors did decrease by about 7% per year. This decrease was partly driven by a dearth of email addresses in articles published before 2000 (0.38 per paper on average for 1991-1999) compared with those published after 2001 (1.08 per paper on average, Figure 2A). Wren et al. [12] found a similar increase in the number of emails in articles published after 2000. The larger number of emails in recent papers may mean that the issue of missing author emails is restricted to articles from before 2000: researchers in e.g. 2031 will be able to try a wider range of addresses in their attempts to contact authors of articles published in 2011.
The proportion of emails from the paper that appeared to work declined with article age between 2 and 14 years of age, and then rose to around 80% for articles from 1991, 1993 and 1995 (Figure 2B). These latter three proportions are only based on a total of 13 email addresses. Wren et al. [12] reported a steep decline with age in the proportion of functioning emails from papers published between 1995 and 2004, such that 84% of their ten-year-old emails returned an error message. Our proportions for ten-year-old emails are lower, with only 51% of emails from 2003 returning an error. It may be that email addresses are becoming more stable through time, although this clearly requires additional study. The arrival of author identification initiatives like ORCID [13] and online research profiles such as ResearchGate or Google Scholar should make it easier to find working contact information for authors in the future.
Considering only the papers from 2011, our results show that asking authors for their data shortly after publication does yield a moderate proportion of datasets (c. 40%). A comparable study [11] received 59% of the requested datasets from papers that were less than a year old. It is hard to tell whether this difference is due to the slightly different research communities involved or the presence of an extra year between publication and the data request in this study. A related paper by Wicherts et al. in 2005 [9] received only 26% of requested psychology datasets.
Overall, we only received 19.5% of the requested datasets, and only 11% for articles published before 2000. We found that several factors contribute to these low proportions: non-working emails, a 50% response rate, and sometimes the lack of an informative response from the authors. However, when the authors did give the status of their data, the proportion of datasets that still existed dropped from 100% in 2011 to 33% in 1991 (Figure 1D). Unfortunately, many of these missing datasets could be retrieved only with considerable effort by the authors, and others are completely lost to science.
Many datasets produced in scientific research are unique to their time and location, and once lost they cannot be replaced [14]. Since it is impossible to know what uses would have been found for these data, or when they would become important, leaving their preservation to authors denies future researchers any chance of reusing them. Fortunately, one effective solution is to require that authors share their data on a public archive at publication: the data will be preserved in perpetuity, and can no longer be withheld or lost by authors. Some journals have already enacted policies to this effect [e.g. 5,6], and we hope that the worrying magnitude of the issues reported here will encourage others to draft similar policies in due course.
Experimental Procedures
It is likely that expectations on data sharing will differ between academic communities, and that some data types are easier to preserve than others. Moreover, the types of data being collected change through time. We attempted to control for these effects by focusing on a single type of data that has been collected in the same way for many decades: data on morphological dimensions from plants or animals, as is typically collected by biologists and taxonomists. We are also conducting a parallel study on how the reproducibility of statistical analyses changes through time, and this study is working on reproducing discriminant function analyses (DFA), which are commonly applied to morphometric data [15]. We therefore also set the condition that the data must have been used in a DFA.
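For concreteness, a minimal example of the kind of DFA targeted here, using MASS::lda on the built-in iris measurements as a stand-in for the papers' morphometric data (an illustration, not this study's code):

```r
library(MASS)  # provides lda()

# Linear discriminant analysis on four morphological measurements
fit <- lda(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
           data = iris)
pred <- predict(fit)
table(predicted = pred$class, actual = iris$Species)  # classification table
mean(pred$class == iris$Species)                      # proportion correct
```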
We searched Web of Science for articles matching 'morpholog* and discriminant' in the topic field for the years 1980 to 2011. Only 24 papers were identified before 1991, and these were excluded. To reduce the total number of articles, we chose to focus on odd years from 1991 to 2011, leaving 1009 papers. These papers were randomly assigned to the working group for data collection. Papers were excluded if the article text was not available to us either online or via the University of British Columbia library, if the analysis did not include morphological data from a biological organism, or if the paper did not report the results of a DFA. Papers were also excluded if the data were already available as a supplementary file, appendix, or on another website, as curation of these datasets is no longer the responsibility of the author. Due to the effort involved in checking all 1009 papers for details on analysis and author contact information, we stopped data collection after a random subset of 526 papers had been assessed. Of these, 10 did not meet the inclusion criteria (e.g. were not DFAs on morphology, or had data already available in a supplementary file or appendix), and were dropped. This left 516 papers, with a minimum of 26 papers for any given year, and over 40 for most years (Table 1). Interestingly, we found that only 2.4% (13 of 529) of otherwise eligible papers had made their data available at publication: one paper each from 1999, 2001, 2003 and 2007, three papers in 2005, two in 2009, and four in 2011.
Data collected from the papers included information on the DFA used and its results (for the reproducibility analyses), and author contact information. In every case, we attempted to find email addresses for the first, corresponding, and last authors of every paper. Often these were not mutually exclusive (e.g. a single author), and there were many different combinations. We attempted to extract the emails from the article text, but quickly determined that older papers would be more likely to have non-working email addresses [12] or no emails at all. We therefore also searched online for a maximum of five minutes per author for a recent or current email address.
We used R [16] to generate data request emails, with all available email addresses in the 'to:' field, and used an R script to automatically send them out on April 15th, 2013.
Reminder emails were sent out to unresponsive authors three weeks later (May 8th, 2013). When authors replied asking for more information, we provided additional details as required. The text for these two emails is included in the Supplemental Material. The recording period for author responses ended on the 5th of June, 2013, and the papers were sorted into different outcomes: 1) all email addresses generated an error message, 2) no response received, 3) a response was received but gave no information about the status of the data, 4) data lost or stored on obsolete hardware, 5) the authors had the data but were unwilling to share, or 6) data received. Since outcomes (5) and (6) both implied that the dataset still existed, we combined these into a single outcome 'data extant'.
We used logistic regression to investigate the relationship between the age of a paper and the probability that the data were still extant. Further sub-analyses were conducted on subsets of the data to investigate the relationships between the age of the paper and 1) the probability that at least one email appeared to work; 2) the conditional probability of a response given that at least one email appeared to work; 3) the conditional probability of getting a response that indicated the status of the data (data lost, exist but unwilling to share, or data shared) given that a response was received; and finally 4) the conditional probability that the data were extant given that an informative response was received. We also used Poisson regressions to investigate the relationship between article age and the number of emails found in the paper or online. Lastly, logistic regressions were used to examine how article age affected the chance that an email address appeared to work. All analyses were carried out in R 3.0.1 [16]; the analysis code and data are available on Dryad (doi:10.5061/dryad.q3g37).

Figure 1. The effect of article age on four obstacles to receiving data from the authors. A) Predicted probability that the paper had at least one apparently working email. B) Predicted probability of receiving a response, given that at least one email was apparently working. C) Predicted probability of receiving a response giving the status of the data, given that we received a response. D) Predicted probability that the data were extant (either 'shared', or 'exist but unwilling to share') given that we received a useful response. In all panels, the line indicates the predicted probability from the logistic regression, the grey area shows the 95% CI of this estimate and the red dots indicate the actual proportions from the data.
Figure 2. The effect of article age on the number and status of author emails. A) Number of emails found in the paper against article age. B) Predicted probability that an individual email from the paper appeared to work against article age. C) Number of emails found by searching on the web against article age. D) Predicted probability that an individual email found on the web appeared to work against article age. The line indicates the predicted probability from a Poisson (A, C) or logistic (B, D) regression, the grey area shows the 95% CI of this estimate and the red dots indicate the actual proportions from the data.
Table 1. Breakdown of data availability by year of publication. Data are displayed as N [%]; the percentages are calculated by rows.
Batch culture and repeated-batch culture of Cunninghamella bainieri 2A1 for lipid production as a comparative study
This research presents a comparative study of fungal lipid production by the locally isolated strain Cunninghamella bainieri 2A1 in batch culture and repeated-batch culture using a nitrogen-limited medium. Lipid production in the batch culture was conducted to study the effect of different agitation rates on the simultaneous consumption of the ammonium tartrate and glucose sources. Lipid production in the repeated-batch culture was studied by considering the effect of the harvesting time and the harvesting volume of the culture broth on lipid accumulation. The batch cultivation was carried out in a 500 ml Erlenmeyer flask containing 200 ml of the fresh nitrogen-limited medium. The microbial culture was incubated at 30 °C under different agitation rates of 120, 180 and 250 rpm for 120 h. The repeated-batch culture was performed at three harvesting times of 12, 24 and 48 h using four harvesting volumes of 60%, 70%, 80% and 90%. Experimental results revealed that the nitrogen source (ammonium tartrate) was fully utilized by C. bainieri 2A1 within 24 h at all agitation rates tested. It was also observed that a high amount of glucose in the culture medium was consumed by C. bainieri 2A1 at a 250 rpm agitation speed during the batch fermentation. The results further showed that the highest lipid concentration of 2.96 g/L was obtained at an agitation rate of 250 rpm at 120 h cultivation time, with a maximum lipid productivity of 7.0 × 10^-2 mg/ml/h. On the other hand, the highest lipid concentration produced in the repeated-batch culture was 3.30 g/L at the first cycle of the 48 h harvesting time using a 70% harvesting volume, while 0.23 g/L gamma-linolenic acid (GLA) was produced at the last cycle of the 48 h harvesting time using an 80% harvesting volume.
Introduction
In recent years, the production of polyunsaturated fatty acids (PUFAs) such as GLA, arachidonic acid and eicosapentaenoic acid by oleaginous microorganisms has received great interest from researchers. Among these fatty acids, GLA has been used extensively in biomedical products, nutritionals and health supplements (Zikou et al., 2013). Many research studies have been carried out over the last decades to develop lipid production, aiming at the economical production of microbial lipids as an alternative to plant- and animal-derived oils. In this view, microbial oils are superior to plant oils and animal fats due to their shorter production cycle, higher potential for large-scale production and higher sustainability under climate change (Li et al., 2008).
Previous studies have revealed that a high amount of lipid can be accumulated by fungal species of Cunninghamella, depending on the fermentation methods and culture conditions (Fakas et al., 2007, 2009; Somashekar et al., 2003). Similar studies have shown that high lipid accumulation is attained by Cunninghamella bainieri 2A1 in submerged batch culture (Taha et al., 2010). C. bainieri 2A1 is known to be capable of producing up to 30% lipid (g/g biomass), of which 10-15% is GLA. In this regard, nutritional intake of GLA and other PUFAs has been used in the clinical treatment of human diseases such as elevated blood cholesterol, acute and chronic inflammation, atopic eczema, hypertension, Crohn's disease, rheumatoid arthritis and asthma (Shuib et al., 2014; Vadivelan and Venkateswaran, 2014).
The production of lipid by oleaginous fungi is highly dependent on medium composition. It has been observed that lipid production by C. bainieri 2A1 is related to the stress conditions created by nitrogen deficiency in the medium; lipid synthesis by this strain is also affected by the carbon and nitrogen concentrations in the culture medium (Taha et al., 2010). However, little is known about the effect of the agitation rate on the simultaneous consumption of nitrogen and glucose in the culture medium in relation to lipid production by C. bainieri 2A1. Agitation rate is an important factor affecting microbial growth, especially in shear-sensitive microorganisms. Higher agitation rates result in a better oxygen supply, which in turn favors cell growth. Hence, optimization of the agitation rate is essential to provide sufficient oxygen to the mycelia and to increase their metabolic activities throughout the fermentation process (Abd-Aziz et al., 2008; Sun et al., 2012).
Fungal lipid fermentation can also be performed as a repeated-batch culture. The repeated-batch culture is a fermentation mode which offers many advantages over the microbial batch culture, including better depletion of the medium in the bioreactor at the end of cultivation, reuse of microbial cells for subsequent fermentation runs, higher cell concentration in the culture and less time required for process operation. Moreover, the repeated-batch culture is expected to increase cell productivity while ensuring a high cell growth rate (Huang et al., 2008; Radmann et al., 2007). The repeated-batch culture is, however, affected by operating factors; in particular, it has been observed to be influenced by the harvesting times and harvesting volumes of the culture broth (Jin et al., 2011; Masuda et al., 2011).
A number of studies have already been performed on lipid accumulation by various fungal strains in batch fermentation (Bellou et al., 2014; Fakas et al., 2009; Gao et al., 2013; Papanikolaou et al., 2004; Zikou et al., 2013). However, much less work has addressed fungal lipid synthesis in repeated-batch cultivation. The current research was therefore performed to investigate lipid production by C. bainieri 2A1 in the batch culture and the repeated-batch culture as a comparative study using a nitrogen-limited medium.
Furthermore, a detailed study on the use of different agitation rates was carried out to investigate the effects of agitation intensity on the depletion of glucose and ammonium tartrate (the carbon source and nitrogen source, respectively) in the culture medium for the enhancement of lipid production. In addition, the effect of two pivotal factors, namely the harvesting time and the harvesting volume of the culture medium, on lipid production by C. bainieri 2A1 in the repeated-batch culture was studied.
Materials and methods
2.1. Microorganism and culture medium

C. bainieri 2A1 was obtained from the School of Biosciences and Biotechnology, Faculty of Science and Technology, Universiti Kebangsaan Malaysia. Stock culture was maintained on potato dextrose agar (PDA) at 4 °C. Inoculum was prepared from a spore suspension containing 10^6 spores/ml harvested from 7-day-old PDA plates. The nitrogen-limited medium employed by Kendrick and Ratledge (1992) was modified and then utilized in this study with the composition as follows (in g/L): glucose, 30; ammonium tartrate (C4H12N2O6), 1.0; KH2PO4, 7.0; Na2HPO4, 2.0; MgSO4·7H2O, 1.5; CaCl2·2H2O, 0.1; FeCl3·6H2O, 0.008; ZnSO4·7H2O, 0.0001; CuSO4·5H2O, 0.001; Co(NO3)2·6H2O, 0.0001 and MnSO4·5H2O, 0.0001. The initial pH of the culture medium was adjusted to 6.0 using 1.0 M HCl or 1.0 M NaOH. Seed culture was prepared by transferring 20 ml of spore suspension into 180 ml of the growth medium. The seed culture was then incubated at 30 °C and a 250 rpm agitation rate for 48 h.
Batch and repeated-batch cultivation
The batch cultivation was carried out by adding 10% (v/v) of seed culture (20 ml) into 180 ml of fresh medium in four Erlenmeyer flasks (500 ml) to make a final 200 ml of culture medium in each flask. The inoculated batch cultures were incubated at 30 °C on a rotary shaker at agitation rates of 120, 180 and 250 rpm for 120 h. The repeated-batch fermentation was run in such a way that four cycles of the batch culture were continually repeated under the same conditions. Three time intervals of fermentation were studied in the repeated-batch culture: a 12 h time interval (12 h, 24 h, 36 h and 48 h), a 24 h time interval (24 h, 48 h, 72 h and 96 h) and a 48 h time interval (48 h, 96 h, 144 h and 192 h). The first cycle of the repeated-batch culture was carried out at 30 °C and a 250 rpm agitation rate by transferring 10% (v/v) of seed culture (20 ml) into 180 ml of fresh medium in four Erlenmeyer flasks (500 ml) to make a final 200 ml of culture medium in each flask. The second to fourth cycles of the repeated-batch culture at each time interval were conducted as described by Dashti et al. (2015). At the end of each cycle, predetermined volumes of the culture medium (60%, 70%, 80% and 90% v/v) were harvested; these were defined as the harvesting volume (ml). The time intervals used for all cycles of the repeated-batch culture were defined as the harvesting time (h).
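As a purely illustrative aside (not a calculation from this paper), the carry-over of spent broth implied by this design is easy to sketch in R: if a fraction h of the broth is harvested and replaced with fresh medium each cycle, the share of the original broth remaining after n cycles is (1 - h)^n.

```r
h <- 0.70    # 70% harvesting volume
n <- 1:4     # four cycles
(1 - h)^n    # 0.3000 0.0900 0.0270 0.0081 of the original broth remains
```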
Analytical methods
The fungal mycelia were harvested by filtration of 100 ml of culture suspension using filter paper (Whatman No. 1). A 5 ml volume of culture medium was collected after filtration and used for the glucose and ammonium tartrate analyses. Glucose was determined using a glucose oxidase kit (Boehringer GOD-PERID test kit). Ammonium tartrate was determined by the indophenol method (Chaney and Marbach, 1962). The filtered mycelia were washed with 200 ml of distilled water, stored at −20°C for 24 h and then freeze-dried (Shell Freeze Dry, LABCONCO LYPH.LOCK6) for 24 h to obtain the dry weight. The dry weight of the microbial cells was determined using a balance (AND GR-200) and was used to determine the biomass, lipid concentration and lipid percentage (lipid content). Dried mycelia were then ground using a pestle and mortar, followed by lipid extraction. Lipid was extracted overnight using a 2:1 (v/v) mixture of chloroform and methanol before filtering. The filtrate was washed with 150 ml of NaCl (1% w/v), followed by addition of 150 ml of distilled water (Folch et al., 1957). The chloroform layer was collected and evaporated using a rotary evaporator (BUCHI Rotavapor R-124). The lipid residue was dissolved in a minimal amount of diethyl ether and transferred to a vial.
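The metrics derived from these measurements reduce to simple ratios; a minimal sketch in Python, where the dry weights are hypothetical placeholders chosen so the outputs echo the magnitudes reported below (8.1 g/L biomass, ~32% lipid content):

```python
# How the gravimetric measurements above translate into the reported metrics.
# The sample weights below are illustrative placeholders, not measured data.
def biomass_g_per_l(dry_cell_weight_g, filtered_volume_ml):
    return dry_cell_weight_g / (filtered_volume_ml / 1000.0)

def lipid_g_per_l(lipid_weight_g, filtered_volume_ml):
    return lipid_weight_g / (filtered_volume_ml / 1000.0)

def lipid_content_percent(lipid_conc, biomass_conc):
    # "lipid percentage" = lipid concentration / biomass concentration x 100
    return 100.0 * lipid_conc / biomass_conc

x = biomass_g_per_l(0.81, 100.0)          # 0.81 g dry cells from 100 ml -> 8.1 g/L
p = lipid_g_per_l(0.26, 100.0)            # 0.26 g extracted lipid -> 2.6 g/L
print(x, p, lipid_content_percent(p, x))  # 8.1 g/L, 2.6 g/L, ~32%
```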
Production of biomass and lipid in the batch culture
The variations in the nitrogen content (ammonium tartrate) of the culture medium at different agitation rates in the batch culture are given in Fig. 1. As can be seen, ammonium tartrate was depleted within 24 h at all agitation rates tested. The results show that C. bainieri 2A1 could grow at a limiting ammonium tartrate concentration (1.0 g/L), with no considerable effect of agitation rate on nitrogen consumption, indicating that this strain has a strong capability for assimilating organic nitrogen compounds.
To determine glucose consumption by C. bainieri 2A1 during biomass and lipid production, the variations in glucose concentration in the culture medium were followed from an initial concentration of 30 g/L (Fig. 1). At 24 h of cultivation, the largest amount of glucose had been consumed at the 250 rpm agitation rate, the residual concentration reaching 11.9 g/L, compared with residual glucose concentrations of 27 g/L and 19 g/L at agitation rates of 120 and 180 rpm, respectively. At the end of the 120 h cultivation, only half of the glucose (15.0 g/L) had been consumed by C. bainieri 2A1 when the agitation rate was set at 120 rpm, whereas glucose consumption was higher at agitation rates of 180 and 250 rpm.
Figure 1 The variations in glucose, nitrogen source (ammonium), biomass and lipid concentration in nitrogen-limited medium in the batch culture of C. bainieri 2A1 at agitation rates of 120 rpm, 180 rpm and 250 rpm over 120 h of fermentation.
The glucose concentration detected at the end of the batch culture revealed that increasing the agitation speed up to 180 rpm concomitantly increased glucose consumption within a shorter time. However, no notable differences in the glucose consumed by C. bainieri 2A1 were found between the 180 and 250 rpm agitation rates at the end of the cultivation time (Fig. 1). This finding indicated that a higher agitation speed could positively influence glucose consumption, which in turn may affect biomass and lipid production (Fig. 1). The findings also indicated that a glucose concentration of 30 g/L could well support growth and product formation by C. bainieri 2A1. Fig. 1 also depicts the biomass and lipid concentrations in the batch culture of C. bainieri 2A1 at the agitation rates studied. Biomass and lipid concentration increased sharply in the first 24 h of fermentation, suggesting that glucose consumption by C. bainieri 2A1 drove intense lipid synthesis while the nitrogen source was being depleted during the first 24 h of cultivation. Subsequently, biomass and lipid accumulation rose to their maximum levels. As can be seen, increasing the agitation rate from 120 to 250 rpm had a positive effect on both biomass and lipid concentration. The highest biomass concentration (8.1 g/L) and lipid concentration (2.96 g/L) were obtained at an agitation rate of 250 rpm at 96 h and 120 h of fermentation, respectively (Fig. 1). These observations point to a favorable effect of elevated agitation speeds on biomass and lipid production by C. bainieri 2A1. The nitrogen requirement for lipid production varies among microorganisms. Nitrogen limitation in the culture medium is known to stimulate microbial lipid biosynthesis in oleaginous microorganisms: lipid accumulation is triggered by cellular nitrogen depletion while glucose continues to be assimilated. During the lipid synthesis phase, the proportion of the neutral lipid fraction is high; however, it decreases when biomass production declines (Makri et al., 2010; Taha et al., 2010). Accordingly, in the present study nitrogen was used as the limiting factor in the culture medium of C. bainieri 2A1. Supporting this approach, Cunninghamella sp. has been reported to have a high capacity for lipid production in nitrogen-limited medium (Gema et al., 2002; Ratledge, 1997; Taha et al., 2010).
During the growth of C. bainieri 2A1, glucose (30 g/L) was utilized most intensively at the highest agitation rate tested (250 rpm). This was possibly due to better mixing of the culture medium, higher oxygen availability and better nutrient transfer to the microbial cells at 250 rpm, which resulted in increased metabolic activity of C. bainieri 2A1 for lipid synthesis (Abd-Aziz et al., 2008; Fuentes-Grünewald et al., 2012).
The variations in lipid percentage (lipid concentration/biomass concentration × 100) at the agitation rates studied are depicted in Fig. 2. Cultivation of C. bainieri 2A1 at an agitation rate of 250 rpm gave the highest lipid percentage (32%) at 96 h of cultivation. This finding indicated that an increased agitation rate favored microbial growth and lipid synthesis by C. bainieri 2A1; however, cultivation beyond 96 h brought about a decrease in lipid percentage at 120 h because of lipid degradation. This was possibly due to turnover of the lipid produced by C. bainieri 2A1 after the lipogenic phase, in which part of the lipid was utilized to produce biomass, accompanied by glucose exhaustion in the growth medium. Thus, the lipid formed in the biomass started to wane from 96 h to 120 h of fermentation (Fakas et al., 2007).
Fig. 2 also shows that a high lipid percentage was obtained at the 250 rpm agitation rate at 48 h of fermentation compared with the 120 and 180 rpm agitation rates, implying that a high lipid content (lipid percentage) could be attained in a shorter time at an agitation intensity of 250 rpm. Tao and Zhang (2007) reported a maximum lipid content of 25% for Cunninghamella echinulata at an agitation rate of 150 rpm after 96 h of batch cultivation, while Papanikolaou et al. (2004) reported that the highest lipid accumulation by C. echinulata was measured at a 170 rpm agitation rate after 310-400 h of batch fermentation.
The highest biomass and lipid productivities at all agitation rates were achieved after 24 h of cultivation (Table 1). From an economic point of view, this indicates that, despite the higher lipid and biomass concentrations at 96 h, lipid production is more cost-effective at 24 h. Biomass and lipid productivity exhibited a similar trend at 24 h as the agitation rate was increased from 120 to 250 rpm. As shown in Table 1, there were no considerable differences in lipid and biomass productivity between the 120 and 180 rpm agitation rates, with biomass productivity ranging from 12.9 × 10⁻² to 13.75 × 10⁻² mg/ml/h and lipid productivity ranging from 2.29 × 10⁻² to 2.5 × 10⁻² mg/ml/h. However, raising the agitation rate to 250 rpm led to a considerable rise in biomass and lipid productivity, with values as high as 31.0 × 10⁻² mg/ml/h and 7.0 × 10⁻² mg/ml/h, respectively, corroborating the positive effect of increased agitation on microbial growth and on the metabolic activities underlying biomass and lipid production.
Figure 2 The lipid percentage measured for the batch cultivation of C. bainieri 2A1 in nitrogen-limited medium at agitation rates of 120 rpm, 180 rpm and 250 rpm for 120 h of fermentation.
Table 2 shows the approximate values of the yield of product (lipid) on substrate (Yp/s), the yield of product on biomass (Yp/x) and the yield of biomass on substrate (Yx/s) obtained at 24 h of the batch culture. As can be observed, the Yp/s values obtained at agitation rates of 120 rpm (5.0 × 10⁻²) and 180 rpm (5.45 × 10⁻²) were lower than the Yp/s value measured at the 250 rpm agitation rate, which increased sharply to 9.4 × 10⁻², indicating a limitation in microbial nutrient absorption and gas exchange at low agitation speeds. As can be seen from Table 2, the highest Yp/x and Yx/s were measured at the 250 rpm agitation rate, with values as high as 22.0 × 10⁻² and 41.0 × 10⁻², respectively, supporting the conclusion that the maximum lipid produced relative to glucose consumed was obtained at an agitation speed of 250 rpm. These findings are likely related to the fact that the agitation rate affects microbial growth in an aerobic process by maintaining the required level of dissolved oxygen; higher oxygen supply to the microbial cells brought about higher productivity. However, further study showed that increasing the agitation rate above 250 rpm caused a decrease in biomass productivity and Yx/s (data not shown), corroborating the cost-effectiveness of the 250 rpm agitation rate for economical production.
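As a cross-check, the yield coefficients in Table 2 are consistent with the 24 h productivities in Table 1 and the residual glucose in Fig. 1; the arithmetic below is ours (treating mg/ml/h as equivalent to g/L/h):

```python
# Cross-check of the 24 h, 250 rpm productivity and yield values.
# Glucose consumed is taken from the residual concentration in Fig. 1.
t_h = 24.0
glucose_consumed = 30.0 - 11.9                      # g/L (initial - residual)
biomass = 31.0e-2 * t_h                             # g/L, from biomass productivity
lipid = 7.0e-2 * t_h                                # g/L, from lipid productivity

Y_xs = biomass / glucose_consumed                   # yield of biomass on substrate
Y_ps = lipid / glucose_consumed                     # yield of lipid on substrate
Y_px = lipid / biomass                              # yield of lipid on biomass

print(f"Y_x/s = {Y_xs:.2f}  (reported 41 x 10^-2)")   # ~0.41
print(f"Y_p/s = {Y_ps:.3f}  (reported 9.4 x 10^-2)")  # ~0.093
print(f"Y_p/x = {Y_px:.2f}  (reported 22 x 10^-2)")   # ~0.23
```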
Production of biomass and lipid in the repeated-batch cultivation
Fig. 3 illustrates the changes in biomass concentration during the repeated-batch culture of C. bainieri 2A1 at 12 h, 24 h and 48 h harvesting times using 60-90% harvesting volumes. For all samples, the highest biomass concentration was obtained in the first cycle of the repeated-batch culture at all three harvesting times tested, and the fungal cell concentration decreased thereafter, indicating that an increasing number of batch cycles had an adverse effect on microbial cell growth and biomass concentration.
From Fig. 3 it can be seen that the maximum cell concentrations obtained at the 12 h, 24 h and 48 h harvesting times were 4.3 g/L, 7.37 g/L and 11.12 g/L, respectively, all detected in the first cycle of the repeated-batch culture using a 90% harvesting volume. Conversely, the last cycle of the repeated-batch culture using a 90% harvesting volume gave the minimum biomass concentrations, with values of 2.2 g/L, 2.9 g/L and 8.1 g/L at harvesting times of 12 h, 24 h and 48 h, respectively. Comparing the three harvesting times tested, the 48 h harvesting time gave the highest biomass concentrations in the first cycle of the repeated-batch culture, with values of 10.35 g/L, 11.0 g/L, 11.02 g/L and 11.12 g/L at 60%, 70%, 80% and 90% harvesting volumes, respectively.
Furthermore, the total biomass concentration over the four cycles at the 12 h harvesting time (12.14 g/L at a 90% harvesting volume, 12.81 g/L at 80%, 14.02 g/L at 70% and 12.32 g/L at 60%) differed considerably from the corresponding values at the 24 h harvesting time (17.42 g/L, 17.25 g/L, 18.95 g/L and 18.37 g/L) and at the 48 h harvesting time (36.69 g/L, 38.99 g/L, 39.31 g/L and 38.27 g/L for 90%, 80%, 70% and 60% harvesting volumes, respectively). These findings demonstrate the considerable effect of the harvesting times and harvesting volumes studied on biomass production. Moreover, the repeated-batch culture showed an increase in biomass production (11.12 g/L) compared with the biomass produced in the batch culture (8.1 g/L). Similarly, Her et al. (2004) found that repeated-batch culture gave higher product formation than batch culture. Fig. 4 depicts the lipid concentration of the samples during the four cycles of the repeated-batch culture. At all harvesting times tested, the maximum lipid concentrations were obtained in the first cycle of the repeated-batch culture, and lipid production decreased gradually from the first cycle to the last cycle at each harvesting time, implying that increased repetition of batch cycles had no favorable effect on the metabolism of C. bainieri for lipid production. The highest lipid concentrations at the 12 h and 24 h harvesting times were 0.9 g/L and 2.0 g/L, respectively, when a 90% harvesting volume was utilized, while the maximum lipid concentration at the 48 h harvesting time was 3.30 g/L with a 70% harvesting volume, underscoring the key roles of harvesting time and harvesting volume in lipid production by C. bainieri 2A1 under repeated-batch culture. Fig. 4 also shows that the minimum lipid concentrations at the 12 h and 24 h harvesting times were 0.1 g/L and 0.4 g/L, respectively, in the fourth cycle of the repeated-batch culture with a 90% harvesting volume, while the second and third cycles at the 48 h harvesting time exhibited the lowest lipid concentration, with a similar value of 1.8 g/L, at a 60% harvesting volume. Evidently, increasing the harvesting time from 12 h to 48 h caused a progressive increase in lipid production, indicating a positive effect of increased harvesting time on the metabolic activity of C. bainieri 2A1 in the lipid production process. The findings of this study showed that lipid production was enhanced in the repeated-batch culture (3.30 g/L) compared with the batch culture (2.96 g/L).
Table 1 The productivity of biomass, lipid and GLA at different agitation rates at 24 h of batch cultivation of C. bainieri 2A1 in nitrogen-limited medium.
As mentioned previously, biomass and lipid production decreased after the first cycle at the three time intervals of 12 h, 24 h and 48 h (Figs. 3 and 4). These findings agree with the results of Xiao et al. (2011), who showed that a large amount of astaxanthin was produced by Phaffia rhodozyma in the first cycle of a repeated-batch culture and then decreased over the subsequent seven cycles. The decrease in biomass concentration and lipid production after the first cycle could be attributed to pellet formation after the first cycle, as the mycelia of many fungi in liquid culture can either disperse or form pellets during growth (Makri et al., 2010; Pazouki and Panda, 2000).
Figs. 3 and 4 reveal that the different harvesting times tested had varied effects on biomass and lipid production. Harvesting time can affect cellular growth, product formation, or cellular metabolism, depending on the host system and the range of harvesting times applied. Moreover, biomass concentration depends on cycle time, so microbial productivity can change as a function of cycle time (Bhargava et al., 2005). In this regard, André et al. (2010) noted that harvesting time had important effects on the distribution of cellular fatty acids among the various lipids produced. They observed a decrease in biomass concentration (approximately 10-20%) at the end of fermentation runs with three different harvesting times, owing to the formation of pellets in the repeated-batch culture.
Figure 3 The biomass concentration measured in the repeated-batch cultivation of C. bainieri 2A1 at 12 h, 24 h and 48 h harvesting times, with four cycles of batch repetition, using 60%, 70%, 80% and 90% harvesting volumes.
Figure 4 The lipid concentration measured in the repeated-batch cultivation of C. bainieri 2A1 at 12 h, 24 h and 48 h harvesting times, with four cycles of batch repetition, using 60%, 70%, 80% and 90% harvesting volumes.
Fig. 5 shows the variations in lipid percentage (lipid accumulated in the dried biomass) at the three harvesting times of the repeated-batch culture. The highest lipid percentage, 30%, was obtained in the first cycle of the repeated-batch culture using a 70% harvesting volume at the 48 h harvesting time. Conversely, the lowest lipid percentage (5%) was measured in the last cycle of the repeated-batch culture at the 12 h harvesting time with a 90% harvesting volume. As can be seen from Fig. 5, the first cycle of each harvesting time gave the maximum lipid percentage, and increasing the number of repeated cycles reduced the lipid content, showing the deleterious effect of repeated batch cycles on lipid percentage. As with lipid production, increasing the harvesting time from 12 h to 48 h brought about a rise in lipid percentage (Fig. 5). The figure shows that the highest lipid percentages of 22.2% at the 12 h harvesting time and 27.1% at the 24 h harvesting time were produced in the first cycle of the repeated-batch culture using a 90% harvesting volume. These findings corroborate the pivotal effects of harvesting time and harvesting volume on lipid percentage. Kim et al. (2006) demonstrated that high harvesting volumes were necessary for maximum production when a 12 h harvesting time was applied. The variations observed across the harvesting volumes and harvesting times tested in the repeated-batch culture may reflect the fact that the optimal harvesting volume and harvesting time depend on the characteristics of the selected microorganism and its cell growth; hence, under different conditions, harvesting volumes and harvesting times may affect microorganisms differently (Radmann et al., 2007).
Production of GLA
Regarding the effects of different agitation rates on GLA concentration in the batch culture, no considerable differences in GLA concentration were measured at the 120 and 180 rpm agitation rates after 24 h of batch culture, with values of 0.042 g/L and 0.046 g/L, respectively. However, the higher agitation speed of 250 rpm enhanced GLA production up to 0.13 g/L after 24 h of batch fermentation, with a productivity value of 0.54 × 10⁻² mg/ml/h (Table 1). Fig. 6 illustrates GLA production at the three harvesting times in the repeated-batch culture. The experimental results showed notable differences between the GLA produced at the three harvesting times tested. As can be seen from Fig. 6, the highest GLA concentrations, 0.10 g/L and 0.23 g/L, were measured in the fourth cycle of the 24 h and 48 h harvesting times, respectively, using an 80% harvesting volume. In contrast, the GLA concentration remained below 0.03 g/L at the 12 h harvesting time at all harvesting volumes tested (Fig. 6). GLA production at the 24 h and 48 h harvesting times increased gradually from the first cycle to the last cycle at the 60%, 70%, 80% and 90% harvesting volumes studied, contrary to the trends in biomass and lipid production. Moreover, GLA production increased from the 12 h to the 48 h harvesting time, suggesting a favorable effect of increased harvesting time on GLA production. This finding could be attributed to the fact that the reduced biomass and lipid concentrations at elevated harvesting times, together with the repetition of batch cycles, resulted in morphological changes in the fungal mycelia and pellet formation by C. bainieri 2A1, which shifted the metabolic activity of the fungal cells toward increased GLA production (Dashti et al., 2015).
Figure 5 The lipid percentage measured in the repeated-batch cultivation of C. bainieri 2A1 at 12 h, 24 h and 48 h harvesting times, with four cycles of batch repetition, using 60%, 70%, 80% and 90% harvesting volumes.
The fatty acid composition of the lipid produced in the repeated-batch culture was determined by measuring the fatty acid content of the fungal lipid at the 48 h harvesting time using 60-90% harvesting volumes (Table 3). The main fatty acid produced was oleic acid (Δ9 C18:1), followed by palmitic acid (C16:0). Linoleic acid is a PUFA of the omega-6 family, carrying a double bond six carbons from the omega carbon, and gamma-linolenic acid (GLA, C18:3 n-6) is likewise an omega-6 fatty acid (Stoll, 2002). The metabolic pathway leading to these fatty acids proceeds from stearic acid (C18:0), which is converted to oleic acid and then to linoleic acid by Δ9- and Δ12-desaturase, respectively. Finally, GLA is produced from linoleic acid in a reaction catalyzed by Δ6-desaturase, which introduces a double bond at the sixth carbon counting from the carboxyl end (Hagan et al., 2006; Horrobin, 1993).
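The desaturation route described above can be summarized compactly; the data structure below is a plain restatement of the pathway in the text, not a metabolic model:

```python
# A compact representation of the desaturation route described above
# (stearic acid -> oleic -> linoleic -> GLA). Enzyme assignments follow
# the text; this is an illustrative data structure only.
DESATURATION_PATHWAY = [
    ("stearic acid",                 "C18:0",     None),
    ("oleic acid",                   "C18:1 n-9", "delta-9 desaturase"),
    ("linoleic acid",                "C18:2 n-6", "delta-12 desaturase"),
    ("gamma-linolenic acid (GLA)",   "C18:3 n-6", "delta-6 desaturase"),
]

def print_route(pathway):
    """Print each conversion step with the enzyme that catalyzes it."""
    for (name_a, short_a, _), (name_b, short_b, enzyme) in zip(pathway, pathway[1:]):
        print(f"{name_a} ({short_a}) --[{enzyme}]--> {name_b} ({short_b})")

print_route(DESATURATION_PATHWAY)
```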
Conclusions
This study showed the enhancement of lipid production in the batch culture of C. bainieri 2A1 grown in nitrogen-limited medium under an elevated agitation rate. Increasing the agitation rate caused higher glucose consumption, which favored biomass production and lipid accumulation in the nitrogen-limited medium. Moreover, this study revealed that the repeated-batch culture is a promising and reliable fermentation system for lipid synthesis compared with the batch culture. The efficiency of the repeated-batch culture depended on the harvesting time and harvesting volume, which determined biomass synthesis and lipid production. The findings obtained from the repeated-batch culture of C. bainieri 2A1 in nitrogen-limited medium revealed that the highest concentrations of biomass (11.12 g/L) and lipid (3.30 g/L) were detected in the first cycle at the 48 h harvesting time using 90% and 70% harvesting volumes, respectively, whereas the maximum GLA concentration of 0.23 g/L was produced in the last cycle at the 48 h harvesting time when an 80% harvesting volume was utilized.
Transcriptional Activation by MEIS1A in Response to Protein Kinase A Signaling Requires the Transducers of Regulated CREB Family of CREB Co-activators
The transcription factor encoded by the murine ecotropic integration site 1 gene (MEIS1) is a partner of HOX and PBX proteins. It has been implicated in embryonic patterning and leukemia, and causally linked to restless legs syndrome. The MEIS1A C terminus harbors a transcriptional activation domain that is stimulated by protein kinase A (PKA) in a manner dependent on the co-activator of cAMP response element-binding protein (CREB), CREB-binding protein (CBP). We explored the involvement of another mediator of PKA-inducible transcription, namely the CREB co-activators transducers of regulated CREB activity (TORCs). Overexpression of TORC1 or TORC2 bypassed PKA for activation by MEIS1A. Co-immunoprecipitation experiments demonstrated a physical interaction between MEIS1 and TORC2 that is dependent on the MEIS1A C terminus, whereas chromatin immunoprecipitation revealed PKA-inducible recruitment of MEIS1, PBX1, and TORC2 on the MEIS1 target genes Hoxb2 and Meis1. The MEIS1 interaction domain on TORC1 was mapped to the N-terminal coiled-coil region, and TORC1 mutants lacking this domain attenuated the response to PKA on a natural MEIS1A target enhancer. Thus, TORCs physically cooperate with MEIS1 to achieve PKA-inducible transactivation through the MEIS1A C terminus, suggesting a concerted action in developmental and oncogenic processes.
The homeodomain is a DNA-binding structure shared by numerous transcription factors throughout eukaryotes, and is most commonly 60 amino acids in length (1). The three-amino acid loop extension class of homeoproteins is so named for an extra 3 residues in the loop between helices 1 and 2 of the typical homeodomain (2). Members of the three-amino acid loop extension class in mammals include the MEIS, PREP, and PBX families, which participate in relatively complex interactions between themselves and with the products of another group of homeodomain-containing proteins, the HOX family (3). PBX proteins form cooperative DNA-binding heterodimers with MEIS, PREP, or HOX proteins, and coordinate the formation of higher order heterotrimeric complexes of PBX, HOX and MEIS, or PREP (3,4). DNA-binding PBX homodimers have also been noted, further extending the possible permutations for these partners (5). Targets of PBX·MEIS heterodimers include the bovine CYP17 gene (6), whereas those of PBX·MEIS·HOX trimers include the Hoxb1 autoregulatory element (ARE) and the Hoxb2 rhombomere 4 (r4) enhancer (7-9).
The Meis1 gene was identified near a site of frequent retroviral insertion leading to acute myeloid leukemia in BXH-2 mice (10). It has been further associated with human and mouse leukemias through frequent coordinated up-regulation in these cancers, and through its ability to potentiate the onset of acute myeloid leukemia provoked by Hoxa7 and Hoxa9 ectopic expression in mouse bone marrow (11-18). More recently, intronic polymorphisms in MEIS1 have been linked to restless legs syndrome (19,20).
A C-terminal domain of the MEIS1A isoform is indispensable for its oncogenic properties; however, this function can be entirely rescued by replacement of this C-terminal domain with the potent transcriptional activation domain of VP16, suggesting that the MEIS1A C terminus exerts its oncogenic functions through transcriptional activation of target genes (21,22).
The transcriptional complexes formed by three-amino acid loop extension and HOX family homeoproteins recruit a variety of coregulators with sometimes opposing functions. HOXD4 and HOXB7 both recruit the histone acetyltransferase coactivator CBP to their N termini, whereas the PBX1 N and C termini exert negative effects on transcription by binding corepressor complexes containing NCOR/SMRT and HDAC1 (3,4,23-25). A role for PBX and/or MEIS in mediating transcriptional activation by PKA was first suggested for the bovine CYP17 gene (6,26). We have demonstrated that PBX·HOX complexes can be converted from repressors to activators by PKA signaling, and that this is in part due to increased association between HOXD4 and CBP (23). More recently, we have shown that the association of MEIS1A or MEIS1B with PBX and HOX contributes a PKA-inducible and CBP-dependent transcriptional activation domain located in the MEIS1A/B C termini (8). The mapping of this transactivation function to the same domain implicated in MEIS1A-mediated leukemogenesis strongly supports the notion that transcriptional activation is the basis for the oncogenic properties of MEIS1A (21,27). At least some of the embryonic patterning functions of MEIS family proteins are also achieved by transcriptional activation (4,28-31).
CREB family transcription factors bind to cAMP response elements (CREs) within target genes, and are targets of PKA (32,33). Phosphorylation of Ser-133 on CREB provides a high affinity binding site for CBP/p300 and leads to transcriptional activation of CRE-bearing target genes (34,35). More recently, a parallel PKA response has been described for CREB. In this pathway, PKA provokes the nuclear accumulation of TORC (also known as CREB-regulated transcription co-activator, CRTC) family transcriptional coactivators, which bind to the CREB bZIP DNA-binding domain via a coiled-coil interface in their N termini (36-39). Recruitment of TORCs is not limited to CREB, because the HTLV-1 Tax protein and the AP-1 transcription factor likewise bind TORCs (40-42).
We investigated a possible role for TORC family coactivators in the PKA inducibility of the MEIS1A C-terminal transactivation function. Our results show that PKA signaling to MEIS1A is dependent on TORCs, and that overexpression of TORCs obviates the need for PKA for transcriptional activation through the MEIS1A C terminus. Importantly, MEIS1 physically interacts with TORC1 and TORC2, and TORC2 is found in the nucleus at the regulatory regions of MEIS target genes in association with MEIS1 and PBX1.
Cell Culture and Transfections-HEK293 and P19 mouse embryonal carcinoma cells were cultured in Dulbecco's modified Eagle's medium and α-minimal essential medium, respectively, supplemented with 10% fetal bovine serum, L-glutamine, and penicillin/streptomycin. To differentiate P19 cells, cells were aggregated in 100-mm diameter bacterial Petri dishes at a density of 10⁵ cells/ml and treated with 0.3 μM retinoic acid (catalogue number R2625, Sigma) for 48 h. HEK293 cells were seeded at 75 to 90% confluence in 60-mm diameter tissue culture dishes for immunoprecipitation and in 12-well plates for luciferase assay. The cells were allowed to attach overnight and then transfected with Lipofectamine 2000 reagent (Invitrogen, catalogue number 11668-019). MG132 (Merck, catalogue number 474790) was used at 10 μM for 5 h.
Immunoprecipitation and Western Blotting-Cell lysates were prepared in lysis buffer (Buffer B) supplemented with protease inhibitor mixture (catalogue number 11873580001, Roche). Following two freeze-thaw cycles, cells were spun down at 4°C for 10 min. The supernatant was incubated with the appropriate primary antibody for 5 h to overnight at 4°C, followed by a 3 h incubation at 4°C with 30 μl of a 50% slurry of Protein A-agarose (catalogue number 16-156, Upstate Biotechnology), unless the primary antibody was in the form of anti-FLAG M2 affinity agarose. The precipitates were washed three times, each with 500 μl of Buffer B. Precipitates on Protein A-agarose were eluted with 1× SDS sample buffer and boiling. Elution from anti-FLAG M2 affinity agarose was done by adding 7.5 μg of FLAG peptide (catalogue number F3290, Sigma) for 1 h at 4°C. Protein samples were separated by SDS-polyacrylamide gel electrophoresis and transferred to a 0.45-μm nitrocellulose membrane. The membranes were blocked with 5% nonfat milk powder in 0.1% Tween 20 in PBS (PBS-T) for 1 h at room temperature to reduce nonspecific background, followed by primary antibody incubation for 3 h at room temperature or overnight at 4°C. The membranes were then washed four times, 10 min each, with PBS-T, and incubated with a horseradish peroxidase-conjugated secondary antibody for 45 min at room temperature. After three 10-min PBS-T washes, bound antibodies were detected with a chemiluminescent kit (catalogue number KP-54-61-00, Mandel).
ChIP Assay-ChIP assays were performed according to the protocol from Upstate Biotechnology with minor changes, as reported previously (8,45). P19 cells induced to differentiate down the neural pathway by aggregation in the presence of retinoic acid (see above) were treated with 20 μM forskolin for 2 h, cross-linked with 1% formaldehyde for 10 min at 37°C, collected, and washed twice with ice-cold PBS containing protease inhibitor mixture. A 200-μl aliquot of SDS lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris-Cl, pH 8.0, protease inhibitor mixture) was added per 1 × 10⁶ cells and incubated on ice for 10 min. The 200-μl lysates were sonicated at 4°C with 10 sets of 10-s pulses at 30% amplitude on a Betatec Sonics Vibra Cell sonicator to an average DNA length of 200 bp and then centrifuged for 10 min at 4°C. Each 100 μl of sonicated cell supernatant was diluted 10-fold in ChIP dilution buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 16.7 mM Tris-Cl, pH 8.0, 167 mM NaCl, protease inhibitor mixture) and pre-cleared with 40 μl of a 50% slurry of salmon sperm DNA/Protein A-agarose (catalogue number 16-157, Upstate Biotechnology) for 30 min at 4°C with rotation. After an overnight incubation with anti-MEIS NT, anti-PBX1, anti-TORC2, or anti-rabbit IgG antibodies, 30 μl of salmon sperm DNA/Protein A slurry was added for 1 h at 4°C, along with a no-antibody control. To remove nonspecific DNA from the Protein A-antibody-chromatin complexes, we performed extensive washes with 500 μl of each buffer in the following sequence: once with low salt buffer (0.1% SDS, 1% Triton X-100, 2 mM EDTA, 20 mM Tris-Cl, pH 8.0, 150 mM NaCl), once with high salt buffer (0.1% SDS, 1% Triton X-100, 2 mM EDTA, 20 mM Tris-Cl, pH 8.0, 500 mM NaCl), once with lithium chloride buffer (0.25 M LiCl, 1% Nonidet P-40, 1% deoxycholate, 1 mM EDTA, 10 mM Tris-Cl, pH 8.0), and twice with TE buffer (1 mM EDTA, 10 mM Tris-Cl, pH 8.0). Each wash consisted of pipetting up and down 10 times followed by an 8-10-min incubation on a rotating platform at 4°C. Subsequently, the chromatin complexes were eluted from the antibody by incubating twice with 125 μl of elution buffer (1% SDS, 0.1 M NaHCO3) for 15 min at room temperature. Cross-links were reversed at 65°C for 4 h in the presence of 0.2 M NaCl. DNA was phenol-chloroform-extracted, ethanol-precipitated, and resuspended in 40 μl of distilled water (catalogue number 15230-147, Invitrogen). Five percent (by volume) of the immunoprecipitated DNA served as template in quantitative real-time PCR using a SYBR Green JumpStart Taq ReadyMix kit (catalogue number S1816, Sigma) with a Roche LightCycler. The ChIP primers used in this study were as follows: for the Hoxb1 ARE, 5′-CTCTGGTCCCTTCTTTCC and 5′-GGCCAGAGTTTGGCAGTC; for the Hoxb2 r4 enhancer, 5′-AGGCCTTTTTAAGGGATATGC and 5′-AGGCCTCAAAGCTGAAAATGA; for the Meis1 promoter, 5′-TTAGGACTGATTCAAGGAAAGC and 5′-GCCCCTCAGACCCAACTAC; and for gapdh, 5′-AACGACCCCTTCATTGAC and 5′-TCCACGACATACTCAGCAC. The primers for the murine Meis1 gene flank a consensus PBX·MEIS binding site having the sequence 5′-TGATTGACAG-3′.
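For illustration, the normalization to input and IgG described later can be computed as follows; this is a generic Ct-based sketch of "relative occupancy" with hypothetical Ct values and an assumed input fraction, not the authors' exact calculation:

```python
import math

# Generic percent-input / fold-over-IgG calculation for ChIP-qPCR
# (hypothetical Ct values; the input fraction is an assumption).

def percent_input(ct_ip, ct_input, input_fraction=0.05):
    """Express an IP signal as percent of input, correcting the input Ct
    for the fraction of chromatin it represents."""
    ct_input_100 = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_100 - ct_ip)

def relative_occupancy(ct_ab, ct_igg, ct_input):
    """Fold enrichment of the specific antibody over the IgG control,
    each first normalized to input."""
    return percent_input(ct_ab, ct_input) / percent_input(ct_igg, ct_input)

# Hypothetical example for one amplicon (e.g. the Hoxb2 r4 enhancer):
print(relative_occupancy(ct_ab=26.0, ct_igg=30.5, ct_input=24.0))  # ~22.6
```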
TORC1 and TORC2 Bypass the Need for PKA to Activate Transcription by MEIS1A-To study the mechanism by which the MEIS1A C terminus responds to PKA signaling, we examined the contribution of TORCs to the transcriptional activity of MEIS1A residues 335-390 fused to the GAL4 DNA-binding domain (DBD). As noted previously (8), co-transfection of expression vectors for GAL-MEIS1A-(335-390) and PKAα in HEK293 cells strongly activates transcription of a luciferase reporter driven by five tandem copies of the GAL4 DNA binding site (pML5xUAS). By contrast, the GAL-DBD fused to a mutant MEIS1A C terminus bearing alanine substitutions in the last six residues (GAL-MEIS1A-(GQWHYM)) is refractory to stimulation by PKA (Fig. 1A). To test whether TORCs mediate MEIS1A C terminus transcriptional activity, we co-transfected TORC1 and pML5xUAS with GAL-DBD, GAL-MEIS1A-(335-390), or GAL-MEIS1A-(GQWHYM) in HEK293 cells in the presence and absence of PKAα. Both the GAL-DBD and the GAL-MEIS1A-(GQWHYM) mutant, which are non-responsive to PKA signaling, were also non-responsive to TORC1 (Fig. 1A, upper panel). By contrast, TORC1 was able to bypass PKAα signaling to augment transcription of the luciferase gene by GAL-MEIS1A-(335-390) (Fig. 1A, upper panel). This ability of TORC1 to bypass PKA signaling was mimicked by TORC2 (Fig. 1A, bottom panel), which shares 32% identity with TORC1 (38). The lack of synergistic activation following co-expression of GAL-MEIS1A and TORCs suggests that all of the effects of PKA are mediated by TORCs.
Phosphorylation by SIK2 results in the sequestration of TORC family members in the cytoplasm. PKA inactivates SIK2, leading to the accumulation of TORCs in the nucleus. This suggested that the ability of overexpressed TORCs to bypass PKA for activation through MEIS1A could be due to forced accumulation of TORCs in the nucleus. To examine the subcellular localization of TORCs expressed from transfected vectors, we performed immunofluorescence experiments using FLAG-tagged TORC constructs. Endogenous TORC2 in HEK293T cells accumulates in both the nucleus and the cytoplasm (Fig. 1B). By contrast, following overexpression, TORC2 is strongly concentrated in the nucleus (Fig. 1B). No significant nuclear accumulation was observed for overexpressed TORC1, consistent with previous observations (data not shown).
To evaluate the contribution of TORC1 to MEIS1A transcriptional activity on an authentic MEIS1 target promoter, we used a luciferase reporter vector driven by the 150-bp ARE of the Hoxb1 gene. As previously observed, the ternary MEIS·PBX·HOX complex strongly activated the luciferase reporter in response to PKA signaling (Fig. 2). As it did for GAL-MEIS1A-(335-390), TORC1 by itself could confer transcriptional activation by MEIS·PBX·HOX to the same extent as PKAα (Fig. 2). This robust transcriptional activity was significantly hampered by the loss of an N-terminal 46-amino acid fragment of TORC1 that contains a conserved coiled-coil domain, and to a lesser degree by the loss of a C-terminal 203-amino acid fragment (Fig. 2). TORC2 also enhanced MEIS·PBX·HOX activation, albeit less vigorously than TORC1 (Fig. 2). Loss of the MEIS1A C terminus (MEIS1A-(Δ334-390)) impaired the transcriptional activity of the ternary complex in response to TORC1 and TORC2, suggesting that the MEIS1A C terminus could fulfill its role as a transactivation domain by recruiting TORCs (Fig. 2). Note that deletion of the MEIS1A C terminus would not be expected to eliminate reporter gene activation, because endogenous MEIS1 would recruit TORC1 at some level.
To confirm that TORC mediates MEIS1A transactivation, we performed knockdown studies using an RNA polymerase III-driven hairpin (shRNA) vector against TORC2. When transiently co-transfected with the FLAG-TORC2 expression plasmid in HEK293 cells, TORC2 shRNA plasmid inhibited luciferase transcription by GAL-MEIS1A-(335-390) to a level comparable with that without the co-expression of FLAG-TORC2 (Fig. 3A, upper panel). This abrogation of TORC2 activity correlated with the depletion of FLAG-TORC2 protein levels (Fig. 3A, bottom panel). Increasing amounts of TORC2 shRNA did not further decrease the FLAG-TORC2 activity (Fig. 3A, upper panel). This effect of TORC2 shRNA on FLAG-TORC2 activity was specific because a control shRNA vector had the opposite effect (Fig. 3A, upper and bottom panels). This mild activation by the control shRNA could point to a generalized induction of TORC-mediated activation in response to shRNA. If so, it would argue for even more robust knockdown by the TORC2 shRNA against a background of shRNA-induced activation. An RNA interference-resistant version of FLAG-TORC2 (Flag-TORC2(Wobble)) was not depleted by TORC2 shRNA, further demonstrating the specificity of TORC2 shRNA (Fig. 3A, bottom panel). Importantly, knockdown of endogenous TORC2 likewise impaired the response of the MEIS1 C terminus to PKA signaling (Fig. 3B), demonstrating that the transcriptional effect observed here is mediated by physiological levels of TORC family members.
MEIS1A Associates with the CREB Co-activators TORC1 and TORC2-To investigate whether TORC1 binds to MEIS1A in vivo, we carried out immunoprecipitation experiments using whole cell extracts from HEK293 cells transfected with MEIS1A and/or FLAG-TORC1 expression vectors. Immunoprecipitation of FLAG epitope-tagged TORC1 with anti-FLAG M2-agarose resulted in co-precipitation of MEIS1A (Fig. 4A). Western blot analysis of control M2 immunoprecipitates from cells transfected with the MEIS1A or FLAG-TORC1 expression vector alone did not yield any sign of MEIS1A co-precipitation (Fig. 4A). To further examine MEIS·TORC complex formation in vivo, we prepared TORC2 immunoprecipitates from HEK293 cells using TORC2 antiserum against the endogenous TORC2 protein (Fig. 4B, lanes 8-13, bottom panel). As shown in Fig. 4B, MEIS1A was recovered from immunoprecipitates of TORC2 prepared from cells transfected with MEIS1A expression vector (Fig. 4B, lanes 8-10, upper panel). More significantly, ~10% of endogenous MEIS1A protein was found to co-precipitate with endogenous TORC2 (Fig. 4B, lanes 11-13, upper panel). This interaction is specific because no MEIS1A protein was detected in control immunoprecipitates using an anti-GAL4 antibody (Fig. 4B, lane 14).
To further implicate the MEIS1A C terminus in interaction with TORCs, we performed co-immunoprecipitation experiments with wild-type MEIS1A and a MEIS1A mutant lacking all residues C-terminal to the homeodomain. We observed that overexpression of TORC1 destabilizes the MEIS1A mutant. This loss of mutant MEIS1A protein is the result of proteasome-mediated degradation, because the mutant protein is recovered by addition of the proteasome inhibitor MG132 to the culture medium (data not shown). In the presence of MG132, the wild-type and mutant MEIS1A proteins accumulate to similar levels, but the mutant is strongly impaired for interaction with TORCs (Fig. 4C). This result confirms the importance of the MEIS1A C terminus for interaction with TORC family members.
MEIS1, PBX1, and TORC2 Are Recruited to MEIS1 Targets-The importance of TORCs for transcriptional activation by the MEIS1A C terminus prompted us to assess the recruitment of MEIS1, TORC2, and PBX1 to known MEIS1 targets in vivo. We performed ChIP assays on neurally differentiating mouse P19 embryonal carcinoma cells either untreated or treated with forskolin for 2 h. Real-time PCR was carried out on immunoprecipitated DNA using primers spanning the Hoxb1 ARE, Hoxb2 r4 enhancer, and Meis1 promoter. Values obtained from the LightCycler quantification were normalized against the corresponding input and nonspecific IgG antibody, and expressed as relative occupancy.
Figure 4 (legend, partial) ...immunoprecipitated TORC2 with the anti-TORC2 antibody but not the control anti-GAL4 antibody. 10% input levels of MEIS1A and TORC2 are shown. C, a MEIS1A mutant lacking the C terminus fails to co-immunoprecipitate with TORC1. HEK293T cells were co-transfected with a FLAG-tagged TORC1 expression vector and a vector encoding either wild-type MEIS1A or a mutant lacking the TORC-responsive C terminus (MEIS1A-(Δ334-390)). On the second day following transfection, cells were treated with the proteasome inhibitor MG132 and cell lysates were prepared 5 h later. Immunoprecipitation of TORC1 was performed with an anti-FLAG antibody, and the presence of MEIS1 proteins in the immunoprecipitates was subsequently assessed by Western blotting.
The Hoxb1 ARE, Hoxb2 r4 enhancer, and Meis1 promoter were effectively recovered from both the immunoprecipitates of MEIS1 and PBX1 (Fig. 6). In comparison to their untreated counterparts, immunoprecipitates of forskolin-treated cells revealed a greater recruitment of MEIS1 and PBX1 to the Meis1 promoter, the Hoxb2 r4 enhancer, and the Hoxb1 ARE (Fig. 6, A-C). Strikingly, TORC2 was also recruited to the Hoxb2 r4 enhancer and Meis1 promoter under forskolin-treated conditions (Fig. 6). Forskolin-induced recruitment of TORC2 to the Hoxb1 ARE was not observed. The specificity of our ChIP analysis was validated based on "no antibody" and nonspecific IgG precipitation controls (used to normalize the values presented in Fig. 6), and on the lack of differential recruitment to the housekeeping gene gapdh (Fig. 6D). These results demonstrate that TORC2 is indeed recruited and present with MEIS1 on some MEIS1 target genes in vivo, strongly supporting physical and functional associations between these transcriptional regulators.
DISCUSSION
Our previous study established that the MEIS1A C terminus has a transcriptional activation domain that responds to PKA signaling. Supporting this work, two studies using mouse models of HOXA9-induced leukemia mapped a conserved transcriptional function to the MEIS1A C terminus required for accelerating leukemogenesis (21,22). Here we show that a mechanism by which the MEIS1A C terminus achieves its transcriptional function involves CREB coactivators TORC1 and TORC2.
TORCs Mediate PKA Signaling by Physical Association with MEIS1A-We have shown that two members of the recently identified CREB co-activator family, TORC1 and TORC2, bypass the need for PKA stimulation to induce MEIS1A transcriptional activity both in a heterologous GAL4 reporter system and an authentic MEIS1 target promoter (Figs. 1A and 2).
PKA redirects TORC proteins from the cytoplasm to the nucleus through inhibitory effects on SIK2 and related kinases. We showed that overexpression of TORC2 likewise results in strong nuclear accumulation, explaining how PKA is bypassed under these conditions (Fig. 1B). However, we were unable to show similar nuclear localization following overexpression of TORC1 (data not shown). This may be explained by low-level nuclear localization of TORC1 being sufficient for reporter activation. For example, TORC1 could be tenaciously bound to GAL-MEIS1A at UAS elements in the luciferase reporter, whereas overall nuclear levels of TORC2 are kept low.
Using truncated versions of TORC1, the inherent functions of two regions were found to contribute to the TORC1 effect on MEIS1A. The N-terminal 46 residues of TORC1 encompass part of a highly conserved coiled-coil domain required for tetramerization and association with CREB (37), the disruption of which is expected to interfere with TORC1 function. In addition, the TORC1 C-terminal 203 residues that overlap a transcriptional activation domain and were previously proposed to coordinate the assembly of the transcriptional apparatus (37) were also required for optimal MEIS1A activity (Fig. 5).
Our co-immunoprecipitation results demonstrate a direct or indirect physical interaction between MEIS1 and overexpressed TORC1, and between purely endogenous MEIS1 and TORC2 in HEK293 cells (Fig. 4). The MEIS1A interaction domain on TORC1 spans residues 1 to 290, and includes the TORC1 coiled-coil domain. TORC1 lacking this domain was unable to induce transcriptional activation via the MEIS1A C terminus (Fig. 5). We were unable to map the TORC1 interaction domain on MEIS1A in reciprocal immunoprecipitates (data not shown). This could be because our anti-MEIS1 antibody occludes the TORC1 binding domain. In the absence of this evidence, four observations argue that TORCs interact with MEIS1A via its C terminus. First, the C-terminal 56-residue fragment of MEIS1A fused to the GAL4 DNA-binding domain was sufficient to activate transcription in the presence of TORC1 and/or TORC2 (Fig. 1). Second, in GAL fusions, mutations within the MEIS1A C terminus abolished activation by TORC1 and TORC2 (Fig. 1). Third, by comparison to full-length, unfused MEIS1A, a C-terminally truncated protein (MEIS1A-(Δ334-390)) was impaired for activation of a Hoxb1 ARE reporter in response to TORC1 and TORC2 (Fig. 2). Fourth, a MEIS1A mutant lacking the C terminus does not co-immunoprecipitate with TORC1 (Fig. 4C).
In addition to MEIS1A, the C-terminally divergent MEIS1B isoform might also interact with TORCs, because it was also found to cooperate with TORC1 and TORC2 to potentiate transcription from the Hoxb1 ARE (data not shown). On the basis of previous studies and the results presented here, the TORC component of a TORC·MEIS complex may promote the assembly of a transcriptional initiation complex by recruitment of the TFIID-associated factor TAFII130 and CBP (37,46).
TORCs Are Recruited to MEIS1 Target Enhancers in Vivo-Our ChIP studies demonstrate co-occupancy of endogenous MEIS1, PBX1, and TORC2 on the Hoxb2 r4 enhancer and Meis1 promoter upon elevation of cAMP levels, confirming the biological relevance of our findings (Fig. 6). Together with our in vitro tests of HOX-and MEIS-responsive enhancers, our results imply that HOX, MEIS, PBX, and TORC will collaborate to induce a subset of genes involved in embryonic development, and normal hematopoiesis or leukemogenesis. With regard to the latter, the FLT3 promoter recruits MEIS1 in acute myeloid leukemia-initiating progenitors (21,27) and possesses conserved cAMP-responsive elements that are occupied by CREB in vivo (47). We speculate that TORC, which binds both CREB and MEIS, may provide the means by which these two transcription factors respond cooperatively to PKA signaling on the FLT3 promoter.
MEIS and TORC are evolutionarily conserved, and Drosophila TORC strongly activates transcription through cAMP-responsive reporters (38), making it likely that the regulatory interactions reported here are used across species. PKA signaling regulates embryonic patterning and morphogenesis as demonstrated by the action of hedgehog family members in dorso-ventral patterning and skeletogenesis (48). The broad expression of TORC, MEIS, and PBX family members and their control of HOX expression and function, combined with numerous roles for PKA in cellular and developmental processes suggests that the functions of these factors will converge in many such events.
Defaults, normative anchors, and the occurrence of risky and cautious shifts
Choice shifts occur when individuals advocate a risky (safe) decision when acting as part of a group even though they prefer a safe (risky) decision when acting as individuals. Even though research in psychology and economics has produced a mass of evidence on this puzzling phenomenon, there is no agreement about which mechanism produces choice shifts. In an experiment, we investigate the performance of two prominent mechanisms that have been proposed to explain the phenomenon; (i) rank-dependent utility and (ii) a desire to conform to the wishes of the majority. The evidence provides clear support for the conformity explanation.
Introduction
Many important decisions under risk are taken in small groups. Examples include the investments made in clubs, managing a joint asset portfolio (cf. Barber and Odean 2000), decisions made by a company board or a political committee and the decision of a mountaineering party whether to make the final ascent to the top. It is well-known that in such situations choice shifts may occur. A choice shift happens when individuals advocate a risky (safe) decision when acting as part of a group even though they would prefer a safe (risky) alternative decision when acting as individuals. Although there are many examples of risky and cautious shifts, there is little consensus about the behavioral mechanisms that are driving these shifts.
In this paper, we consider two prominent mechanisms that may systematically produce choice shifts: (i) rank-dependent utility and (ii) conformity. The explanation based on rank-dependent utility focuses on the extra layer of uncertainty when people decide in groups while abstracting from the social aspect of the situation. In contrast, conformity ignores the uncertainty dimension and zooms in on the social aspect that distinguishes a group decision from an individual decision. In a laboratory experiment, we investigate which of these two explanations correctly predicts when cautious and when risky shifts are observed. To the best of our knowledge, we are the first to distinguish between the two explanations.
Systematic evidence of choice shifts has been reported in psychology since the early sixties (cf. Stoner 1961; Bem et al. 1962; Pruitt 1971; Isenberg 1986).¹ The prevalence of risky shifts in the early experiments gave rise to diffusion of responsibility theory (Bem et al. 1962, 1964, 1965), which more recently inspired the formal approach of Eliaz et al. (2006) (ERR) based on rank-dependent utility. Diffusion of responsibility theory argues that individuals 'voting' for an outcome in a group might feel less responsible for the outcome than if they choose directly in an individual decision, and that this might induce them to push for riskier prospects than they would choose in an individual-decision problem. The psychological cause of this behavior is held to lie in a feeling of disappointment following a failure to realize the good outcome of a risky prospect. When choosing in a group, the argument goes, individuals account less for this potential disappointment, since their vote for a prospect matters less for the outcome than their choice in an individual-decision problem would (cf. Pruitt 1971). Starting from Nordhøy (1962) and Stoner (1968), the regular occurrence of cautious shifts in later studies, which cannot be accommodated by diffusion of responsibility theory, was seen as a strong empirical reason to doubt the validity of this approach.
¹ While originally risky shifts were observed much more frequently than cautious shifts, later studies (Stoner 1968; Nordhøy 1962 and many more) gave a more balanced picture. By today's standards, evidence for risky and cautious shifts in these early studies should be taken with a grain of salt, since they used choice-dilemma questions as stimuli that cannot be unambiguously mapped into the types of decision problems studied in formal decision theory. Studies that have used standard prospects as stimuli were conducted both by psychologists (e.g. Kogan and Zaleska 1969; Pruitt and Teger 1969; Sherman et al. 1968; Davis et al. 1968, 1974; Davis and Johnson 1972; Davis and Hinsz 1982) and economists (e.g. Shupp and Williams 2008; Casari and Zhang 2012; Colombier et al. 2009) and have provided evidence for both types of shifts.
ERR formalize the intuition behind diffusion of responsibility theory and show how a generalized model based on rank-dependent utility (RDU) preferences can explain risky and cautious shifts. They achieve this by taking as the object of evaluation the compound lotteries an individual expects to result from the group decision conditional on her own vote. These compound lotteries account both for the exogenous risk emanating from the random processes described by the prospects and for the endogenous risk deriving from other group members' influence on the group decision. For example, in a binary group decision between a risky and a safe prospect, a person who votes for the safe prospect effectively chooses a lottery yielding the safe prospect with the probability that the group choice becomes safe given her safe vote, and the risky prospect with the probability that the group choice becomes risky given her safe vote. Classical diffusion of responsibility theory ignores this additional layer of uncertainty and implicitly assumes that individuals base their choice on the primitive lotteries presented to the group, even where they do not have full influence over the outcome of the group choice.
For an exemplary group-decision problem, we demonstrate that ERR's RDU model accommodates cautious shifts. To see this, suppose an individual expects the safe choice to be implemented with certainty if her decision is not pivotal. This would be the case, for instance, if the decision rule is unanimity and the fallback outcome under disagreement is the safe prospect. Then, voting for risky in the group decision generates a compound lottery in which the probability of failing to receive the good outcome of the risky prospect is higher than in the original risky prospect. Now assume the decision maker maximizes RDU preferences with a strictly convex (gain-rank) weighting function, such that she overweights the probabilities attached to the bad outcomes of a lottery relative to the good outcomes. Then she might well prefer the risky choice when deciding on her own and vote for the safe choice when deciding in the group. The resulting cautious shift in a group decision is in this sense similar to the choice pattern generated in the Allais paradox; both patterns point to a violation of expected-utility theory's independence axiom. ERR show how rank-dependent utility may systematically produce cautious and risky shifts. While their results imply that assuming a certain type of RDU preferences is sufficient for choice shifts in group decisions, similar results can be achieved using other types of preferences outside the RDU class. Recently, Dillenberger and Raymond (2016) have generalized ERR's approach. They show that the choice-shift pattern predicted by ERR is exhibited by a larger class of preferences that includes ERR's RDU preferences, and they provide a set of axioms that is necessary and sufficient for preferences to cause choice shifts in group decisions.
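To make the example concrete, the sketch below evaluates the relevant lotteries under RDU with a strictly convex weighting function; the payoffs, probabilities and functional forms are illustrative choices of ours (and we reduce the compound lottery to a single-stage lottery), not parameters taken from ERR:

```python
# Illustrative RDU cautious shift (our example; reduction of compound
# lotteries and the functional forms below are assumptions).
w = lambda q: q ** 2          # strictly convex weighting function (pessimism)
u = lambda x: x               # linear utility, for simplicity

H, L, s = 100.0, 0.0, 50.0    # risky pays H w.p. p, else L; safe pays s
p = 0.8                       # probability of the good outcome in the risky prospect
q = 0.5                       # probability the rest of the group makes risky prevail

def rdu(outcomes):
    """RDU of a lottery given as [(outcome, prob), ...] sorted best-first."""
    value, cum = 0.0, 0.0
    for x, prob in outcomes:
        value += (w(cum + prob) - w(cum)) * u(x)
        cum += prob
    return value

alone_risky = rdu([(H, p), (L, 1 - p)])                       # 64.0 > 50 -> choose risky
# Voting risky under a safe default yields H w.p. q*p, s w.p. 1-q, L w.p. q*(1-p)
vote_risky = rdu([(H, q * p), (s, 1 - q), (L, q * (1 - p))])  # 48.5 < 50 -> vote safe
print(alone_risky, vote_risky)  # the individual takes the risk alone, but shifts to safe in the group
```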
In contrast, a competing explanation for choice shifts that has been developed in psychology assumes that people have a taste for conformity. Asch's line-judgment experiments first showed the profound effect that social pressure can have on individuals' reported judgments. This taste for conformity can arise for a number of reasons that will typically lead to different behavioral predictions. In a survey, Cialdini and Goldstein (2004) emphasize two goals that people may implicitly or explicitly pursue when they respond to social pressure. First, people may pursue an accuracy goal. That is, when they are unsure of the appropriate choice in a social situation, they may revise their intended choice in the direction of the majority of the group when they are informed of the opinions or choices of other group members. Second, people may care about the outcomes for the others in their group and about how the others judge them. Thus, an affiliation goal may also encourage them to conform to the choices and opinions of others in their group. Nordhøy (1962), Brown (1965), and Stoner (1968) first explained how such social pressure may cause risky and cautious choice shifts. They argue that in a group decision, individual votes get shifted towards the choice that most group members would have preferred in an individual decision. In this approach, choice shifts are conceptualized as a drift towards the ex-ante majority preference. Some studies advocate the affiliation goal that people may pursue when they give in to social pressure (cf. Brown 1965). Other studies favor the accuracy goal (cf. Brown 1965; St. Jean 1970; Stoner 1968; Pruitt and Teger 1967; Vinokur 1971).
Notice that the previously collected evidence in favor of the conformity mechanism does not contradict ERR's RDU theory. ERR predict that if the default in a group is the cautious decision, choice shifts will tend to go in the cautious direction. Likewise, if the default in a group is the risky decision, choice shifts will tend to go in the risky direction. In a group process, the majority position may easily serve as the default which will be implemented if an individual's vote is not pivotal. Thus, the two mechanisms may be quite similar in terms of behavioral patterns that are expected in previous designs.
Our experimental design is the first to allow for a direct comparison of the rank-dependent utility and the conformity explanations for choice shifts. We will not explicitly examine the performance of classical diffusion of responsibility theory but only consider the generalization provided by ERR; ERR's model (and Dillenberger and Raymond's 2016 extension) is the only version of this mechanism to date that accommodates risky and cautious shifts, and since we observe both types of shift in equal proportions (cf. Section 5), any other variant of diffusion of responsibility theory can already be ruled out upon superficial inspection of the data. We use a simplified setup inspired by ERR's model of group processes. This amounts to having our subjects choose between a risky and a safe gamble that, conditional on the treatment, we augment with different layers of a real group decision. One treatment gives ERR's model its best shot. In the group decision of this treatment, subjects are informed of the risky or safe default that will be implemented with exogenous probability, while subjects are not distracted by information about the preferences of their peers and while they also know that their decisions have no consequences for the others. There is also a treatment that gives the conformity approach its best shot. In this treatment, there is no default that causes exogenous uncertainty, while subjects are informed of the preferences of their group members and know that their decisions have payoff consequences for their group members. In between these two extremes, we have some treatments that allow us to systematically study the effect of receiving information regarding the majority preferences, the effect of whether or not the individual decision has payoff externalities for the other group members, and the effect of the presence of a default. Thus, a novel feature of our design is that we control the influence an individual's choice has on her final outcome independently of social aspects of the choice situation, such as the degree of responsibility for others' outcomes and the extent to which subjects learn about others' preferences.
Group discussion is not essential for either of the two mechanisms. To distinguish between the two mechanisms in a clean setting, we do not allow groups in the experiment to explicitly discuss their attitudes toward risk for the specific gambles that they face. To create a sense of being in a group, our subjects briefly get to know each other before they are informed of the risky decisions that they make. In agreement with previous work, we find that cautious and risky shifts regularly occur. Our results lend clear support to the conformity mechanism: individuals display a strong tendency to adapt their decisions to the majority preferences in their group. This pattern is strongest when a subject's decision has payoff consequences for other group members - suggesting that choice shifts are partly driven by the activation of the group-affiliation goal of the conformity mechanism, or, in economic terms, by other-regarding preferences. Although shifts are common in both directions, we do find an asymmetry in the occurrence of choice shifts when decisions have payoff externalities. A choice shift is particularly likely when a subject exhibits a preference for the risky option when choosing in isolation and shifts to the cautious option once she is informed that the majority preferred the cautious gamble. We find only limited support for ERR's approach based on rank-dependent utility. Even in the treatment that gives the theory its best shot, the observed pattern of choice shifts does not agree particularly well with their mechanism; shifts are somewhat more often in the direction of the default (as ERR predict), but the difference is insignificant.
The only other study that sheds light on the empirical validity of ERR's model is an (unpublished) paper by Gurdal and Miller (2010). In their implementation of the ERR model, the group decision is the risky decision unless all group members vote for the cautious decision. They never provide subjects with information regarding the preferences of the majority. They find that subjects in the group decision tend to shift in the cautious direction, even though they should shift in the risky direction if the ERR model drives the choice shift. They favor the explanation that in groups people are affected by a social norm to behave cautiously. An alternative explanation is that subjects implicitly respected other group members' preferences. That is, with unanimous decision making and a risky default, a subject's vote only matters if all the others vote cautiously. Conditional on being pivotal, a voter would know that he imposed the risky lottery on the others who voted cautiously, and therefore a desire for conformity with the group preference might make subjects behave more cautiously in the group decision. Gurdal and Miller (2010) do not have observations of group decisions where the default is the cautious decision. Therefore, it is not clear whether subjects become generally more cautious in groups, as Gurdal and Miller (2010) suggest, or whether they move in the direction of the supposed majority preference of the group. Our setup has the advantage of clearly separating the effects of defaults from the effects of information on majority preferences among group members. Possibly as a consequence of this, we find much clearer evidence of choice shifts than Gurdal and Miller (2010). Contrary to Gurdal and Miller (2010), we also find a sizable number of risky shifts, which should not occur if people generally become more cautious when acting in groups.
In this paper we focus on the extent to which rank-dependent utility and conformity contribute to the emergence of choice shifts. There may also be other mechanisms that cause choice shifts. When there are payoff externalities of people's decisions, it may be that individuals change their decisions because they feel responsible for others' outcomes. If such responsibility matters, then in the presence of payoff externalities people may shift their decisions even if they have no information about the majority preference. Charness (2000) reports an effect of social responsibility in a labor market experiment. Charness and Jackson (2009) find that subjects are somewhat more likely to choose the safe option in a stag-hunt game when they are responsible for the payoff of another group member. More closely related are the recent papers by Pahlke et al. (2015) and Vieider et al. (2016) that investigate the role of social responsibility in individual decision making. Social responsibility sometimes leads to less and sometimes to more risk taking. Vieider et al. (2016) find that probability weighting becomes more extreme in the presence of social responsibility. The remainder of this paper is structured as follows: Section 2 provides the model that we use and explains how rank-dependent utility may produce choice shifts.
Section 3 describes and motivates the experimental design we used. In Section 4, we show how the design allows us to derive predictions that distinguish the two candidate mechanisms for choice shifts. Section 5 presents the results of our experiment. Section 6 concludes.
Theoretical mechanisms
In this section, we introduce ERR's model of group decisions and explain how it can produce cautious and risky shifts where diffusion of responsibility theory could only yield risky shifts. ERR argue that, from the participant's point of view, we can decompose any group decision on a binary choice set into an individual decision and a random process as follows: let the choice set C = {R, S} be over two finite lotteries, risky (R) and safe (S). To capture the group decision, ERR introduce a pair of probabilities g = (a, b), where a ∈ (0, 1) is the probability that an individual's vote will be pivotal in the group decision and b ∈ [0, 1] is the probability that the group will choose S, conditional on the individual in question not being pivotal. For a given reduced-form group-decision problem (g, C), R̄ and S̄ denote the compound lotteries that result if the individual votes for R or S, respectively.
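Written out in symbols (a sketch consistent with the definitions above; the explicit mixture notation is ours), the compound lotteries are

R̄ = (a + (1 − a)(1 − b)) ∘ R ⊕ (1 − a)b ∘ S,
S̄ = (1 − a)(1 − b) ∘ R ⊕ (a + (1 − a)b) ∘ S,

where p ∘ L denotes receiving lottery L with probability p and ⊕ denotes probabilistic mixing. A vote for R raises the probability of R only by the pivotality probability a; the rest of the uncertainty is determined by the group.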
Given the reduced-form group decision, the prediction of expected-utility theory is that the preference between R̄ and S̄ is the same as that between R and S. The deciding individual should only care about the characteristics of the outcome that she can influence by her decision.
If, however, individuals maximize RDU preferences with a strictly convex (gain-rank) weighting function, ERR show that choice shifts will systematically occur. This variant of RDU preferences has been referred to as pessimistic (Wakker 2001a, b), since individuals place excessive decision weights on bad outcomes as compared to the probability weight applied by an expected-utility maximizer. Clearly, this can capture the idea of classical diffusion of responsibility theory, where individuals seek to avoid the disappointment that ensues when they fail to realize the good outcome of a risky prospect.
The formal result for general binary choices between a prospect R and a degenerate prospect S is given by ERR's Theorem 1, which is restated here without proof:

Theorem 1 Under rank-dependent utility, the following are equivalent:

1. The preference ≿_i can be represented by an RDU functional with a strictly convex (gain-rank) weighting function w.
2. For all prospects R and degenerate prospects S and arbitrary a ∈ (0, 1): if R ∼_i S, there exists b* ∈ (0, 1) such that R̄ ≻_i S̄ whenever b < b*, R̄ ∼_i S̄ at b = b*, and S̄ ≻_i R̄ whenever b > b*.
We illustrate the theory in a numerical example.
Example 1 (Choice shifts with pessimistic preferences)
For this example, we assume a preference ≿_i represented by Bernoulli-utility function u = id and strictly convex (gain-rank) weighting function w(p) = p². Let us consider the prospects S = 5 and R = 20_0.5 0, so that RDU(S) = 5 = 20·w(0.5) = RDU(R) and hence R ∼_i S. Suppose i's decision is pivotal with probability a = 0.4 and, if it is not, the group will choose S with probability b. The resulting compound lotteries are thus

R̄ = (20 with probability 0.5(1 − 0.6b); 5 with probability 0.6b; 0 otherwise),
S̄ = (20 with probability 0.3(1 − b); 5 with probability 0.4 + 0.6b; 0 otherwise).

In what follows, we write r_y for the gain rank of outcome y, i.e. the probability of receiving an outcome strictly better than y. For example, in prospect R̄ the gain rank of the outcome 5 is r_5 = 0.5(1 − 0.6b). Now let us determine b* ∈ (0, 1) as described in statement (2) above. We start by calculating the utilities of the compound lotteries:

RDU(R̄) = 20·w(0.5(1 − 0.6b)) + 5·[w(0.5(1 − 0.6b) + 0.6b) − w(0.5(1 − 0.6b))] = 5 − 3b + 1.8b²,
RDU(S̄) = 20·w(0.3(1 − b)) + 5·[w(0.3(1 − b) + 0.4 + 0.6b) − w(0.3(1 − b))] = 3.8 − 0.6b + 1.8b².

Equating the two expressions yields b* = 0.5: the individual votes for R whenever b < 0.5 and for S whenever b > 0.5, although she is indifferent between R and S in the individual decision. While the indifference condition in part 2 of Theorem 1 is hard to establish empirically, it is important to realize that it represents only a sufficient (but not a necessary) condition for the occurrence of choice shifts at some b ∈ (0, 1). Notably, it is easy to find prospects R, S such that a choice shift occurs for b* ∈ (0, 1) while dropping the indifference condition. A follow-up to the above example illustrates this.
Example 2 (Choice shifts without prior indifference)
We keep the setup from above, except for setting R = 20_0.6 0. Clearly, RDU(R) = 20·w(0.6) = 7.2 > 5 = RDU(S), such that R ≻_i S. For g_b = (0.4, b) as before, the resulting compound lotteries have utilities

RDU(R̄) = 7.2 − 5.04b + 2.232b²,
RDU(S̄) = 4.832 − 2.064b + 2.232b²,

so that S̄ ≻_i R̄ for all b > 2.368/2.976 ≈ 0.8: for such b a cautious shift occurs even though R ≻_i S in the individual decision. In fact, it can be shown that for every set of prospects R′, S′ such that R′ ∼_i S′, we can find prospects R, S "close" to R′, S′ such that a choice shift occurs at b ∈ (0, 1) without R ∼_i S. The corollary below formalizes this claim; a numerical check of both examples follows first.
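The threshold values in Examples 1 and 2 are easy to verify numerically. The following sketch is our own illustration (the helper names rdu and compound are ours, not ERR's):

# Numerical check of Examples 1 and 2 (illustrative helper code, not from ERR).
# Gain-rank RDU: RDU(L) = sum over outcomes y of [w(r_y + p_y) - w(r_y)] * u(y),
# where r_y is the probability of outcomes strictly better than y and u = id.

def rdu(lottery, w):
    """lottery: list of (outcome, probability) pairs."""
    total, rank = 0.0, 0.0
    for y, p in sorted(lottery, key=lambda t: -t[0]):  # best outcomes first
        total += (w(rank + p) - w(rank)) * y
        rank += p
    return total

def compound(vote_risky, a, b, R, S):
    """Compound lottery induced by a vote in the reduced form (g, C):
    pivotal with probability a; otherwise S with probability b, R with 1 - b."""
    p_R = a + (1 - a) * (1 - b) if vote_risky else (1 - a) * (1 - b)
    return [(y, p * p_R) for y, p in R] + [(y, p * (1 - p_R)) for y, p in S]

w = lambda p: p ** 2  # strictly convex ("pessimistic") weighting function
S = [(5, 1.0)]
examples = {"Example 1": [(20, 0.5), (0, 0.5)], "Example 2": [(20, 0.6), (0, 0.4)]}
for label, R in examples.items():
    for b in (0.4, 0.5, 0.6, 0.79, 0.81):
        r_bar = rdu(compound(True, 0.4, b, R, S), w)
        s_bar = rdu(compound(False, 0.4, b, R, S), w)
        print(label, f"b = {b:.2f}: vote", "R" if r_bar > s_bar else "S")

Running this, the printed vote in Example 1 flips from R to S between b = 0.4 and b = 0.6 (the threshold is b* = 0.5), while in Example 2 it flips between b = 0.79 and b = 0.81, reproducing the cautious shift without prior indifference.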
Corollary 1
Assume ≿_i satisfies the conditions of Theorem 1. Then we can find prospects R, S and b ∈ (0, 1) such that R ≻_i S (S ≻_i R) and yet S̄ ≻_i R̄ (R̄ ≻_i S̄).

A remaining point of concern is ERR's assumption of pessimistic (i.e. strictly convex) RDU preferences. As follows from the proof of their theorem, both of the above statements are equivalent to the assumption that a studied individual's preference ≿_i can be represented by an RDU functional with strictly convex gain-rank weighting function w (probability weighting). Empirical research related to RDU models, starting from Baratta and Preston (1948), has shown that the most common (but far from unique) finding is an inverse-S-shaped (gain-rank) weighting function and not a strictly convex one. This variant of RDU preferences departs from pessimism in over-weighting favorable events occurring with small probability. The implication is that ERR's Theorem 1 and the implied comparative statics might hold only in a local version (insofar as the strictly convex regions of the weighting function are decisive for choices). The following numerical example illustrates this.
Example 3 (Choice shifts with inverse-S-shaped preferences)
For this example, we assume a preference ≿_i represented by Bernoulli-utility function u = id and (gain-rank) weighting function w(p) = exp(−1.0467·(−ln p)^0.65). This is the weighting function originally introduced by Prelec (1998) with parameter vector (α, β) = (0.65, 1.0467). The reader may verify that this specification yields an inverse-S-shaped transformation function with inflection point exp(−1) ≈ 0.37 and fixed point close to 0.32. Wakker (2010, p. 260) argues that this specification captures the empirically-observed inverse-S-shaped weighting functions rather well.
Let us consider the prospects S1 = 7, S2 = 8 and R1 = 35_0.2 0, R2 = 16_0.6 0. Both C1 = {R1, S1} and C2 = {R2, S2} are choice sets that we used in the experiment. The former choice set is problematic for ERR's model when we assume preferences ≿_i. Indeed, we will show that, given ≿_i, a choice shift can occur with C2 but not with C1.
It is easily derived that RDU(R1) = 35·w(0.2) ≈ 8.4 > 7 = RDU(S1) and RDU(R2) = 16·w(0.6) ≈ 8.1 > 8 = RDU(S2), so the risky prospect is individually preferred in both choice sets. Consider now a safe default, i.e. g = (0.4, 1), so that voting S yields S with certainty. For C1 we obtain RDU(R̄1) ≈ 8.1 > 7 = RDU(S̄1): the individual still votes for the risky prospect and no shift occurs. For C2, however, RDU(R̄2) ≈ 7.9 < 8 = RDU(S̄2), so a cautious shift occurs. As the example shows, when global strict convexity of w is not satisfied, ERR's Theorem 1 (supplemented by Corollary 1) breaks down as a global result, while we may retain it for specific choice sets. For the inverse-S-shaped weighting function considered here, this will be the case for prospects where the upside probability is sufficiently high to make the non-convex region of w irrelevant for the result. We will explore this issue by studying two types of decision problems (cf. Section 3.1) involving either low or high upside probabilities. Comparing the results for these two types of prospects, we study the extent to which ERR's restrictive assumption of pessimistic preferences is problematic. Dillenberger and Raymond (2016) further explore the preference foundations of choice shifts across a range of RDU models and non-RDU models, including Kőszegi and Rabin's (2007) reference-dependent preferences. Most notably, they provide necessary and sufficient conditions for preferences to exhibit the pattern from statement 2 of ERR's theorem (where ERR's theorem only gives a sufficient condition). In our experimental test of ERR's model we attempt to induce this pattern in the lab for the two extreme cases where b ∈ {0, 1} (cf. Section 4). Our results are therefore equally fit to shed light on the more general types of preferences examined in this more recent contribution.
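The computations behind Example 3 can be reproduced in the same way. The sketch below (again our own illustration, assuming the Prelec specification with (α, β) = (0.65, 1.0467) given above) evaluates the individual and the compound problems for a safe default, b = 1:

import math

def w_prelec(p, alpha=0.65, beta=1.0467):
    """Prelec (1998) weighting function; inverse-S-shaped for alpha < 1."""
    return 0.0 if p <= 0 else math.exp(-beta * (-math.log(p)) ** alpha)

def rdu(lottery, w):
    """Gain-rank RDU with u = id; lottery is a list of (outcome, prob) pairs."""
    total, rank = 0.0, 0.0
    for y, p in sorted(lottery, key=lambda t: -t[0]):
        total += (w(rank + p) - w(rank)) * y
        rank += p
    return total

a = 0.4  # with a safe default (b = 1), voting S yields S with certainty
for label, s, R in [("C1", 7, [(35, 0.2), (0, 0.8)]),
                    ("C2", 8, [(16, 0.6), (0, 0.4)])]:
    individual = rdu(R, w_prelec)
    # voting R: R is played with probability a, otherwise the safe amount s
    r_bar = rdu([(y, p * a) for y, p in R] + [(s, 1 - a)], w_prelec)
    print(label, f"RDU(R) = {individual:.2f} vs u(S) = {s};",
          f"RDU(Rbar) = {r_bar:.2f} vs u(Sbar) = {s}")

The output shows RDU(R) ≈ 8.4 > 7 and RDU(R̄) ≈ 8.1 > 7 for C1 (no shift), but RDU(R) ≈ 8.1 > 8 and RDU(R̄) ≈ 7.9 < 8 for C2 (a cautious shift), confirming the claim in the example.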
Experiment
The computerized experiment was run at CREED, the Economics laboratory of the University of Amsterdam. Subjects read the instructions of the experiment at their own pace on screen (see Online Appendix C). They had to correctly answer some control questions testing their understanding before they could continue with the experiment. Most sessions were run with 20 subjects. We ran 15 sessions with a total of 280 subjects. Subjects received a 5 euro show-up fee and earned on average an additional 8.5 euro with their choices (minimum 0 euro, maximum 45 euro). Each subject participated in only one of the 5 treatments. Each session of each treatment was divided into three stages: a preliminary communication stage, the individual-decision part of the experiment (part 1), and the group-decision part of the experiment (part 2). The instructions were communicated in parts; subjects only received the instructions for a stage after the previous stage had been completed. There was no difference across treatments in the experimental design of the first two stages; the treatments only differ in how the group-decision part was shaped.
At the start of the session, subjects were randomly assigned to workplaces in the laboratory. Each subject was assigned to a group of 5 individuals seated near each other. We chose a group size of 5 instead of 3 to have a sufficient chance of the preference heterogeneity needed to test the theories. Every subject was informed that at the end of the experiment only one of the choice problems from part 1 and part 2 would be randomly selected and used for payment. In addition, subjects were made aware that the payoff for a part-2 problem might be affected by other group members' decisions and that their decision on a part-2 problem might similarly affect the payoffs of other group members. There were visual barriers between tables and verbal communication during the session was not allowed. In the first stage, the experimenter invited subjects to stand up in order to freely look at the other members of their group over the barriers. Thereafter, the members of each group were invited to have an unstructured 3-minute conversation via chatboxes. We added this feature to the experiment to emphasize that subjects were part of an actual group. When subjects' decisions matter for other group members' payoffs, we think it is natural and important that they know who will actually be affected.
In the individual-decision part of the experiment, subjects were presented with 6 binary choice sets, each containing a "safe" prospect S and a "risky" alternative prospect R. The choice sets we used are shown in Table 1. We use Savage's (1954) notational conventions; that is, X_p 0 is the prospect that yields the amount X with probability p and 0 otherwise. There were two classes of choice sets. In the first class, the risky prospect featured a low probability of winning a high amount. In the second class, the risky prospect featured a high probability of winning a moderate amount. Choice sets were presented one after another and the order of presentation was the same for all subjects. For each choice set, a subject made an irrevocable choice before she continued with the next choice set. The goal of this part was to elicit subjects' individual preferences over the relevant prospects.
The group-decision part differed across treatments along three dimensions: 1. whether or not a default was present; 2. whether or not information about the group's majority choice in part 1 was provided; and 3. whether or not a subject's decision had payoff consequences for the others in the group. The main features and names of the treatments are summarized in Table 2.
In the three treatments with a default, subjects were presented with 12 choice problems that were based on the ones from part 1 in the following way. For a given choice set {R, S} from part 1, a default outcome D ∈ {R, S} was pre-selected. Subjects would receive the default outcome with 60% probability, no matter what they chose. This corresponds to ERR's non-pivotal case. Subjects were asked to choose to stay with the default or to deviate from the default. A choice to deviate would be implemented with the residual 40% probability and corresponds to ERR's pivotal case. For each problem, subjects were informed about the default prior to making a decision. Each choice problem of part 1 was offered twice, once with the risky prospect as default and once with the safe prospect as default (cf. Fig. 1 for decision trees illustrating our part-2 problems with default). In the treatments without a default, subjects were presented with 6 choice problems that in terms of prospects were the same as the ones of part 1.
There was one treatment where we did not provide subjects with information about the preferences of the majority in their group: Def-NoMaj-NoExt. In this treatment, subjects' decisions could not affect the payoff of other group members; the 'group decisions' in Def-NoMaj-NoExt are thus actually individual-decision problems with a more complicated decision tree that should bring the mechanism advocated by ERR into play (cf. Section 4 for details). This treatment gave the best shot to the theory of ERR, because it excluded potentially confounding motivations such as a desire to conform to the majority preference or a desire to provide others with the prospect that they preferred. In all 4 other treatments, we did provide subjects with information about the majority choice in their group for the corresponding part-1 problem. Specifically, before they made their choice, subjects were either informed that the default choice coincided with the majority choice of their group for the corresponding part-1 problem or, if this was not the case, that it coincided with the minority choice. In the treatments that provided information about the majority choice, we had a full 2x2 design in which we systematically varied the presence of the default and the presence of the payoff externality on the other group members.

In the treatments with a default, the order of presentation was randomized for each group as follows. We randomly picked one of the two classes of problems from part 1, each containing 3 choice sets. We then randomly determined a default for these three choice sets and presented the three resulting part-2 problems to the subjects. All problems from the other class were then presented to subjects with randomly fixed defaults. Next, we presented the three problems with which they started, but now with the other default, and we ended with the three problems from the second block, also with the other default. For treatments without a default, we randomized across groups the order in which the prospect classes were presented.

In this paper we focus on the treatments that are most interesting for distinguishing between the roles that conformity and ERR's theory play in explaining choice shifts. The left-out treatments would have been helpful to answer some adjacent questions. For instance, in the treatment NoDef-NoMaj-NoExt the decision problems in stage 1 and stage 2 would be identical, so conducting this treatment would tell us to what degree subjects randomly reverse their preferences between prospects. The other two left-out treatments we did not run because the absence of majority-choice information implies that conformity does not predict anything. Still, these treatments may be interesting to shed light on whether responsibility plays an independent role in explaining choice shifts. We come back to this possibility in Section 6.
In the treatments without payoff externality, subjects knew that their own decision only affected their own payoff. Here, the choices that subjects made in part 2 were never communicated to other subjects. Subjects were made aware of these facts at the start of part 2. In the two treatments with a payoff externality, one individual's choice became the choice for all members in the group. That is, if the payoff-relevant problem ex post turned out to belong to part 2, one group member's decision was selected at random to determine the payoff for all members. Each member's decision had an equal chance of being selected to matter on a given part-2 problem. If a subject's decision was implemented for the group, the identity and the decision of the subject were revealed to everyone in the group. Subjects were made aware of these facts in the instructions for part 2.
Gambles & payment
Each problem had an equal probability of being selected for payment and we selected the same problem for each group. For all problems with a pivotal player, one individual per group was selected to be pivotal with all members having equal probability. For all problems with default we made one random draw per group to decide if subjects' decisions would count or if the default would be implemented. Lastly, we played out the risky lottery one time for each group. All random draws used to determine the payments were computerized and visualized on screen for the concerned subjects.
The incentive compatibility of a randomized incentive scheme may be questioned if subjects choose in accordance with rank-dependent utility as advocated by ERR (cf. Holt 1986). Since rank-dependent utility violates Savage's sure-thing principle, the extent to which subjects reduce or do not reduce compound lotteries matters for behavior. This does not mean that RDU preferences necessarily imply that randomized incentive schemes are not incentive compatible (cf. Cohen et al. 1987; Bardsley et al. 2009). Specifically, the common assumption that the behavior of a subject is independent across different decision problems will continue to hold with RDU preferences under the so-called isolation assumption (Kahneman and Tversky 1979). That is, we must assume that subjects consider the decision trees of all decision problems in the experiment in isolation from each other. For standard applications of the randomized incentive scheme in which one randomly-selected decision per player becomes payoff-relevant, there is an extensive empirical literature on this issue, starting with Cohen et al. (1987), Starmer and Sugden (1991), and Cubitt et al. (1998). Most studies, including these seminal contributions, report evidence in favor of the isolation assumption (cf. Hey and Lee 2005a, b; Laury 2005; Lee 2008), although the occasional negative result has also been presented (cf. Cox et al. 2014; Harrison and Swarthout 2014). The consensus in the field is that experimental subjects choose in accordance with the isolation assumption, such that the use of a standard randomized incentive scheme is unproblematic for studies of RDU models. More importantly, our subjects received the choice problems one at a time, without information regarding the future choice problems and without the possibility of revising previous choices. Therefore, it was impossible for them to integrate all choices into one big decision problem; the design made it practically impossible to deviate from the isolation assumption.
Our treatments Def-Maj-Ext and NoDef-Maj-Ext introduce another layer of randomization. A pivotal subject is selected at random and that subject's decision is implemented for the whole group. This incentive system is very close to the between-subject randomized incentive scheme where only some randomly selected subjects get paid for their choices in a given experimental session. This method has also been frequently used across a range of different setups (cf. Cohen et al. 1987; Camerer and Ho 1994; Abdellaoui et al. 2008, 2011, 2013a; Andersen et al. 2008; Burks et al. 2009; Toubia et al. 2012). In the studies where an explicit comparison to other incentive schemes was made, no differences in behavior have been found for simple choice tasks as used in our experiment (cf. Tversky and Kahneman 1981; Bolle 1990; Cubitt et al. 1998; Armantier 2006; Schunk and Betsch 2006; Harrison et al. 2007; von Gaudecker et al. 2011; Baltussen et al. 2012). Charness et al. (2016) provide an in-depth methodological discussion and offer support for paying only a subset of the decisions.
When designing the prospects (reported in Table 1), we had two goals in mind. First, we wanted to construct choice sets where subjects would be close to indifferent between the two prospects. The reason is that both theoretical mechanisms that we test in our paper predict that choice shifts occur primarily in situations of near-indifference. When one prospect is much better than the other, all group members may agree on the same prospect when choosing individually, which preempts a potential choice shift of the minority to the majority. When people prefer to conform to the majority, there is larger potential for choice shifts if the minority is larger. At the same time, rank-dependent utility in combination with pessimistic probability weights will only yield choice shifts when a decision maker is not too far from indifference when choosing individually.
Second, to shed more light on ERR's theory, we wanted to have observations of behavior in choice sets where the risky prospect has a large probability of a good outcome as well as observations of behavior where the risky prospect gives a small probability of a very good outcome. For lotteries featuring a rather small probability of winning a high amount relative to the prospect's expected value, subjects in previous studies have displayed a tendency to overweight these small probabilities attached to good outcomes (cf. Kunreuther and Pauly 2004; Harbaugh et al. 2010). This is essentially a manifestation of the inverse-S-shaped weighting functions that are often reported in the literature. The theory of ERR assumes a pessimistic probability weighting function and may fail if subjects overweight small probabilities of the good outcome. It may be that the conditions required by ERR's theory are fulfilled for risky prospects with a large probability of the good outcome but not for risky prospects with a small probability of the good outcome. Our design allows us to investigate whether ERR's theory performs better for choice sets that include a high-probability risky prospect.
Predictions
Our design in part 2 implements a special case of ERR's reduced-form model (g, C). To see this, take a = 0.4 and b ∈ {0, 1} to generate the decision trees shown in Fig. 1 that were implemented in our treatments with a default. Here the studied individual's decision is pivotal with probability 0.4 and otherwise (i.e. with probability 0.6) the default choice of either S or R is implemented with certainty. The left side of the figure displays the decision tree of our part-2 problems with a safe default; the right side presents the part-2 problems with a risky default.
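Instantiating the compound lotteries from Section 2 for the two defaults (a restatement in our notation, with a = 0.4):

Risky default (b = 0): R̄ = R and S̄ = 0.4 ∘ S ⊕ 0.6 ∘ R.
Safe default (b = 1): S̄ = S and R̄ = 0.4 ∘ R ⊕ 0.6 ∘ S.

Only the vote against the default carries residual uncertainty about which primitive prospect is played; the vote for the default implements it with certainty.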
While the resulting decision problems are essentially individual-decision problems, a decision maker in ERR's framework would perceive them as equivalent to a group decision under a unanimous decision rule with the default as the disagreement outcome. The equivalence is easily seen as follows: assume that a group chooses from a binary set of prospects C = {R, S} (as described in Section 2) by unanimity rule and that a predetermined default outcome D ∈ {R, S} is implemented if there is no unanimous vote. Then the reduced-form model of the decision situation from the perspective of a given group member looks as follows: either there is unanimous agreement to depart from D among the other group members and the individual's vote is decisive for the outcome, or there is no unanimous agreement among the other group members, in which case the outcome will be D irrespective of what the studied individual does. So unanimous decision making maps into the special case of ERR's model where either the individual's decision is pivotal or otherwise one of the primitive prospects R and S (the default) is implemented with certainty. We thus have the reduced-form group decisions g = (0.4, 0) and g = (0.4, 1). ERR's Theorem 1 predicts a clear pattern in the choice shifts that may be observed for these problems:
• If the default is R (such that b = 0), then any observed choice shift will be a shift from S to R (risky shift).
• If the default is S (such that b = 1), then any observed choice shift will be a shift from R to S (cautious shift).
So the clear-cut prediction of ERR's model is that choice shifts in our experiment are possible only in the treatments with default and that they will go in the direction of the default choice D.
The conformity mechanism (cf. Section 1) predicts a different pattern. In this approach, a clear pattern of choice shifts is possible in the treatments where individuals are provided with information about the majority choice among members of their group for the corresponding part-1 problem. The information about the majority choice may serve as an anchor that makes subjects in the minority change their mind. The conformity mechanism then makes the following simple prediction:

• If the majority choice on a part-1 problem was R, then any observed choice shifts on the corresponding part-2 problems should go from S to R (risky shift).
• If the majority choice on a part-1 problem was S, then any observed choice shifts on the corresponding part-2 problems should go from R to S (cautious shift).
In the informational or accuracy-driven version of the conformity mechanism, according to which choice shifts are driven by a conformist revision of individual preferences, shifts should always go towards the part-1 majority preference, independent of whether choices have a payoff externality on others in the group or not. Another possibility is that the conformity mechanism is based on other-regarding preferences (see Section 1 above). That is, it could be that individuals in a group move in the direction of the majority preference because they care about the externalities that their decision has on others' payoffs. If other-regarding preferences drive the conformity mechanism, choice shifts should only be observed when decisions have payoff externalities for others.
Results
In the analysis, we take the subject as the unit of analysis. For each subject, we calculate the number of actual shifts in a certain direction (for instance, in the direction of the majority choice) as a percentage of the number of cases in which this particular shift was possible, and we compare it to the number of actual shifts in the opposite direction as a percentage of the number of cases in which the opposite shift was possible (in the example, in the direction of the minority choice). Therefore, in the statistical tests, each subject gives us at most one paired observation. In some cases, a subject does not give us a paired data point; for instance, when a subject in part 1 always chooses in agreement with the majority, we cannot calculate how often this subject shifts when her position agrees with the minority. Therefore, we always report sample sizes for the conducted statistical tests. We use non-parametric tests to investigate whether differences are statistically meaningful. Since subjects in a given group chatted only among themselves and received the same group-specific information about part-1 majority preferences, the part-2 problems might be held to systematically differ across groups in the treatments with majority-choice information. To account for this potential confound, we correct the Wilcoxon tests that we use throughout the empirical analysis for clustering at the group level. This is achieved by calculating the standard test statistics for pairs of groups and then performing the respective test on population averages of these statistics. The sampling distributions of these adjusted statistics are estimated using bootstrapping techniques.

Overall, choice shifts are a quite common phenomenon in our experiment. Aggregated across treatments, we observe choice shifts in 24% of the possible cases. Turning to risky and cautious shifts, a general point distinguishing Eliaz et al. (2006) from the early psychological research on choice shifts (cf. Stoner 1961; Bem et al. 1962, 1964, 1965) is that risky shifts are not in general supposed to occur more frequently than cautious shifts. Both risky and cautious shifts are very common in our experiment. Risky shifts occur in 21% of the possible cases and cautious shifts in 28% of the possible cases. Comparing the propensities of subjects to make risky and cautious shifts in a Wilcoxon signed-rank test, we find no significant difference at the subject level. Given that cautious shifts are somewhat more common than risky shifts, it is also interesting to examine whether there is a significant shift to risk aversion as subjects move from part 1 to part 2. Because the risky prospect has a (weakly) higher expected value than the safe one in all of our decision problems, the number of times that the safe prospect was chosen by a subject can be used as a measure of risk aversion. Comparing the frequency of risky choices in part 1 and part 2 of the experiment at the subject level, we do not find a significant difference (p = 0.378, n = 280); the result turns out to be robust across our different treatments.

We first investigate the possibility that rank-dependent utility causes choice shifts in the treatment that gives ERR's theory its best shot. In Def-NoMaj-NoExt, subjects are not distracted by information about the majority choice or the possibility that their choice affects the payoffs of other group members. The top panel of Table 3 presents the mean frequencies of shifts towards and against the default. Overall, choice shifts are somewhat more often in the direction of the default, but the modest difference is not significant at the 10% level. While cautious shifts are completely independent of the default, the difference between risky shifts towards and against the default is significant at the 5% level. If subjects choose in accordance with an inverse-S weighting function instead of a pessimistic weighting function, ERR's pattern of choice shifts could be observed for the high-probability lotteries but not for the low-probability lotteries. The lower panels of Table 3 address this possibility.
Indeed, shifts towards the default instead of against the default are somewhat more common in the high-probability lotteries than in the low-probability lotteries, but in both cases the difference remains insignificant. For both high- and low-probability lotteries, risky shifts are somewhat more likely to occur in the direction of the default, but unlike when we aggregate over lottery types, the difference is insignificant here. Again, cautious shifts seem to occur completely independently of the default. Overall, this treatment provides only limited support for ERR's theory.

Next we zoom in on the occurrence of choice shifts in the treatments that provide the best shot for the conformity mechanism. That is, we look at the treatments NoDef-Maj-NoExt and NoDef-Maj-Ext, in which subjects were not potentially distracted by a default in the group choices. Table 4 presents the results for these treatments. In agreement with the conformity mechanism, subjects in the minority position frequently move in the direction of the majority position. The differences are substantial and significant, and the effect sizes are huge compared to what we observed in the treatment that focused on ERR's theory (cf. Table 3). Even though the results appear to be more accentuated when subjects' decisions have consequences for the payoffs of the other group members, the data without such externalities also agree with the conformity mechanism to a remarkable extent. Overall, our subjects display a strong desire to conform to the majority. Interestingly, subjects who found themselves to be part of the minority after choosing the risky lottery in part 1 are very likely to shift to the cautious choice in the group decision when there are externalities on others' payoffs. At the same time, in the settings without externalities the more common pattern is that minority individuals who chose the safe lottery shift to the risky one. We lack sufficient observations to compare shifting propensities for cautious and risky shifts in the minority position (the sample of individuals who made both a risky minority choice and a cautious minority choice is very small). However, comparing propensities to shift cautiously and riskily across both minority and majority positions, we find no significant difference for either treatment.
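The group-level clustering correction described at the start of this section can be implemented in several ways. The snippet below is a simplified stand-in of our own (it bootstraps whole groups rather than forming statistics for pairs of groups, and the array names diff and groups are hypothetical inputs holding per-subject shift-frequency differences and group labels):

import numpy as np

def cluster_bootstrap_p(diff, groups, n_boot=10000, seed=1):
    """Two-sided p-value for H0: mean paired difference equals zero,
    resampling entire groups with replacement to respect clustering.
    A simplified stand-in for the pairs-of-groups procedure in the text."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(diff, dtype=float)   # per-subject paired differences
    groups = np.asarray(groups)            # group label of each subject
    labels = np.unique(groups)
    obs = diff.mean()
    centered = diff - obs                  # impose the null by centering
    boot = np.empty(n_boot)
    for i in range(n_boot):
        draw = rng.choice(labels, size=len(labels), replace=True)
        idx = np.concatenate([np.flatnonzero(groups == g) for g in draw])
        boot[i] = centered[idx].mean()
    return float((np.abs(boot) >= abs(obs)).mean())

Applied, say, to the difference between a subject's frequency of shifting towards and against the majority, this delivers a p-value that is robust to within-group dependence.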
In our treatments Def-Maj-NoExt and Def-Maj-Ext, we investigate what happens when subjects have the possibility to move in the direction of the default as well as in the direction of the majority choice; the results are presented in Table 5. On the main diagonal we report the cases that agree or disagree with both theories. That is, these cells present the common cases where a subject in the minority position shifts to the default and the rare cases where a subject in the majority position shifts against the default. The numbers off the main diagonal are most interesting, because they represent the cases where the theories make competing predictions. In conflicting cases, subjects shift significantly more often in the direction predicted by the conformity mechanism (35.7%) than in the direction of the default (22.9%) (p = 0.037, n = 79). Notice further that for subjects in the minority position, the default is inconsequential in their decision to shift. In contrast, when subjects in the majority position shift, it is significantly more often in the direction of the default than against the default. So ERR's theory is able to pick up this secondary effect.

In Table 6, we provide further evidence for the role that payoff externalities play in subjects' propensity to shift. The table combines the data of all four treatments that provide subjects with information about the majority position. The presence of payoff externalities for the group members roughly doubles the gap between shifts at the minority position and shifts at the majority position.

Table 7 presents a remarkable but unanticipated gender effect in our data. Overall, it appears that women are more likely to shift than men. Especially when they are in a minority position, women are much more likely to shift than men are.
The lower parts of the table break down the gender effect by whether or not the subject's decision has payoff externalities. Interestingly, even in the absence of payoff externalities, there is already a sizable gender effect. This result suggests that women are less sure of their preferences over risky options. If anything, the gender difference is more pronounced when subjects' decisions also determine the payoffs of the others in their group, which suggests that women are also more group-oriented than men. Surprisingly, women are also more likely to shift when they are in the majority position. This finding points to the possibility that women are generally less consistent in their choices than men. The difference in shifts between the genders at the majority position is much smaller than the difference at the minority position, though, so it is not the case that the complete gender effect can be attributed to women being less consistent. Our results suggest that women have a stronger desire than men to conform to the majority, and that this effect is to a large extent due to women being less sure of their preferences.

Another potentially interesting exercise is to examine whether the occurrence of choice shifts in the group-decision part correlates with violations of expected-utility theory in the individual-decision part. In our design, the only violation that can be observed in the individual-decision setting would be a violation of (first-order) stochastic dominance: within the classes of low-probability and high-probability prospects, risky prospects in later problems feature either a higher probability of the good outcome (HP) or a higher good outcome (LP). Hence, stochastic dominance requires that, per prospect class, expected-utility maximizers switch their choice at most once and, if so, from choosing the safe lottery in early problems to choosing the risky lottery in later problems. Across all 280 participants, we find that 29 subjects (10.4%) violate this pattern in some way. Running Wilcoxon rank-sum tests on the propensity to shift of EU-violators and non-violators, we find that the violators shift more often and that the difference is significant at all customary levels. Moreover, the result is robust to conditioning on risky (p = 0.003, n = 254) or cautious shifts (p = 0.001, n = 235).
Conclusion
This study experimentally compared two prominent explanations of choice shifts in groups. One contestant was the conformity mechanism, which traces shifts from individual decisions to votes in group decisions to the influence of a social norm on behavior in the group setting. The other was Eliaz et al.'s (2006) approach based on rank-dependent utility, which operates entirely on individual risk preferences.
We find very clear signs that conformity matters. Already when only information on majority choices is provided and individuals make choices on their own behalf only, we find a strong and significant conformist influence (Table 4). This adds to previous evidence regarding direct effects of peer information on choices under risk (cf. Cooper and Rege 2011; Lahno and Serra-Garcia 2015; Goeree and Yariv 2015). Combining information about others' preferences with responsibility for these other subjects' payoffs makes decision makers take others' preferences into account more than they do if the decision is on their own behalf only. When subjects' decisions also determine the payoffs of others, we again find strong evidence that subjects shift in the direction that is preferred by the majority. So it is not the case that we always find a shift towards the cautious choice when subjects are responsible for the payoffs of others, as might be conjectured on the basis of Charness and Jackson (2009).
Still, we think that it is quite likely that social responsibility plays an independent role besides the wish to conform to the majority. Teasing out the exact effects of social responsibility and conformity would require a different design. This provides an interesting avenue for future research. Support for the pattern predicted by ERR's model was much more limited. Even in the treatment that gave their theory its best shot, that is, the treatment in which subjects had no information on the majority preference and in which their decisions did not influence others' outcomes, ERR's mechanism could only pick up risky shifts to a notable extent. And in the treatments where both ERR's mechanism and the conformity mechanism could influence choices, the conformity mechanism was seen to predict choice shifts significantly better than ERR's model. The contributions of ERR and Dillenberger and Raymond (2016) point out that non-EU preferences may result in choice shifts in group decisions. Our experimental results suggest that subjects do not systematically have the types of non-EU preferences that result in choice shifts. Instead, the evidence supports the notion that a desire for conformity is a principal driving mechanism behind many choice shifts.
Three new species of the killifish genus Melanorivulus from the Rio Paraná Basin, central Brazilian Cerrado (Cyprinodontiformes, Aplocheilidae)
Three new species of Melanorivulus are described from the upper and middle Rio Paraná Basin, central Brazilian Cerrado. These species are members of the M. pictus species group, endemic to the central Brazilian plateaus and adjacent areas, and are easily diagnosed by colour pattern characters, but their relationships with other congeners of the group are still uncertain. Melanorivulus proximus sp. n., from the middle Rio Aporé drainage, and M. nigromarginatus sp. n., from the Rio Corrente drainage, are possibly more closely related to other species endemic to streams draining the slopes of the Caiapó range, whereas M. linearis sp. n., from the upper Rio Pardo drainage, middle Rio Paraná Basin, is considered more closely related to M. egens, a species also endemic to this part of the basin. This study corroborates the high diversity of species of Melanorivulus in the central Brazilian Cerrado plateaus repeatedly reported in previous studies, indicating once more that different species are often found restricted to short segments of the same river drainage. The intense habitat loss recorded in recent years, combined with the high species diversity limited to specific Cerrado freshwater ecosystems, the veredas, indicates that species of Melanorivulus endemic to this part of the Brazilian Cerrado are highly threatened with extinction.
Introduction
The Cerrado savannas of central Brazil, with an area of about 2,000,000 km², are among the 25 most important biodiversity hotspots of the world (Myers et al. 2000). The present study is directed to a group of small killifishes of the genus Melanorivulus Costa, 2006, with species reaching 45 mm of standard length (SL) or less, inhabiting the shallowest parts of the veredas, a typical Cerrado ecosystem comprising streams bordered by the buriti palm (Mauritia flexuosa). About 50 valid species are presently placed in Melanorivulus, formerly considered a subgenus of Rivulus Poey, 1860 (Costa 2011). Melanorivulus is similar to other South American aplocheiloid killifishes living in similar biotopes, such as Anablepsoides Huber, 1992, Atlantirivulus Costa, 2008, Cynodonichthys Meek, 1904, Laimosemion Huber, 1999, and Rivulus Poey, 1860, which have a slender body, short fins and long neural prezygapophyses on caudal vertebrae (Costa 1990). Species of Melanorivulus are distinguishable from species of those genera by the presence of black pigmentation concentrated on the whole margin of the caudal fin and on the distal margin of the dorsal and anal fins in females, a short or rudimentary ventral process of the angulo-articular, and the absence of a preopercular canal (Costa 2011).
A great diversity of species of Melanorivulus has been reported for the Cerrado region drained by the Rio Paraná Basin, the second largest river basin in South America. A total of 14 endemic species have been described for this region (Costa 1989, 2005, 2007a, b, 2008; Nielsen et al. 2016; Volcan and Lanés 2017). These species belong to different and not closely related species groups (Costa et al. 2016): the M. punctatus group, containing slender small species, mainly diagnosed by the presence of oblique rows of red dots on the flank in males (e.g., Costa 2005); the M. pinima group, easily diagnosed by the reduction of black pigmentation on the head and humeral region in males, the presence of longitudinal rows of red dots on the flank, and a longitudinally elongated white to light yellow mark above the caudal spot in females (Costa 2007a); and the M. pictus group, diagnosed by a deeper body (i.e., body depth reaching about 25% SL) and oblique red bars on the flank (Costa 2017). Eleven of the 14 endemic species are members of the last group, which are mostly concentrated in the rivers draining the Caiapó range and in the adjacent areas to the south in the middle Rio Paraná Basin (Costa 2005, 2012).
The M. pictus group was first studied by Costa (1989), based on fish collections deposited in the Museu de Zoologia, Universidade de São Paulo, when M. apiamici (Costa, 1989), M. pictus (Costa, 1989) and M. vittatus (Costa, 1989) were described. In September 1994, efforts were first directed to sampling typical Melanorivulus habitats along the Paraná River Basin, but based on the great morphological variability found among populations inhabiting close neighbouring areas, Costa (1995) concluded that all populations of this vast geographical area belong to a single polymorphic species, M. pictus. However, after more accurate field studies since 2004, numerous new species have been continuously described for the Paraná and other adjacent river basins (e.g., Costa 2005, 2006, 2012, 2017; Deprá et al. 2017). These studies have indicated that each species is limited to small areas and that different species may inhabit the same river drainage at different altitudes (Costa 2007b, 2017; Volcan et al. 2017). However, some new species collected in recent years still await formal description. In this paper, three new species of the M. pictus group from the Paraná River Basin are described.
Materials and methods
Specimens were captured with small dip nets (40 × 30 cm), fixed in formalin for a period of 10 days, and then transferred to 70% ethanol. Collections were made with permits provided by ICMBio (Instituto Chico Mendes de Conservação da Biodiversidade) and field methods were approved by CEUA-CCS-UFRJ (Ethics Committee for Animal Use of Federal University of Rio de Janeiro; permit number: 01200.001568/2013-87). Material is deposited in the Instituto de Biologia, Universidade Federal do Rio de Janeiro, Rio de Janeiro (UFRJ) and the Coleção Ictiológica do Centro de Ciências Agrárias e Ambientais, Universidade Federal do Maranhão, Chapadinha (CICCAA). Descriptions of the colouration of living fish were based on observations made just after collection, in small transparent plastic bottles. Type specimens were photographed alive about 24 hours after collection. Measurements and counts follow Costa (1988). Measurements are presented as percentages of standard length (SL), except for those related to head morphology, which are expressed as percentages of head length. Fin-ray counts include all elements. Osteological preparations followed Taylor and Van Dyke (1985); the abbreviation C&S in lists of material indicates specimens that were cleared and stained for osteological examination. Terminology for osteological structures followed Costa (2006), for frontal squamation Hoedeman (1958), and for cephalic neuromast series Costa (2001). In lists of material, geographical features are written according to Brazilian Portuguese local use (e.g., córrego, ribeirão, rio), allowing more accurate identification of localities in the field and avoiding common mistakes when tentatively translating them to English.
Description. Morphometric data appear in Table 1. Body relatively deep, subcylindrical anteriorly, slightly deeper than wide, compressed posteriorly. Greatest body depth at vertical just anterior to pelvic-fin base. Dorsal and ventral profiles of trunk slightly convex in lateral view, approximately straight on caudal peduncle. Head moderately wide, sub-triangular in lateral view, dorsal profile nearly straight, ventral profile convex. Snout blunt. Jaws short; teeth numerous, conical, irregularly arranged; outer teeth hypertrophied, inner teeth small and numerous. Vomerine teeth 3-5. Gill-rakers on first branchial arch 2 + 7-8. Dorsal and anal fins short, sharply pointed in males, rounded to slightly pointed in females. Caudal fin rounded, slightly longer than deep. Pectoral fin rounded, posterior margin reaching vertical at about 90% of length between pectoral-fin and pelvic-fin bases. Pelvic fin small, longer in males, tip reaching between base of 2nd and 3rd anal-fin rays in males, reaching anus in females; pelvic-fin bases medially in close proximity. Dorsal-fin origin at vertical between base of 9th and 10th anal-fin rays. Dorsal-fin rays 9-11; anal-fin rays 13-15; caudal-fin rays 30-32; pectoral-fin rays 13-14; pelvic-fin rays 7. No contact organs on fins. Second proximal radial of dorsal fin between neural spines of 18th and 20th vertebrae; first proximal radial of anal fin between pleural ribs of 13th and 14th vertebrae; total vertebrae 30-31.
Colouration in life. Males. Flank metallic greenish blue to bright blue, with narrow oblique red bars between humeral region and posterior portion of caudal peduncle; bars irregularly arranged, forming chevron-like marks with angle varying in position on flank, often connected to short adjacent bars, forming Y- and X-shaped marks; bars with minute vertical extensions on each scale margin; dorsal portion of flank with oblique rows of red dots; anteroventral portion of flank with rows of red dots, often coalesced to form zigzag red marks. Dorsolateral portion of body, between posterior part of head and anterior part of flank, above humeral region, pale golden. Humeral region with horizontally elongated black spot. Dorsum light brown, venter white. Opercular region greenish golden with dark red reticulation on scale margins; suborbital region yellowish white; lower jaw dark grey. Iris pale yellow, with dark brown bar on anterior and posterior portions. Dorsal fin bluish white, sometimes yellowish on distal portion, with 4-5 transverse, narrow, faint red or red stripes. Anal fin pale yellow, base and posterior portion bluish white with row of light red dots or short stripes. Caudal fin pale yellow to bluish white, with 5-6 narrow red or reddish orange stripes. Pectoral fin yellowish hyaline. Pelvic fin orangish pale yellow.
Females. Similar to males, except flank base colour pale greenish golden; dorsal and caudal fin bars dark grey; caudal fin base colour pale orangish pink; and presence of black spot on dorsal portion of caudal-fin base and dark grey pigmentation concentrated on distal margins of dorsal and anal fins, anterior margin of pelvic fin and entire caudal-fin margin.
Colouration in alcohol.
Head and trunk pale brown, fins whitish hyaline; dark marks recorded for live specimens varying from dark brown to black.
Etymology.
From the Latin proximus (near, neighbour), referring to its distribution in the same drainage as M. scalaris.
Melanorivulus nigromarginatus sp. n.

Description. Morphometric data appear in Table 2. Body relatively deep, sub-cylindrical anteriorly, deeper than wide, compressed posteriorly. Greatest body depth at vertical just anterior to pelvic-fin base. Dorsal and ventral profiles of trunk slightly convex in lateral view; dorsal and ventral profiles of caudal peduncle nearly straight. Head moderately wide, sub-triangular in lateral view, dorsal profile nearly straight, ventral profile convex. Snout blunt. Jaws short; teeth numerous, conical, irregularly arranged; outer teeth hypertrophied, inner teeth small and numerous. Vomerine teeth 2-5. Gill-rakers on first branchial arch 1 + 8. Dorsal and anal fins short, tip slightly pointed in males, rounded in females. Caudal fin rounded, slightly longer than deep. Pectoral fin rounded, posterior margin reaching vertical just anterior to pelvic-fin insertion. Pelvic fin small, longer in males, tip reaching between urogenital papilla and base of 2nd anal-fin ray in males, reaching anus in females; pelvic-fin bases medially in close proximity. Dorsal-fin origin at vertical between base of 8th and 9th anal-fin rays. Dorsal-fin rays 10-11; anal-fin rays 13-15; caudal-fin rays 31-34; pectoral-fin rays 13; pelvic-fin rays 6-7. No contact organs on fins. Second proximal radial of dorsal fin between neural spines of 19th and 21st vertebrae; first proximal radial of anal fin between pleural ribs of 13th and 15th vertebrae; total vertebrae 30-32.
Scales small, cycloid. Body and head entirely scaled, except anterior ventral surface of head. Body squamation extending over anterior 25 % of caudal-fin base; no scales on dorsal and anal-fin bases. Frontal squamation E-patterned; E-scales not overlapping medially; scales arranged in regular circular pattern around A-scale without exposed margins. Longitudinal series of scales 30-33. Cephalic neuromast series: … 2, otic 1, post-otic 1, supratemporal 1, median opercular 1, ventral opercular 1, pre-opercular 2 + 4, mandibular 3 + 1, lateral mandibular 1-2, paramandibular 1.

Colouration. Males. Flank metallic light green, with narrow oblique red bars between humeral region and posterior portion of caudal peduncle; bars irregularly arranged, forming chevron pattern directed anteriorly, usually fragmented, with angle on flank midline or above it; bars with minute vertical extensions on each scale margin; dorsal portion of flank with oblique rows of red dots; anteroventral portion of flank with rows of red dots. Dorsum light brown, venter white. Side of head light brown on dorsal portion, yellowish white on ventral portion to pale golden on opercle; broad dark grey to black postorbital stripe, continuous with humeral black blotch; lower jaw dark grey. Iris pale yellow, with dark brown bar on anterior and posterior portions. Dorsal fin light yellow, with four to six oblique faint red bars. Anal fin light yellow to orange, basal portion greenish white with five or six orangish red spots, distal margin black. Caudal fin light yellow, with six to eight narrow orangish red bars extending on entire caudal fin, except its ventral-most portion. Pectoral fin hyaline. Pelvic fin light yellow to orange with narrow black margin.

Females. Similar to males, except flank base colour pale greenish blue; dorsal and caudal fin bars dark grey; caudal fin base colour pale orangish pink; absence of black pigmentation on post-orbital and humeral regions; and presence of black spot on dorsal portion of caudal-fin base and dark grey pigmentation concentrated on distal margins of dorsal and anal fins, anterior margin of pelvic fin and entire caudal-fin margin.
Colouration in alcohol.
Head and trunk pale brown, fins whitish hyaline; dark marks recorded for live specimens varying from dark brown to black.
Distribution and conservation. Known only from two small streams close to each other in the middle section of the Corrente River drainage, upper Paraná River Basin (Fig. 3).
Etymology. The name nigromarginatus (black margin), from the Latin, is a reference to the presence of a black margin on the anal fin in males.
Melanorivulus linearis sp. n.
http://zoobank.org/9312393A-94FD-4433-9818-88CC6F1666D9
Figs 6-7, Table 3

Holotype. UFRJ 11678, male, 25.

Diagnosis. Melanorivulus linearis is similar to M. egens, and is distinguished from all other species of the M. pictus group by the presence of red chevron-shaped marks regularly distributed on the flank (vs. irregularly), the absence of distinctive dark marks on the humeral region (vs. presence), and the absence of red dots on the anteroventral portion of the flank (vs. presence). Melanorivulus linearis is distinguished from M. egens by the presence of red bars restricted to the dorsal portion of the caudal fin in males (vs. absence); the presence of black bars on the caudal fin in females (vs. black dots); the presence of a pale green spot on the humeral region in males (vs. absence); and the second proximal radial of the dorsal fin between neural spines of the 18th and 19th vertebrae (vs. between neural spines of the 19th and 21st vertebrae).

Description. Morphometric data appear in Table 3. Body relatively deep, sub-cylindrical anteriorly, deeper than wide, compressed posteriorly. Greatest body depth at vertical just anterior to pelvic-fin base. Dorsal and ventral profiles of trunk slightly convex in lateral view; dorsal and ventral profiles of caudal peduncle nearly straight. Head moderately wide, sub-triangular in lateral view, dorsal profile nearly straight, ventral profile convex. Snout blunt. Jaws short; teeth numerous, conical, irregularly arranged; outer teeth hypertrophied, inner teeth small and numerous. Vomerine teeth 3-5. Gill-rakers on first branchial arch 1 + 8. Dorsal and anal fins short, tip slightly pointed in males, rounded in females. Caudal fin rounded, slightly longer than deep. Pectoral fin rounded, posterior margin reaching vertical just anterior to pelvic-fin insertion. Pelvic fin small, longer in males, tip reaching between base of 2nd or 3rd anal-fin ray in males, reaching between anus and urogenital papilla in females; pelvic-fin bases medially in close proximity. Dorsal-fin origin on vertical through base of 8th or 9th anal-fin ray. Dorsal-fin rays 10-11; anal-fin rays 13-15; caudal-fin rays 31-32; pectoral-fin rays 13-14; pelvic-fin rays 7. No contact organs on fins. Second proximal radial of dorsal fin between neural spines of 18th and 19th vertebrae; first proximal radial of anal fin between pleural ribs of 13th and 15th vertebrae; total vertebrae 30-31.
Colouration.
Males. Flank metallic greenish blue, sometimes purplish blue above anal fin, with oblique narrow red bars between humeral region and posterior portion of caudal peduncle; bars regularly arranged, forming chevron pattern directed anteriorly, with angle on flank midline or above it; bars with minute vertical extensions on each scale margin; dorsal portion of flank with few red dots; anteroventral portion of flank without red marks; pale green spot on humeral region. Dorsum light brown, venter white. Side of head light brown on dorsal portion, yellowish white on ventral portion to pale golden on opercle; melanophores dispersed, not forming distinct marks on post-orbital region; lower jaw dark grey. Iris pale yellow, sometimes with dark brown bar on anterior and posterior portions. Dorsal fin light yellow, with four to six oblique red bars through whole fin. Anal fin yellowish orange, basal portion purplish white with six or seven short red bars, distal margin black. Caudal fin light yellow, with six to eight narrow red bars extending between dorsal and middle portions of fin; fin margin dark grey. Pectoral fin hyaline. Pelvic fin light yellow with narrow black margin.

Females. Similar to males, except flank base colour pale greenish golden; no distinct marks on humeral region; dorsal and caudal fin bars dark grey to black; caudal fin base colour pale white; absence of pale green spot on humeral region; and presence of triangular black spot on dorsal portion of caudal-fin base and dark grey pigmentation concentrated on distal margins of dorsal and anal fins, anterior margin of pelvic fin and entire caudal-fin margin.
Distribution. Known only from the type locality, upper section of the Rio Pardo, middle Rio Paraná Basin, central Brazil (Fig. 3).
Etymology. From the Latin linearis (consisting of lines), an allusion to the red oblique lines regularly arranged on the flank in males.
Discussion
Studies on the systematics of Melanorivulus have consistently demonstrated the importance of colour pattern characters both to diagnose species and to support monophyletic groups (Costa 2016). According to recent phylogenetic analyses (Costa 2016; Costa et al. 2016), colour pattern characters strongly corroborate groups that are supported by other morphological characters, as well as by molecular data. However, the relatively low variability of morphometric, meristic and osteological characters among species of the M. pictus group makes colour pattern characters an essential source for diagnosing species and estimating their relationships, since molecular data are not yet available for most species. Consequently, the new taxa herein described exhibit colour pattern characters that in combination easily allow their recognition as new species, but their relationships are still unclear.
Melanorivulus proximus is the second species recorded for the Rio Aporé drainage. Melanorivulus scalaris also occurs in the Aporé drainage, but at altitudes between about 740 and 800 m asl, whereas M. proximus is here reported at altitudes between about 440 and 540 m asl. The veredas of this drainage were first sampled in 1994, but specimens here recognised as belonging to M. proximus were then identified as M. pictus (Costa, 1995; see Introduction above for historical context). Costa (2005) described Rivulus scalaris Costa, 2005 (= M. scalaris) based on material collected in the Ribeirão São Luiz, upper Rio Sucuruí drainage. Specimens collected in the middle section of the Rio Aporé drainage were then tentatively identified as M. scalaris and listed as additional material (non-types). Costa (2007) recorded R. scalaris from the Rio da Prata floodplains, upper Rio Aporé drainage, in a plateau area where the upper Ribeirão São Luiz and the Rio da Prata are in contact. However, the taxonomic status of the middle Aporé populations was not clarified until now.
The frequent occurrence of irregularly interconnected chevron-shaped red marks on the flank in males of M. proximus suggests that it is closely related to M. scalaris, in which this colour pattern is always present (Fig. 8). However, the pointed anal fin in males and the strongly pigmented reticulation on the side of the head in females suggest that M. proximus is more closely related to species endemic to neighbouring drainages that exhibit these derived character states, comprising M. faucireticulatus from the Claro and Verde river drainages (Costa 2007b: figs 1-2) and M. rutilicaudus from the Rio Verde drainage (Costa 2005: figs 8-9). In large adult specimens of M. scalaris the anal-fin tip is not pointed (Fig. 8) and the caudal fin is pale yellow in females (Fig. 9).
Previous studies indicate that the Sucuruí, Aporé, Corrente, Verde and Claro river drainages, which drain the south-eastern slope of the Caiapó range and flow directly into the Rio Paranaíba as part of the upper Rio Paraná Basin, concentrate a great diversity of species of Melanorivulus (Costa 2005, 2007a, b, 2008). These species are often members of clades endemic to Caiapó range drainages, including those belonging to the upper Rio Araguaia Basin, on its northern slope (e.g., Costa 2006). However, characters supporting phylogenetic relationships of M. nigromarginatus, from the Rio Corrente drainage, with other species of the Caiapó range drainages are ambiguous.
Melanorivulus nigromarginatus is easily distinguished from all other species endemic to the Caiapó range drainages by the presence of a black marginal band on the anal fin in males (Fig. 4), suggesting that it may be more closely related to M. egens (Costa 2005: fig. 11) and M. linearis (Fig. 6), which have a similar black anal-fin margin but are endemic to tributaries of the middle section of the Rio Paraná (Fig. 3). Contrastingly, the presence of a distinctive dark humeral spot in M. nigromarginatus suggests that it may be more closely related to other species occurring in other Caiapó range drainages (e.g., M. faucireticulatus, M. formosensis, M. proximus, M. rutilicaudus, M. scalaris, M. vittatus). All these species share the presence of a distinct humeral blotch varying from dark red to black (Figs 1-2, 4-5), whereas this derived condition is not present in M. egens and M. linearis (Figs 6-7). In addition, the presence of orangish pink pigmentation on the caudal fin in females that occurs in M. nigromarginatus (Fig. 5), M. proximus (Fig. 2), M. faucireticulatus (Costa 2007b: fig. 2), and M. rutilicaudus (Costa 2005: fig. 9) reinforces the hypothesis of close relationships. On the other hand, M. egens and M. linearis from the middle Rio Paraná Basin are probably closely related species, sharing the presence of red chevron-shaped marks regularly arranged on the flank.

The present study once more reports the occurrence of different species of Melanorivulus inhabiting separate sections of the same river drainage, as already described in previous studies (Costa 2007b, 2017; Volcan et al. 2017; Fig. 3). Recently, Costa (2017) compared this distributional pattern to that reported for other vertebrates occurring in the Cerrado, which is explained as correlated with Miocene topographical reorganization causing geographical isolation of ancestral populations in plateaus and peripheral depressions (Prado et al. 2012; Guarnizo et al. 2016). Although estimates of divergence times among lineages of the M. pictus group are not yet available, this palaeogeographical scenario could explain the present distribution of distinct species of Melanorivulus at different altitudinal zones of river drainages.
Costa (2012) reported a strong process of habitat loss in the rivers draining the Caiapó range as a result of the quick expansion of agricultural land use in areas previously occupied by natural vegetation. In recent years, the veredas have often been extirpated after diversion of their water sources for plantation irrigation, as well as by widespread deforestation, which has reached their margins even where water flow persists. Considering the great diversity of endemic species of Melanorivulus inhabiting the veredas of the Caiapó range and the continuous extirpation of vereda habitats, this study supports the endangered status of species inhabiting this region.
Motivation and demotivation over two years: A case study of English language learners in Japan
This paper is about four Japanese university students majoring in international studies, who participated in a two-year study examining changes in their motivation. Using monthly interviews and a 29-item questionnaire on Dörnyei's (2005) L2 motivational self system that was administered alongside each interview, the trajectories of learner motivation were investigated, based on both quantitative and qualitative data. First, changes in the participants' motivation were identified using the quantitative data. Next, a variety of motivators and demotivators that the learners experienced both inside and outside of their classrooms were analyzed using the qualitative data. With the data obtained, this study focuses on how the four learners' language learning motivation and contexts adapt to each other, and how the dynamics of the four learners' motivation change due to their learning experiences. Each learner was different in their trajectory of motivation and in the kinds of motivators and demotivators that they experienced in their particular contexts. The four learners experienced unique motivators and demotivators, and reacted differently. While participants identified their ideal L2 selves or ought-to L2 selves, these self-guides were not strengthened by their L2 experiences over time. Based on these findings, the importance of studying the rich experiences of language learners in motivation research is discussed.
Introduction
Although teachers see students regularly in their classrooms, it may not be easy to understand each learner's motivation to study the target language of the class. Some learners appear to be motivated during the first weeks, while others do not seem to be so even from the beginning of the course. Then, towards the end of the course, teachers cannot tell which students may maintain motivation towards language learning after the course. Learners' motivation may be influenced by the reasons why they are taking the class or by their friends, parents, or other people in their lives. Teachers may want to know what their students' motivation is or how they can grasp it, while many researchers argue that studying the motivation of language learners is not a simple task. In the view of Dörnyei and Ushioda (2011), language learners' reasons for studying, their lengths of sustained study, and their intensity of study should be researched. While there are a handful of studies investigating large groups of learners (e.g., Yashima, Nishida, & Mizumoto, 2017; You, Dörnyei, & Csizér, 2016), there have been few studies tackling the complexity of language learning motivation focusing on the aspects of motivation discussed by Dörnyei and Ushioda (2011). In terms of why learners study English, how long they study, and how hard they study, there should indeed be a number of fluctuations. While cross-sectional studies can capture a snapshot of learners' motivation, longitudinal studies of learners are also needed. This study serves to fill this research niche, following the experiences of four Japanese college students studying English at a Japanese college over two years.

Dörnyei's (2005) L2 motivational self system has attracted the attention of many researchers. According to Boo, Dörnyei, and Ryan (2015), who reviewed a large set of journal articles and book chapters published between 2005 and 2014, the use of the L2 motivational self system (L2MSS) became very popular in 2011. Regarded as an integrative synthesis of several key constructs and theories in L2 motivational research, it consists of three principal constructs: the ideal L2 self, the ought-to L2 self, and the L2 learning experience. Using this theoretical background, large survey studies were first conducted in Hungary (Csizér & Lukacs, 2010; Kormos & Csizér, 2008). Then, Taguchi, Magid, and Papi (2009) conducted a comparative study in Japan, China, and Iran. Other researchers also conducted studies in Saudi Arabia (Al-Shehri, 2009), Sweden (Henry, 2009, 2010), Indonesia (Lamb, 2012), and Germany (Busse, 2013).
Literature review
While many small case studies on motivation were conducted focusing on the ideal L2 self or the ought-to L2 self, which work as participants' self-guides in language learning (e.g., Irie & Ryan, 2015; Nitta & Baba, 2015; You & Chan, 2015), studies focusing on L2 experiences that use the framework of the L2MSS are rare. However, if we track back two decades, Norton (1995) conducted studies on investment, social identity and imagined communities, although she eschewed using the term motivation due to the static nature of the term at that time. In her study, she used the term investment instead of motivation in studying the language learning experiences of five female immigrants to Canada, since the concept of investment "more accurately signals the socially and historically constructed relationship of the women to the target language and their sometimes ambivalent desire to learn and speak it" (p. 17).
In more recent years, many L2 motivation researchers have been treating motivation not as a trait, but rather as "a fluid play, an ever-changing one that emerges from the processes of interaction of many agents, internal and external, in the ever-changing complex world of the learner" (Ellis & Larsen-Freeman, 2006, p. 563). In Ushioda's (2009) "person-in-context relational view," motivation is viewed "not simply as cause or product of particular learning experiences, but as process - in effect, the ongoing process of how the learner thinks about and interprets events in relevant L2-learning and L2-related experiences and how such cognitions and beliefs then shape subsequent involvement in learning" (p. 122). She developed such views based on a qualitative study of 20 Irish learners of French. The study focused on their conception of motivation, their motivational evolution and language learning experiences over 15 to 16 months.
In the vein of this view of motivation in the socio-dynamic period (Dörnyei & Ushioda, 2011), many researchers are studying language learners in context, and attempting to understand what is happening in their complex world. Studying how learners interact with a variety of external and internal factors in their particular learning experiences is needed in this line of research. Apparently, this way of looking at motivation is becoming closer to Norton's notion of investment. If we study motivation from the L2MSS perspective, for instance, we should study participants' ideal L2 selves or ought-to L2 selves as well as their L2 experience, and comprehend the dynamic interaction of learners' selves and their experiences. By doing so, learners' socially constructed relationships with the language and their learning histories, their motivation and demotivation, as well as their ambivalence can be studied.

Kikuchi (2017) reported on one of the few case studies that examined the dynamics of a group of Japanese learners of English over one school year. He studied the trajectory of English language learning motivation of five Japanese freshmen over two semesters, using monthly group interviews, a questionnaire, and reflective journals. Based on a quantitative analysis of the questionnaire measuring the L2MSS, he identified the types of learners and how their motivation changed through the lens of dynamic systems theory (DST). Turner and Waugh (2007) describe one important aspect of DST as follows:

Within academic settings and events, each student may be thought of as a self-organizing system that acts and reacts to both external and internal informational signals. These processes may explain the unique, individual facets of students' learning-related cognitions, emotions, motivations, and behaviors. (p. 229)

Based on the qualitative analysis of the data obtained, Kikuchi described each learner's self-organizing system and argued that the social environment outside of the classroom can be a crucial factor affecting learners' dynamic systems. Another key term in DST is attractor states, which are defined as "a critical value, pattern, solution or outcome towards which a system settles down or approaches over time" (Hiver, 2014, p. 21). Especially during the summer break, some learners had a great motivating experience, while others did not. That seemed to change their motivation in the second semester. Each learner settled down into a different attractor state. Using qualitative data, Kikuchi (2017) discussed the agents or experiences that might have helped to develop such attractor states, while not focusing on learners' L2 experiences per se.
In this study, I followed the research design used in Kikuchi (2017), but with two differences. First, a shorter questionnaire consisting of 29 items was developed (Kikuchi & Hamada, 2018) and used in this research. In addition, the length of the study was extended to two academic years. By extending it to two academic years, a richer history of language learning experiences over four semesters at university, as well as three breaks, was included in the study. This way, the development of participants' ideal L2 selves and ought-to L2 selves over four semesters, as well as their L2 experiences, could be tracked. In tracking the development of student motivation as well as the influence of the L2 experience, this study focuses on the following research questions:

1. How do learners' motivational states change over two years in light of their L2 self systems?
2. How do learners' motivation and language learning experiences affect each other?
Participants
The four female students (Asako, Nana, Tamami, and Yuki; all pseudonyms) participating in the study were admitted to the Department of Cross-cultural Studies, Faculty of Foreign Languages, Aoi University (a pseudonym) in 2015. There are several paths to admission at Aoi University. Asako was admitted as a scholarship student based on an entrance exam in December 2014. This scholarship guarantees coverage of most of her tuition costs and living expenses. Nana and Tamami took the same entrance exam but did not receive scholarships. However, they were admitted without taking the February general entrance examination taken by many university applicants in Japan. Yuki took the general entrance examination in February and was admitted. In this university program, students take the TOEIC (Test of English for International Communication), a very common English proficiency test in Japan. Table 1 shows the changes in TOEIC scores for these four participants, as well as the average score for all 119 students admitted to this department in April 2015.
Instruments
The questionnaire used in this study was developed by Kikuchi and Hamada (2018). The eight constructs (based on Taguchi, Magid, & Papi, 2009) and the number of items for each construct were as follows:

• Criterion Measure: Motivated Learning Behavior (Mot): the learners' intended efforts for learning English (3 items; e.g., "I am prepared to expend a lot of effort to learn English").
• Ideal L2 Self (Ids): an L2-specific aspect of learners' ideal selves (4 items; e.g., "I can imagine myself as someone who is able to speak English").
• Ought-to L2 Self (Ots): the attributes that learners believe they ought to possess (i.e., various duties, obligations, or responsibilities) in order to avoid possible negative outcomes (2 items; e.g., "I study English because close friends of mine think it is important").
• Attitudes to Learning English (AttL): situation-specific motives related to the immediate learning environment and experience (4 items; e.g., "I like the atmosphere of English classes").
• Instrumentality-Promotion (InPrm): the regulation of learners' personal goals to become successful, such as attaining high proficiency in English in order to make more money or find better jobs (3 items; e.g., "Studying English can be important because I think it will be useful in getting a good job someday").
• Instrumentality-Prevention (InPrv): the regulation of learners' duties and obligations, such as studying English in order to pass an examination (3 items; e.g., "Studying English is important to me because, if I don't have a knowledge of English, I'll be considered a weak student").
• Cultural Interest (CI): the learners' interest in the cultural products of the L2 culture, such as TV, magazines, movies, and music (5 items; e.g., "I like English magazines, newspapers, or books").
• Attitudes to the L2 Community (AttC): the learners' attitudes toward the community of the target language (5 items; e.g., "I like to travel to English-speaking countries").
Procedures
In April 2015, I made an announcement in an English course to recruit students for a four-year-long project studying their motivation to learn English. I explained that participants would receive about 1000 yen (roughly the equivalent of ten US dollars) to compensate for their time participating in the interviews each month, held seven times throughout the year from April to January in the first year and nine times in the following school year. Five students agreed to participate in this project. In keeping with research ethics, they were told at the beginning that they would be able to drop out of the project at any time. One female participant dropped out in the middle of the second year when she decided to study abroad. Either during their lunchtime or after classes, the participants met with the researcher once a month for 30 to 60 minutes. After completing a questionnaire, they were interviewed about their life experiences in general, their experiences with English learning, and factors that may have affected their motivation. All the interviews were recorded and transcribed. Three research assistants were hired to transcribe the interview data. Questionnaire sheets were scanned and responses saved in a spreadsheet program.
Data analysis
In order to answer the first research question, the questionnaire data were processed and analyzed to understand the changes in each of the eight constructs described in the previous section. The average of the items measuring each of the eight motivational constructs was plotted for each administration: Criterion Measure (Mot), Ideal L2 Self (Ids), Ought-to L2 Self (Ots), Attitudes to Learning English (AttL), Instrumentality-Promotion (InPrm), Instrumentality-Prevention (InPrv), Cultural Interest (CI) and Attitudes to the L2 Community (AttC). A short illustrative script for this step is sketched below.
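The following minimal sketch shows what this averaging-and-plotting step could look like in practice. It is a hypothetical reconstruction rather than the study's actual analysis code: the file name, the column labels, and the item-to-construct mapping are assumptions, and only two of the eight constructs are spelled out.

```python
# Illustrative sketch of the averaging step described above: compute the
# mean of the items belonging to each motivational construct for every
# monthly administration, then plot the trajectories. The file name,
# column labels, and item-to-construct mapping are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per administration; columns q1..q29 hold the
# Likert responses, and "wave" labels the administration (e.g., "1S1").
responses = pd.read_csv("asako_responses.csv")

# Only two of the eight constructs are spelled out here; the real
# instrument assigns all 29 items across the eight constructs.
constructs = {
    "Mot": ["q1", "q2", "q3"],        # Criterion Measure (3 items)
    "Ids": ["q4", "q5", "q6", "q7"],  # Ideal L2 Self (4 items)
}

# Row-wise mean over each construct's items gives one score per wave.
means = pd.DataFrame({name: responses[items].mean(axis=1)
                      for name, items in constructs.items()})
means.index = responses["wave"]

means.plot(marker="o")  # one line per construct across administrations
plt.ylabel("Mean item score")
plt.xlabel("Administration")
plt.title("Motivational construct trajectories")
plt.tight_layout()
plt.show()
```

Plotting one line per construct across administrations produces trajectories of the kind shown in Figures 1, 3, 5, and 7.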
Qualitative data collected from each interview were analyzed through three processes, data reduction, data display, and conclusion drawing, as described by Miles and Huberman (1994). During the analysis, cognitive maps (p. 134) were used to display learners' experiences and their effects on motivation. Seven figures were made for each participant, displaying the data obtained in the spring (1S1, 1S2, and 1S3) and fall semesters of the 1st year (1F1, 1F2, 1F3, and 1F4), and the spring (2S1, 2S2, 2S3, and 2S4) and fall semesters of the 2nd year (2F1, 2F2, 2F3, 2F4, and 2F5) after the spring break at the end of the 1st year. Out of the seven figures, only one sample cognitive map for each participant is presented in the next section in order to save space. In Figures 2, 4, 6, and 8, the boxes with solid lines are for learners' comments related to their attractor states, while those with broken lines are for L2 experiences related to study at the university, and those with dotted lines are for L2 experiences outside the university.
Asako: A test-driven learner with the experience of studying in Cebu
As seen in Figure 1, Asako maintained a higher Instrumentality-Promotion (InPrm) and very low Attitudes to the L2 Community (AttC), which matched her regular account in the interviews that getting a good TOEIC score would be important for getting a good job. She felt that she would rather study on her own than practice in the free English conversation room. She was aware that her listening ability was weak, so she watched movies in English in her first year, and TED Talks in her second year. She constantly said that she wanted to work on TOEIC test preparation in her second year, too. As seen in Table 1, her score went up to 660 in January of her second year. In the spring semester of her first year, she often talked about her part-time job as staff at wedding ceremonies, which appeared to become a central part of her life. During the summer and spring vacations of her first year, she went to Tohoku and Niigata for volunteer work and became inspired. Through these experiences, she started to realize that she could not keep a stressful part-time job, so she changed to being a waitress at a cafe. During the two years, the main highlight for her was that she went to Cebu in the Philippines to study English at a language school. She said in an interview:

Before studying in Cebu, I thought that social experience is important. I thought I could learn more from a part-time job and it would be helpful for my future. However, I think that I want to study more after hearing stories from other people. I don't know what I'm interested in studying, though. I just don't know because I haven't studied. Since I want to find out what I'm interested in, I want to work on English for now … That's why I study for the TOEIC. (Asako, September [2], p. 4)

Although she expressed her determination to study English to find out what she would like to study, in the interview the next month, even though she was still influenced by studying English in Cebu, she could not help but make excuses for not studying English. Figure 2 shows the cognitive map drawn for the data obtained in the fall semester of her second year. After the life-changing experience from which she concluded that she should study something while she could at university, she did not appear to have a positive L2 learning experience. She thought about going to a free English conversation room, but she did not do that. She needed to take mandatory classes, but she explained that her classmates were not motivated, and she did not feel like studying hard. Towards the end, she found one class, the TOEFL preparation class, to be interesting, and she took the TOEFL test at school. Throughout the two years, she had one friend from her high school whom she often mentioned. She was the one who asked her to go to Cebu. As seen in Figure 2, Asako mentioned her again, saying that she had gone to Myanmar. For her, her friends' influence seemed to be the key. She started to watch DVDs in English because her friend had asked her to. She also participated in a volunteer project because of another friend. In terms of her English study, she usually thought of getting good scores on proficiency tests. She appears to be test-driven in her language learning and she prefers things that she can do by herself, like studying for tests.
Nana: A learner becoming demotivated in studying English
As seen in Figure 3, Nana's motivation to study English went down, as observed in the drop in Attitudes to Learning English (AttL) and Criterion Measures (Mot). At the beginning of each semester, she usually shared her interest in new classes. However, she could not maintain her interest, as seen in Figure 4. She became a member of the English Speaking Society (ESS) club and felt overwhelmed by the good English speakers from other universities during her first year. She mentioned in an earlier interview in the spring semester of her first year that she liked classes in which she could move around and talk with other students. In the fall semester, the teacher changed. She mentioned that her teacher did not seem motivated and often looked irritated during class. Figure 4 shows the cognitive map drawn for the data obtained in the fall semester of her first year. One can immediately notice that she has a lot of feelings inside about English and she just cannot motivate herself to do anything. The debate contest in the ESS club was an important moment for her. When she found a gap between herself and students from other universities, she said that she felt self-pity. As far as English classes were concerned, on the other hand, she said that she could not find a reason to take them. During her demotivated state, she seemed to find many reasons not to study. As seen in Figure 4, people around her were achieving better TOEIC scores. Using a messaging application, she communicated with Taiwanese and German friends and became inspired. Yet, one can see that even these experiences did not work to bring her out of her demotivated state.
Tamami: A learner who likes English a lot but also enjoys her part-time jobs
As can be seen in Figure 5, Tamami maintained very high Attitudes to Learning English (AttL), although she did not attest to a high Ideal L2 Self (Ids) or Criterion Measures (Mot). The important event of the two years in terms of L2 experience was that, in March, she went to Canada to attend a language school for two weeks with a group from the university that included Yuki. After coming back, however, she said in April that she felt that she did not want to study abroad anymore. She said that she did not have any bad experiences in particular, but she just felt that way. Over the two years, on the other hand, she kept working part-time at a big clothing store in Yokohama and at a Japanese pub in her neighborhood. During an interview, asked to share anything that she was highly motivated to do, she stated:

Well, I can think of my part-time jobs first. Then I'd say assignments are next. I have many places that I want to go and things I want to buy. I want to move to a new place. That's why my part-time job has a high priority. English is not a priority. With the part-time job, I've got the feeling that I have to do it and I want to do it, too. With English, hmm, I have the feeling that I've got to do it. I don't have the feeling that I want to study. (Tamami, October [2], p. 4)

From this interview excerpt, one can notice her strong passion for her part-time jobs. English is something that she does not have a strong urge to study. In another interview, she also stated: "I like English very much, but I don't want to use English for work. Because I like English so much, I wonder why not. I want to think of how I can take advantage of English" (Tamami, December [2], p. 3).
Figure 6 Cognitive map for Tamami's dynamic system in the fall semester of her second year

Even though she said she liked English very much, English became something that she liked just towards the end of her second year. While she maintained very strong attitudes towards learning English, as seen in Figure 6, her focus was on a part-time job that she did three or four times a week. Even though she wanted to study for certification tests such as the TOEIC, she said that she did not have time. In fact, throughout the two years, she sometimes canceled her interview or looked very tired. She mentioned that the clothing shop where she worked part-time had peak seasons and off-seasons. She told me that she got stressed out and tired during the peak seasons, needing to work a lot, while she could relax during the off-seasons. While she was a full-time college student, her time for study depended on how busy her part-time job was at the time. Her two weeks in Vancouver in March and the fact that she liked English did not become motivators for studying English, while the job satisfaction and money that she received became motivators to work three or four times a week.
Yuki: A learner who is rather stable in motivational dynamics
Compared to the other learners, we can see that Yuki's responses to the questionnaire were constant, as seen in Figure 7. She maintained fairly high Attitudes to Learning English (AttL) and Ideal L2 Self (Ids), while she had lower scores for Ought-to L2 Self (Ots) and Attitudes to the L2 Community (AttC). Like Nana, she attended the ESS club. With Tamami, she went to study abroad for two weeks. While she had these L2 experiences, it was notable that she did not develop her ideal L2 self or motivation to study English. After coming back from Vancouver, Yuki said that she had already participated in short-term study abroad two times, including once when she was in high school. She told me that it was enough. She developed the image that she probably would not use English very much in the future. As seen in Figure 8, a cognitive map describing her fall semester of the second year, she felt that English classes were easy and were not structured like in her high school. She thought she had been more excited about English then, but not in college. While her time with the ESS club became a drive for her in college, she shared her feelings in another interview:

When I was in high school, I devoted myself to the brass band club. In my junior high school, I wanted to go to a certain senior high school. I had a certain goal to achieve, and I was trying hard to accomplish it. That's why I was eagerly and positively working on things. However, right now, I don't have any concrete goals and I think about many things. I feel anxious then. (Yuki, October [2], p. 10)

While she worked very hard organizing the ESS club and had a part-time job on the weekend, she shared her anxiety about her future. Like Asako, not having a goal for the future seemed to bother Yuki while she was actually busy with her club activities and part-time job.
Discussion
By analyzing the four participants' motivational changes, the general tendency that can be observed is that the learners have different trajectories for the motivational components of the L2 self system. In general, the mean of each construct stays somewhere between 3 (somewhat not true for me) and 4 (somewhat true for me). For all students but Nana, who became demotivated, the mean score stayed generally above 4 for Instrumentality-Promotion (InPrm) and Attitudes to Learning English (AttL). This implies that many of the participants kept their instrumental motivation and attitudes towards learning English. It is also notable that both Asako and Yuki kept their Ideal L2 Self (Ids) generally high, with means of more than 4.
By analyzing the cognitive maps, including those not shown for lack of space, which depict the participants' attractor states as well as their experiences both in and outside the university, it was noted that at the beginning of each school year, many participants appeared to be excited about their new teachers and courses. However, all participants settled into an attractor state in which they focused on their club activities or part-time jobs. Before Tamami and Yuki participated in a short language program in March of their first year, they experienced attractor states in which they worked hard on English in the study abroad preparation course. Asako experienced an attractor state after she came back from Cebu, trying to find something that she was interested in studying. However, these attractor states did not last long. Many of the participants felt that the university classes were not interesting without any concrete course objectives. They were more attracted by part-time jobs, in which they could earn money and find job satisfaction, or by club activities, in which they played a certain role. In short, the language learning experiences did affect some of these learners, especially the ones who experienced the short study abroad programs, but they did not seem to affect their dynamic systems for long.
Over a decade ago, Irie (2003) published a review of studies of English language learning motivation in Japan, and observing the recurring patterns she concluded that "Japanese university students are likely to appreciate the instrumental value in learning English for exams and a career, and also to have an interest in making contact with native speakers of English and visiting their countries" (p. 97). Even now, this observation applies. Some of the participants were interested in a long-term study abroad program, but they came to feel that it might be good enough if they could use English for a job later in life, so they focused on proficiency test preparation.
Conclusion
From the four case studies of female college students presented in this paper, one can notice that the university classes that they were taking probably did not give them rich L2 experiences. Each participant had experiences outside of school. Asako went to Cebu, and Tamami and Yuki went to Vancouver to attend short English language programs. Nana saw people from other universities speaking good English. However, all of them had a hard time motivating themselves to find good language learning experiences in their daily lives. Why is that? While motivation is commonly regarded as an individual attribute, Lamb (2016) states:

it is important to recognize that it is also a social construction; that is, we come to strive for certain things in life as a result of our socialization in a particular community or society, and the extent to which we can act on our desires is also constrained by our social environment. (p. 324)

As presented in the four cognitive maps above, all of the students found that their classmates' motivation or their teacher's motivation affected them, and often negatively. Of course, we should not only be blaming classmates or teachers. Notably, the interaction of each individual's motivation with their classmates' and with their teachers' is clearly important. One might ask what motivates or demotivates Japanese learners of English. I hope that it is clear that we cannot easily answer this question, since each one of them is struggling to learn English in a particular community or small society, and interacting with a variety of people who are also a part of it.
Admittedly, this case study focused only on two years of language learning experiences shared by four female students who attended a private university and were majoring in international studies, with the original intention of studying abroad only for a short time. More studies are needed to understand the rich experiences of English use in the daily lives of different kinds of Japanese students. For instance, future studies might include participants who are not interested in studying abroad. It would also be interesting to study a group of mixed genders.
This paper attempted to answer the call from Dörnyei and Ushioda (2011) for research into language learners' reasons for studying, their lengths of sustained study, and their intensity of study.It was notable that without continuing rich L2 experiences and a personal goal to use English, learners in EFL situations have a hard time finding reasons to study; thus, they do not study hard for great lengths of time.
Figure 1 Asako's motivational changes in spring and fall semesters over two years
Figure 2 Cognitive map for Asako's dynamic system in the fall semester of her second year
Figure 3 Nana's motivational changes in spring and fall semesters over two years
Figure 4 Cognitive map for Nana's dynamic system in the fall semester of her first year
Figure 5 Tamami's motivational changes in spring and fall semesters over two years
Figure 7 Yuki's motivational changes in spring and fall semesters over two years
Figure 8 Cognitive map for Yuki's dynamic system in the fall semester of her second year
Table 1 TOEIC scores of the four participants
Supply Chain Finance Factors: An Interpretive Structural Modeling Approach
Purpose: The present study aims to identify the critical factors of supply chain finance and the interrelationship between the factors using interpretive structural modeling. Methodology: Factors of supply chain finance were identified from the literature and experts from both industry and academia were consulted to assess the contextual relationships between the factors. Then, we applied interpretive structural modeling to examine the interrelationships between these factors and find out the critical factors. Findings: The model outcome indicates information sharing and workforce to be the most influential factors, followed by the automation of trade and financial attractiveness. Originality/value: Previous literature identified various factors that influence supply chain finance. However, studies showing interrelationships between these factors are lacking. This study is unique in the field as it applies total interpretive structural modeling for assessing the factors that affect supply chain finance. Our model will aid practitioners’ decision-making and the adoption of supply chain finance by providing a necessary framework.
Introduction
In 2008, the global economy witnessed a financial crisis affecting market liquidity and the credit deployment of financial institutions, thereby restricting the overall credit growth of the economy. Traditional loan portfolios declined, increasing firms' dependence on alternative sources of financing (Ivashina and Scharfstein, 2010). Moreover, the financial crisis aggravated the financial crunches faced by small and medium-sized enterprises (SMEs; Casey and O'Toole, 2014; Lee, Sameen, and Cowling, 2015). Supply chain finance (SCF) emerged to be of greater relevance after the financial crisis. Furthermore, the suitability of SCF for meeting the financing needs of SMEs, which usually face difficulties in raising required funds, contributed to the significance of SCF (Klapper, 2006; Marak and Pillai, 2019). Thus, SCF experienced considerable growth, with a compound annual growth rate (CAGR) of 5%, and experts expect it to experience a similar growth rate in the years to come (Sommer and O'Kelly, 2017). Supply chain finance outpaced the traditional trade finance market, occupying about half of the trade finance revenue pool. Furthermore, at a global level, scholars estimate that there are USD 2 trillion in financeable, highly secured payables (Herath, 2015) and a trade finance gap of USD 1.5 trillion (Asian Development Bank, 2017), thereby indicating a massive opportunity that lies ahead for SCF (BSR, 2018).
Supply chain management comprises physical, informational, and financial flows. Previous studies on supply chain management concentrated on the physical and informational flows in the chain (Lamoureux and Evans, 2011; Caniato et al., 2016). However, scholars realized that, to achieve effectiveness, efficiency, and competitiveness, all three flows were equally essential, so they sought to align the physical, informational, and financial flows, which greatly contributed to the significance of SCF and the ensuing studies.
Studies on SCF can be traced to the early 1970s. However, the concept of SCF lacked a formal definition until the beginning of the 2000s (see Pfohl and Gomm, 2009; Xu et al., 2018). Supply chain finance is an inter-organizational optimization and integration of financing processes to increase the value of all involved parties (Pfohl and Gomm, 2009). This includes financing and risk mitigation practices and techniques (Global Supply Chain Forum, n.d.). Supply chain finance is not limited only to working capital but may also include the financing of fixed assets. There are multiple definitions and resulting perspectives of these definitions, and Gelsomino et al. (2016) find three perspectives on SCF: supply-chain-oriented, finance-oriented, and buyer-driven-oriented. The notion itself is an umbrella term that gathers multiple instruments/solutions (see Chakuu, Masi, and Godsell, 2019; Marak and Pillai, 2019).
The main aim of SCF is to optimize inter-firm financial flows, ideally through solutions offered by financial and technology service providers (Hofmann, 2005; Camerinelli, 2009; Lamoureux and Evans, 2011). According to Wuttke et al. (2013a), SCF ultimately aims to align financial flows with physical and informational flows, thus improving cash flow from the supply chain perspective.
Supply chain finance is influenced by several factors, so understanding the critical ones among them (see Marak and Pillai, 2019) is vital to the success and efficient application of SCF. This article seeks to identify SCF's influential factors and their interrelationships with the use of interpretive structural modeling (ISM). This is one of the first articles to explore the relationships between several factors that affect SCF with ISM. In addition, this study also identifies the critical factors that managers and practitioners belonging to a firm, its supply chain partners (suppliers and buyers), and even service providers can improve when implementing SCF.
Literature Review
The literature highlights several factors that influence the implementation and success of SCF. Some of the most discussed factors are collaboration, the automation of trade processes, the digitalization of trade, trust, reputation, bargaining power, financial attractiveness, financing costs, information sharing, the availability of other external financing, the frequency and volume of transactions, and workforce (Marak and Pillai, 2019).
Collaboration
Supply chain finance involves collaborative means of improving the flow of funds, thus making collaboration a highly discussed factor in the literature (Blackman and Holland, 2006; Pfohl and Gomm, 2009; Hofmann and Belin, 2011; Popa, 2013; Wuttke et al., 2013b; Zhang, 2016; Protopappa-Sieke and Seifert, 2017). Collaboration is not only limited to inter-organizational activities but may also include interactions between departments of the same organization (Wandfluh et al., 2015; Caniato et al., 2016).
Information Sharing
Information sharing is crucial for supply chain effectiveness in general and SCF in particular. The concerned parties in SCF should make information available and share it with each other (Silvestro and Lustrato, 2014; Wandfluh et al., 2015; Jiang et al., 2016; Ding et al., 2017). Some of the important types of information to be shared in the supply chain are data on inventory levels, sales, sales forecasting, order status, production/delivery schedules, performance metrics, and capacity (Lee and Whang, 2000; Lotfi et al., 2013).
Trust
Trust is also crucial to the implementation of SCF, as two or more organizations are always involved in SCF. Moreover, trust should also accompany SCF instruments (Randall and Farris, 2009; Hofmann and Kotzab, 2010; Liebl et al., 2016; Martin, 2017). What contributes to building trust is honesty and benevolence (Martin, 2017). Moreover, Iacono et al. (2015), Liebl et al. (2016), Chen (2016), and Zheng and Zhang (2017) discuss the importance of reputation and track record/image in supply chain financing.
Bargaining Power
The bargaining power of one party over the other may influence the use of SCF, as it can affect the kind of SCF instruments used, e.g. their terms and conditions (Hofmann and Kotzab, 2010; Caniato et al., 2016; Liebl et al., 2016; Wuttke et al., 2016; Chen et al., 2017; Protopappa-Sieke and Seifert, 2017; Wuttke et al., 2019). It is the capability of an organization to have an effect on the actions and intentions of another organization (Maloni and Benton, 2000). Other authors, such as Martin (2017) and Wuttke et al. (2013b), also discuss this factor.

Financial Attractiveness

Caniato et al. (2016) use a multiple-case-based approach to posit that financial attractiveness influences the acceptance of SCF. The authors define financial attractiveness as the "attractiveness of the adopter as a potential market opportunity for a service provider" (Caniato et al., 2016, p. 541). This attractiveness may be due to the quality of receivables, as in factoring/receivables pledging (Sopranzetti, 1998; Soufani, 2002), or the saleability of inventories, as in inventory finance (Buzacott and Zhang, 2004; Li et al., 2011; Popa, 2013).
Financing Cost
Several studies argue that the cost of financing is crucial (Yan et al., 2014;Iacono et al., 2015;Babich and Kouvelis, 2018;Xiao and Zhang, 2018;Yu and Zhu, 2018). A firm and its supply chain partners would consider the financing cost while making decisions on raising funds through an SCF route. This cost of financing will refer to both SCF and non-SCF sources (Yan et al., 2014;Iacono et al., 2015;Babich and Kouvelis, 2018;Xiao and Zhang, 2018;Yu and Zhu, 2018).
Availability of Other External Financing
The availability of other external financing -i.e. non-SCF external financing -also influences the adoption of SCF. If financing options are wide for the firm, then SCF may seem unattractive (Martin, 2017;Chen and Kieschnick, 2018).
Frequency and Volume of Transactions
The use of SCF is also affected by the regularity and magnitude of transactions. Financial institutions and even supply chain partners may evaluate this frequency and volume of transactions before participating in SCF (Hofmann and Zumsteg, 2015;Iacono et al., 2015;Pellegrino et al., 2018).
Workforce
Moreover, SCF is influenced by the knowledge, skill, and expertise of the people in the organization and in the supply chain partnering organizations (Fairchild, 2005; Chen, 2016; Jiang et al., 2016). While studying the challenges in the adoption of SCF in the Indian context, More and Basu (2013) find human resource challenges to be one of the major barriers to the adoption of SCF. Table 1 offers a brief summary of these factors along with supporting sources.
Research Methodology
We identified the factors of SCF from a comprehensive literature review. Subsequently, we consulted experts from industry and academia about the contextual relationships among these factors.
Interpretive Structural Modeling
Interpretive structural modeling is a method that helps to generate solutions for complex problems through discourses based on structural mapping of interconnections of elements (Malone, 1975; Pfohl et al., 2011). ISM is a highly accepted and established method to uncover the relationships among elements that define a problem, and it is a modeling technique for examining the effect of one element on other elements (Agarwal et al., 2007; Attri et al., 2013; Al-Muftah et al., 2018). That is, ISM comprises nodes and links that depict a system's variables and the direction of the relationships among them. The result of ISM is a model that shows the contextual relationships among its elements (Baykasoglu and Golcuk, 2017). ISM is widely used to study causal and hierarchical relationships among diverse factors (see Table 2).
Table 2 lists exemplary ISM applications by area of study and source:
- Factors influencing supply chain agility (Agarwal et al., 2007)
- Factors to enhance the competitiveness of SMEs (Singh et al., 2007)
- Identification of supply chain risks and their relationships (Pfohl et al., 2011)
- Evaluation of critical success factors (CSF) for ERP (Baykasoğlu and Gölcük, 2017)
- Factors affecting e-diplomacy implementation (Al-Muftah et al., 2018)
- Antecedents to innovation through Big Open Linked Data (Dwivedi et al., 2017)
- CSF for traceability of food logistics systems (Shankar et al., 2018)
- Internal supply chain management benchmarking (Kailash et al., 2019)
Source: own elaboration.
Interpretive structural modeling follows a systematic methodology. The different steps involved in ISM -along with data analysis -are presented in Figure 1 and are explained in the following subsections:
Structural Self-Interaction Matrix (SSIM)
At this stage, we established the contextual relationship between the two elements (i and j). Moreover, we examined the direction of the causal flow using SSIM, for which we employed four symbols to denote the directional relationship between elements i and j: V -factor i will influence j; A -factor j will influence i; X -factors i and j influence each other; O -factors i and j do not influence each other.
Based on experts' opinions, we formed the SSIM as depicted in Table 3.
Reachability Matrix
The reachability matrix captures the pairwise relationships of factors obtained through the structural self-interaction matrix developed in Table 3. At this stage, we focused on the development of the initial reachability matrix from the SSIM, a binary matrix in which the entries V, A, X, and O are transformed into binary numbers 0 and 1 on the basis of the following rules (a code sketch follows the list below):
- If the (i, j) entry in the SSIM is V, then the (i, j) entry in the initial reachability matrix is converted to 1 and the (j, i) entry is converted to 0.
- If the (i, j) entry in the SSIM is A, then the (j, i) entry is converted to 1 and the (i, j) entry is converted to 0.
- If the (i, j) entry in the SSIM is X, then both the (i, j) and (j, i) entries are converted to 1.
- If the (i, j) entry in the SSIM is O, then both the (i, j) and (j, i) entries are converted to 0.
Table 4 shows the SSIM transformed into the initial reachability matrix.
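To make the conversion concrete, below is a minimal Python sketch of these four rules. The three-factor input is a hypothetical example; the study's actual SSIM entries are those in Table 3. Note that diagonal entries are conventionally set to 1, since every factor reaches itself.

```python
def initial_reachability(ssim):
    """Convert an SSIM (upper-triangular 'V'/'A'/'X'/'O' entries) into the
    binary initial reachability matrix, following the four ISM rules."""
    n = len(ssim)
    # Diagonal entries set to 1: every factor reaches itself.
    m = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            symbol = ssim[i][j]
            if symbol == 'V':            # i influences j
                m[i][j] = 1
            elif symbol == 'A':          # j influences i
                m[j][i] = 1
            elif symbol == 'X':          # mutual influence
                m[i][j] = m[j][i] = 1
            # 'O': no influence either way; both entries remain 0
    return m

# Hypothetical 3-factor example: factor 1 influences 2; 2 and 3 are mutual.
ssim = [[None, 'V', 'O'],
        [None, None, 'X'],
        [None, None, None]]
print(initial_reachability(ssim))  # [[1, 1, 0], [0, 1, 1], [0, 1, 1]]
```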
Final Reachability Matrix
The final reachability matrix was formed on the basis of the assumption of transitivity, the basic assumption of ISM: if element A is related to B and B is related to C, then A is related to C. In the final reachability matrix, we also calculated the driving power and the dependence power of each factor (see Table 5); a sketch of both computations follows below.
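Assuming a 0/1 matrix like the one above, the transitive closure is a Warshall-style pass, the driving power of a factor is its row sum, and its dependence power is its column sum:

```python
def final_reachability(m):
    """Apply transitivity (if i reaches k and k reaches j, then i reaches j)
    until closure, yielding the final reachability matrix."""
    n = len(m)
    r = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i][k] and r[k][j]:
                    r[i][j] = 1
    return r

def driving_and_dependence(r):
    """Driving power of factor i = sum of row i; dependence power of
    factor j = sum of column j."""
    driving = [sum(row) for row in r]
    dependence = [sum(col) for col in zip(*r)]
    return driving, dependence
```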
Canonical Matrix
The canonical matrix was prepared by grouping together variables at the same level, based on the outcome of the final reachability matrix (see Table 10); the level-partitioning step behind this grouping is sketched below.
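The standard ISM level-partitioning rule places a factor at the current top level when its reachability set (within the factors still unassigned) is contained in its antecedent set; those factors are then removed and the process repeats. A minimal sketch, assuming the closed reachability matrix from above:

```python
def partition_levels(r):
    """Return ISM levels as lists of factor indices; levels[0] is level I
    (most dependent), and the last list holds the base-level drivers."""
    n = len(r)
    remaining = set(range(n))
    levels = []
    while remaining:
        # A factor tops out when everything it reaches also reaches it back.
        current = [i for i in remaining
                   if {j for j in remaining if r[i][j]} <=
                      {j for j in remaining if r[j][i]}]
        levels.append(current)
        remaining -= set(current)
    return levels
```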
Developing the Diagram
Based on the canonical matrix and final reachability matrix, we formed the initial diagram of factors influencing SCF. The final diagram was developed after removing indirect links. Figure 2 shows the final ISM-based model of factors influencing SCF.
Information sharing (8) and workforce (11) occupy the lowest level of the hierarchy (level IV) in the ISM-based model, which shows them to be the most important factors driving all other factors. These two (8 and 11) reveal that the availability and sharing of information among the parties involved in SCF are crucial for its implementation. Equally important are the knowledge, skill, and expertise of the people in the organization and in partnering firms; without proper and capable human resources, the adoption and success of SCF will suffer. The automation of trade processes (2) and financial attractiveness (6) occupy level III in the hierarchy. The factors in the level II category are collaboration (1), trust (3), reputation (4), and the availability of other external financing (9). These factors are more dependent and have a lower driving power than the factors in the first two levels (i.e. levels IV and III). The level I factors, i.e. bargaining power (5), financing cost (7), and the frequency and volume of transactions (10), have maximum dependence and the least driving power compared to the factors at the former levels (i.e. levels IV, III, and II); they are highly dependent on the factors from the other levels.
MICMAC Analysis
Matrice d'impacts croisés multiplication appliquée à un classement (MICMAC), a cross-impact matrix multiplication applied to classification, is a structural prospective analysis used to study indirect relationships (Saxena et al., 1990). In this study, the MICMAC analysis was used to classify the factors that influence SCF based on driving and dependence power. The method served the purpose of validating the results of ISM by helping in the critical analysis of the scope of each element. For that purpose, we formed four groups: autonomous factors, dependent factors, linkage factors, and independent factors (Mandal and Deshmukh, 1994; Agarwal et al., 2007). Group I (Autonomous) contained the autonomous factors with weak driving and dependence powers. Group II (Dependent) included dependent elements with weak driving and strong dependence powers. Group III (Linkage) contained linkage factors with strong driving and dependence powers. Lastly, Group IV (Independent) contained driver factors with strong driving and weak dependence powers. Figure 3 shows the factors that influence SCF classified based on driving and dependence power. Bargaining power (5) emerged in Group II, meaning it was a dependent factor. The majority of variables appeared in Group III (Linkage), wherein both the dependence power and driving power are strong and the factors affect each other. No factor appeared in Group I, which consists of factors disconnected from the rest. Surprisingly, no factor appeared in Group IV either, which contains those high in driving power and low in dependence power. Even factors such as information sharing (8) and workforce (11), whose dependence was lower compared to the rest, were grouped as linkage factors. The results of the MICMAC analysis showed that all the factors have strong interrelationships with each other. A sketch of this quadrant classification follows below.
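For reference, here is a sketch of the quadrant assignment, assuming the driving and dependence powers computed earlier and using the midpoint of the number of factors as the weak/strong cut-off (a common convention; the paper's actual classification is plotted in Figure 3):

```python
def micmac_groups(driving, dependence):
    """Assign each factor to a MICMAC quadrant based on its powers."""
    n = len(driving)
    cut = n / 2  # midpoint cut-off between 'weak' and 'strong' power
    labels = {}
    for i in range(n):
        strong_drive = driving[i] > cut
        strong_dep = dependence[i] > cut
        if strong_drive and strong_dep:
            labels[i] = "III (linkage)"
        elif strong_dep:
            labels[i] = "II (dependent)"
        elif strong_drive:
            labels[i] = "IV (independent/driver)"
        else:
            labels[i] = "I (autonomous)"
    return labels
```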
Discussion and Conclusion
Supply chain finance has gained importance recently for reasons such as the earlier neglect of the financial flows of the supply chain and the post-financial-crisis reduction in bank loans suitable for servicing the financing needs of SMEs. Our study identified the factors from the literature and explored the relationships among these factors. For this, we sought expert opinions from both industry and academia, after which we developed the ISM-based model.
The ISM-based model showed the highest level (level I) in its hierarchy to be occupied by bargaining power, financing cost, and the frequency and volume of transactions. These variables have the lowest driving power and are highly dependent on other factors. Several authors report that these variables influence SCF (Wuttke et al., 2013a; Yan et al., 2014; Iacono et al., 2015; Caniato et al., 2016; Liebl et al., 2016). These factors depend on variables such as collaboration, trust, reputation, and the availability of other external financing (level II; Randall and Farris, 2009; Hofmann and Kotzab, 2010; Iacono et al., 2015; Liebl et al., 2016; Martin, 2017; Protopappa-Sieke and Seifert, 2017). Trust and collaboration between supply chain partners can lead to an increase in transactions, which can result in a reduction in costs and an improvement in financial performance. Although trust in a relationship can be affected by the degree of power between the parties (Farrell, 2004; Ando and Rhee, 2009), it may also influence power itself: when one party trusts the other, there may be no need to control the other party's behavior (Inkpen and Currall, 2004). Similarly, collaboration between supply chain partners can result in a decrease in the exercise of power and control by one of the partners (Simatupang, Wright, and Sridharan, 2004). A firm's reputation or goodwill may likewise have a bearing on transactions between the parties; studies show that reputation can offer a strategic advantage to the firm and lead to superior performance (Roberts, 2003). The availability of other sources of external financing influences the cost of financing through the SCF route. Thus, SCF will be attractive only if one of the SCF partners, e.g. the supplier, has access to external financing only at a higher cost (Martin, 2017). Generally, SCF should help in reducing the cost of financing as it may influence one or more dimensions of the supply chain (Pfohl and Gomm, 2009). Besides, access to external financing will also affect the frequency and volume of transactions between the trading partners: less access to other external financing by one of the trading partners is expected to lead to more dependence and hence more transactions between the parties (Martin, 2017). The level III variables, i.e. the automation of trade and financial attractiveness, influence the level II variables. The influence of these factors on SCF can be observed in the literature (Buzacott and Zhang, 2004; Blackman and Holland, 2006; Popa, 2013; Wuttke et al., 2013; Caniato et al., 2016). The automation of trade, meaning technology, can play a key role in improving trust and collaboration between the trading partners (Lee and Gao, 2005; Angerhofer and Angelides, 2006; Crook et al., 2008; Lee, Palekar, and Qualls, 2011; Hudnurkar, Jakhar, and Rathod, 2014). Having sound technology can help in establishing a good reputation among the supply chain partners. Moreover, it can help in reducing information asymmetry between them, thereby having a bearing on the availability of other external financing. Resources contribute to the financial attractiveness of a firm (Caniato et al., 2016). Previous studies show that resource sharing among supply chain partners enhances trust and collaboration (Ye and Zhang, 2010; Zhang and Huo, 2013; Hudnurkar et al., 2014). Firm resources can also help in securing a good reputation and in availing external financing (Gynther, 1969; Greyser, 1999).
The lowest level (level IV) variables are information sharing and workforce. These variables are considered to be strong drivers of SCF (Fairchild, 2005;Silvestro and Lustrato, 2014;Wandfluh et al., 2015;Chen, 2016;Jiang et al., 2016) as they influence other factors through the automation of trade and financial attractiveness. The recognition of the need or desire for a particular kind of information may lead to the choice of information technology. Information is vital in enhancing the financial attractiveness of a firm as non-transparency or the asymmetry of information can act as a hindrance in availing financing (Shinozaki, 2014). The knowledge, skill, and expertise of a firm -along with the partnering firm -will have a bearing on the automation or digitalization of trade. For the technology to be seamless, it should be properly integrated with the supply chain partners. Theories involving innovation adoption also discuss the significance of people in the implementation and diffusion of innovation at the organizational level (Davis, 1989;Rogers, 2010).
Finally, we performed the MICMAC analysis, which showed that none of the variables under scrutiny is autonomous while all the factors are highly interconnected with each other.
Managerial Implications
This study has implications for managers and owners of firms and partnering firms (mostly SMEs), who can concentrate on the influential factors, particularly information sharing, workforce, the automation of trade processes, and financial attractiveness. This will help them improve the implementation of SCF, increase its effectiveness, and ensure its maximum benefits. Managers should focus on information sharing within the organization and with supply chain partners. This information could pertain to sales data, sales forecasts, inventory level/policy, order status, production, or delivery schedules. To improve workforce knowledge and skill, managers could provide the necessary orientation, training, and development. Several parties could also take responsibility for providing such orientation, training, and development, e.g. associations and chambers to which a firm and its supply chain partners belong, larger organizations in the supply chain, government authorities or policy-makers that work on improving entrepreneurship and SMEs, and even financial/technology service providers. A firm and its supply chain partners should focus on the automation or digitalization of trade processes. Managers should concentrate on the compatibility, interoperability, and integration of digitalization across the supply chain to enhance effectiveness and efficiency in the facilitation of SCF. Financial attractiveness is important, particularly in the case of SCF mediated by a financial service provider. However, what can be of immense benefit for organizations, especially for SMEs, is understanding that e.g. the quality of receivables, purchase orders, and the saleability of inventories can serve as underlying assets (instead of other assets as collateral or pledging personal wealth) against which financing can be availed in SCF.
Furthermore, understanding the relationship between the SCF factors could help the firm and its supply chain partners in managing working capital from the collaborative supply chain perspective rather than from a single organization perspective. Firms and financial managers tend to maximize their financial gains at the cost of supply chain partners while approaching working capital from a single organizational perspective. As such, firms may resort to actions such as delaying payments, extending credit periods to their suppliers, minimizing credit periods to their customers, or aggressive recovery policies. These actions can increase the overall cost, risk, and disruption in a supply chain. Such behaviors of firms can be observed more apparently during economic downturns, e.g. the global financial crisis of 2008, when banks and financial institutions frequently decline loans. The goal of firms and financial managers should be to create a win-win situation for all the engaged parties, which SCF can create. An understanding of the factors and their interlinkages will also help the financial/technology services providers in improving the facilitation of SCF solutions/technology.
Study Limitations
The main limitation of our study is that the model and subsequent results are based on the opinions of experts. Although we sought to include experts from both industry and academia, the results are limited to their knowledge and perceptions. Thus, the findings may not be generalizable.
Future Research
This study may be further extended by collecting larger data sets using survey methods in the field or by building a model with structural equation modeling (SEM). Moreover, we suggest that future studies concentrate on such parameters as size (be it SMEs or large enterprises) or industry/sector. It would also be interesting to perform system dynamics modeling with these factors so as to understand their behavior and their effect on SCF.
RELATIONSHIP OF WORKING CAPITAL MANAGEMENT AND PROFITABILITY OF THE FIRMS-AN APPLICATION OF UNIT ROOT AND CO-INTEGRATION TEST ON THE VARIOUS CORPORATE SECTORS OF PAKISTAN STOCK EXCHANGE
The main purpose of this research is to find the impact and the long-run relationship of working capital and profitability in the major sectors of the Pakistan Stock Exchange. For this purpose, eight sectors with 95 listed companies were selected as representative of the practices of the Pakistani corporate world. ROA was used as the dependent variable, and CCC, CR, QR, WCT, ART, APD, ROCE, and DR were used to check the long-run relationship with firm performance. OLS is not appropriate due to the trend in the data, so this research used the unit root test and the panel co-integration test to find the long-run equilibrium relationship. This paper provides guidelines for corporate practitioners and academia to understand and focus on working capital to improve profitability in the organization. Findings revealed that different sectors have different working capital characteristics in the long-run equilibrium. This research intends to give future directions for researchers to develop theories of liquidity and working capital.
INTRODUCTION
Working capital management is a significant function of corporate finance. Liquidity and profitability are directly associated with working capital management, which administers the current assets and current liabilities of the corporation (Horne, 2008). Shapiro et al. (2019) also mentioned that the management of working capital is an important responsibility of the finance manager for multinational firms and their domestic counterparts; both are interested in managing current assets and current liabilities to maintain the profitability of the firm. Working capital management is crucial for financial health, and a formal process for it has an impact on corporations of every size (Padachi, 2006). Eljelly (2004) mentioned that effective working capital management has the capacity to manage current assets against current liabilities in such a way that the firm is always in a position to easily pay off uncertain obligations whenever they are urgently required. Horne and Wachowicz (2000) highlighted the significance of working capital management and recognized it as an essential tool of corporate finance.
Moreover, they claimed that it is very important for several reasons. For instance, manufacturing firms usually maintain current assets at least half of the firm's total assets, distribution companies maintain even more current assets compared to current liabilities, and a number of industries have various strategies to maintain financial performance. Eljelly (2004) also emphasized that effective working capital management deals with properly planning and controlling all matters concerning current assets and current liabilities in such a way as to handle all upcoming challenges of operational activities. Horne (2008) maintained that corporations with ample current assets perform much better compared to those firms unable to manage the administrative issues of current assets and current liabilities. Moreover, Kumar (2011) highlighted that working capital is a core responsibility and an essential part of investment decisions to face the challenges of short-term investment and short-term financing of the corporation. Therefore, the novelty of this current study is to find long-term relationships.
However, past researchers have developed a consensus that working capital management is not as simple as the theory suggests; it is a very complex decision from a short-term perspective. The prime goal of the finance manager is to maintain the profitability of the firm, although liquidity and profitability are the inverse of each other. Therefore, a firm focusing on increasing profitability in the short- and long-term perspective develops strategies regarding working capital management (Banos-Caballero et al., 2011; Lazaridis & Tryfonidis, 2006; Deloof, 2003; Shin & Soenen, 1998). Smith (1980) highlighted that working capital comprises the management of cash, inventory, receivables, and payables, and directly and indirectly influences the liquidity and profitability of the firm. Sharma and Kumar (2011) also highlighted that working capital is a core responsibility and an essential part of investment decisions to face the challenges of short-term investment and short-term financing of the corporation. Horne (2012) mentioned that working capital policy directly and indirectly impacts liquidity, risk, and profitability, and emphasized that there are three contemporaneous forms of working capital policy: conservative, trade-off, and hedging. These policies have substantial effects. For instance, a conservative policy maintains high liquidity, low risk, and low profitability of the firm. In the case of a trade-off policy, liquidity, risk, and profitability of the firm are average. Moreover, in the case of a hedging policy, the liquidity of the firm is reduced while risk and profitability increase.
Many researchers agree that working capital management, as a short-term function of corporate finance, is essential for all kinds of firms regardless of firm size, country, sector, or type of industry (Gill, Biger & Mathur, 2010; Lazaridis & Tryfonidis, 2006; Deloof, 2003; Shin & Soenen, 1998). However, past researchers argued that the corporate finance literature concentrated intensely on long-term decision criteria and ignored the impact of short-term decisions on long-term perspectives (Juan, 2007). This study focuses on a ground-breaking approach that merges operational and financial skills with an all-encircling view of the company's operations, which will help in discovering and executing strategies that create short-term cash. Furthermore, if the issue of liquidity and working capital can be tackled on a corporate-wide basis, it will benefit the firm. OLS is not possible due to the trend in the data; hence, in this study, the unit root test and panel co-integration test were used to find the long-run equilibrium relationship. The remainder of the paper consists of four major sections: Section 2 explores the literature review, Section 3 is related to methodology, Section 4 deals with research results and analysis, and Section 5 presents the conclusion. Sagan (1955) was the first researcher in this domain to examine the relationship between working capital and firm performance, arguing that working capital is necessary for all types of corporations to manage firm performance. In the contemporary world, it is necessary to evaluate this country by country, as the level of influence of working capital on firm performance has moderated with changing circumstances over time. Therefore, past researchers have held different opinions in different time periods and different parts of the world, and their findings provide a framework for conducting new research on working capital management.
LITERATURE REVIEW
Working capital management (WCM) is crucial as it fervently impacts corporate performance. WCM, an essential concept for differentiating liquidity and profitability among corporations, involves decisions about the combination of short-term financing and short-term resources. All aspects of working capital, which involve cash, marketable securities, and inventory management, play a significant role in any organization. Mudanya and Muturi (2018) described working capital as a tool that manages current assets and current liabilities.
Moreover, working capital management is an essential function of investment decisions that is properly implemented to increase shareholder worth. However, Igbekoyi (2017) demonstrated that working capital management is tied to profitability and puts a significant impact on the firm's performance. The objective of working capital is to retain reasonable cash flow under the normal and ideal conditions of a firm, which minimizes the risk of failing to meet immediate commitments. However, unnecessary investment should be taken into consideration in working capital management; minimizing working capital investment reduces the chances of risks that influence liquidity. Working capital management adversely impacts regular operations if the amount of working capital is insufficient, which makes it a useful risk-return example for the decision-maker. Samiloglu (2016) demonstrates that working capital management connects the control of payables, inventories, accounts receivable, and cash. It is a daily necessity for business: the firm requires a regular amount of cash for paying bills and accounts payable, covering unexpected costs, and purchasing materials on a daily basis. However, Edem (2017) believed that WCM is also captured by the cash conversion cycle, the length of time it takes to turn current assets and current liabilities into cash. A firm's goal is to reduce its working capital by collecting accounts receivable quickly and sometimes by stretching accounts payable. The importance of WCM is incontestable, whether its elements are managed as a whole or individually; this management is important for the organization to manage cash flow effectively and continue its operations. Prior studies concluded that company managers can augment profitability by limiting the cash conversion cycle, the receivables collection period, and the inventory conversion period. The outcomes also recommended that lengthening the payables deferral period might enlarge profitability, though managers should be careful, as lengthening the payables deferment period could harm the company's credit standing and damage its profitability in the long run. Nazir and Afza (2009) found that reducing working capital investment would positively influence companies' profitability by dropping the proportion of current assets in total assets. The majority of the studies in this area demonstrate that companies can improve their profitability by shortening the cash conversion cycle, as they found a strong negative relationship between these two variables. A variety of results have been obtained when it comes to the association between the diverse components of the cash conversion cycle and corporate profitability.
METHODOLOGY
The current study was quantitative in nature. We grouped thirteen major sectors of the Pakistan Stock Exchange into eight categories comprising 95 listed companies and calculated all relevant financial ratios related to working capital management, exclusively covering the liquidity parameter; a long-run variable was also included to measure the long-term effect on the profitability of listed companies for the period 2009 to 2018.
This research is the largest of its kind and the first in Pakistan in which the relationship and impact of liquidity and working capital management on profitability are tested using the unit root test and the panel co-integration test. The idea of co-integration arose out of concern about spurious or nonsense regressions in time series. Specifying a relation in terms of the levels of the economic variables, say y_t = α + β x_t + ε_t, often produces empirical results in which the R² is quite high but the Durbin-Watson statistic is quite low. This happens because economic time series are dominated by smooth, long-term trends; that is, the variables behave individually as non-stationary random walks. In a model that includes two such variables, it is possible to choose coefficients that make the error term appear to be stationary. But such an empirical result tells us little about the short-run relationship between y_t and x_t. In fact, if the two series are both I(1), then we will often reject the hypothesis of no relationship between them even when none exists. The following eight unique sectors of the Pakistan Stock Exchange were evaluated with the help of the unit root test and the co-integration test.

The Hypothesis of the Study
The hypothesis of this study is as follows:
• H1: There is no relationship between efficient working capital management and the profitability of Pakistani firms.
Defining Different Variables
To observe the impact and relationship of working capital management on the firm's profitability, return on assets (ROA) is used as a proxy for profitability. As far as the working capital variables are concerned, the current ratio, quick ratio, accounts receivable turnover, inventory turnover, accounts payable turnover, working capital turnover, and return on capital employed are used, and to measure long-term issues the debt ratio is used as a proxy.

Dependent Variable

Return on Assets (ROA)
Annual Earnings / Total Assets
ROA is an indicator of how profitable a company is relative to its total assets. It gives an idea of how efficient management is at using its assets to generate earnings, and it is calculated by dividing a company's annual earnings by its total assets.
Independent Variables

Current Ratio
Current Assets / Current Liabilities
The current ratio is a popular financial ratio used to test a company's liquidity. The concept behind this ratio is to ascertain whether a company's short-term assets (cash, cash equivalents, marketable securities, receivables, and inventory) are readily available to pay off its short-term liabilities (notes payable, current portion of term debt, payables, accrued expenses, and taxes). In theory, the higher the current ratio, the better.

Quick Ratio
(Current Assets - Inventories) / Current Liabilities
The quick ratio is an indicator of a company's short-term liquidity. It measures a company's ability to meet its short-term obligations with its most liquid assets. For this reason, the ratio excludes inventories from current assets.

Accounts Receivable (Age of Debtors) Turnover Ratio
Net Credit Sales / Average Accounts Receivable
The receivables turnover ratio is calculated by dividing the net value of credit sales during a given period by the average accounts receivable during the same period. Average accounts receivable can be calculated by adding the value of accounts receivable at the beginning of the desired period to their value at the end of the period and dividing the sum by two.

Inventory Turnover
Cost of Goods Sold / Average Inventory
Inventory turnover is a ratio showing how many times a company's inventory is sold and replaced over a period. The days in the period can then be divided by the inventory turnover to calculate the days it takes to sell the inventory on hand, or "inventory turnover days."

Accounts Payable Turnover
Total Supplier Purchases / Average Accounts Payable
The accounts payable turnover ratio is a short-term liquidity measure used to quantify the rate at which a company pays off its suppliers. It is calculated by taking the total purchases made from suppliers and dividing by the average accounts payable amount during the same period.

Working Capital Turnover
Sales / Working Capital
This measurement compares the depletion of working capital to the generation of sales over a given period. It provides useful information as to how effectively a company is using its working capital to generate sales.

Cash Conversion Cycle (CCC)
Inventory Turnover in Days + Accounts Receivable Turnover in Days - Accounts Payable Turnover in Days
The CCC combines inventory turnover in days and accounts receivable turnover in days and deducts accounts payable turnover in days. The purpose of the CCC is to assess the efficiency of the daily operations and management of the firm.

Debt Ratio
Total Debt / Total Assets
The debt ratio is an important financial ratio used to measure the financial leverage of the firm.
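Taken together, these definitions map directly onto code. Below is a minimal sketch of how the study's variables can be computed from one firm-year of statement data; the field names are illustrative placeholders, not the study's dataset schema:

```python
def working_capital_ratios(f):
    """Compute the study's variables from a dict `f` of statement items."""
    days = 365
    receivable_days = days * f["avg_receivables"] / f["net_credit_sales"]
    inventory_days = days * f["avg_inventory"] / f["cogs"]
    payable_days = days * f["avg_payables"] / f["supplier_purchases"]
    return {
        "ROA": f["net_income"] / f["total_assets"],
        "CR": f["current_assets"] / f["current_liabilities"],
        "QR": (f["current_assets"] - f["inventory"]) / f["current_liabilities"],
        "ART_days": receivable_days,
        "ITO_days": inventory_days,
        "APD_days": payable_days,
        # CCC: inventory days plus receivable days minus payable days
        "CCC": inventory_days + receivable_days - payable_days,
        "WCT": f["sales"] / (f["current_assets"] - f["current_liabilities"]),
        "DR": f["total_debt"] / f["total_assets"],
    }
```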
The Panel Unit Root Test Findings

The analysis starts by testing the stationarity of all variables in the study to determine the order of integration of the 9 variables. This research applies the Phillips-Perron test based on equation (1), with a maximum lag length of 1. The Phillips-Perron unit root test evaluates the null hypothesis that each time series contains a unit root against the alternative hypothesis that each time series is stationary; a code sketch of the classification procedure follows below. The tables in Appendix 1 indicate, for all eight sectors, that the data are non-stationary at the level, with trends present in the data. After taking the first difference, the data become stationary. When data are non-stationary at the level and become stationary at first difference, the results of the ordinary least squares method become spurious. Therefore, the long-run equilibrium relations among the variables are drawn using the Johansen co-integration test. The condition of the Johansen co-integration test is that all the variables are integrated of the same order.
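As an illustration of the procedure, the sketch below classifies a single series as I(0) or I(1) using the Phillips-Perron test; it assumes the third-party `arch` package is available (the study itself reports the panel-level results in Appendix 1):

```python
import numpy as np
from arch.unitroot import PhillipsPerron  # assumes the `arch` package

def integration_order(series, alpha=0.05):
    """Return 0 if the series is stationary at level, 1 if stationary at
    first difference, else None (higher orders are not handled here)."""
    if PhillipsPerron(series, lags=1).pvalue < alpha:
        return 0
    if PhillipsPerron(np.diff(series), lags=1).pvalue < alpha:
        return 1
    return None
```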
The Panel Co-Integration Test Results

The results indicate rejection of the null hypothesis of no co-integration between the variables in the two models. There is a long-run relationship between profitability, working capital management, the size of the firm, and the debt ratio. There is also a long-run relationship between profitability, liquidity, and the size of the firm, where these variables move together in the long run; a sketch of the rank determination follows below. In this section, the eight sectors of Pakistan are discussed one by one.
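The number of co-integrating equations per sector can be counted with the Johansen trace test; a minimal sketch assuming statsmodels, where `data` holds one sector's I(1) variables as columns:

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_rank(data, det_order=0, k_ar_diff=1):
    """Count co-integrating relations: compare each trace statistic in
    res.lr1 against the 5% critical value (column 1 of res.cvt)."""
    res = coint_johansen(data, det_order, k_ar_diff)
    rank = 0
    for trace_stat, crit_95 in zip(res.lr1, res.cvt[:, 1]):
        if trace_stat > crit_95:
            rank += 1
        else:
            break
    return rank
```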
[Insert Exhibit 1 here]
Exhibit 1 presents the automobile sector: the co-integration test shows at most one co-integrating equation in the system, and the results relate liquidity and working capital to profitability. CCC, inventory turnover, accounts payable turnover, ROCE, and the debt ratio are positively related to return on assets, which means credit sales play an important role in this sector. When credit sales increase, accounts receivable and inventory turnover similarly increase, and delays in accounts payable turnover improve the CCC; the CCC in turn improves the return on assets of the company, although the current ratio is insignificant in this sector in the long-run equilibrium. Working capital turnover and the quick ratio are negatively related to ROA. These are guidelines for this sector to manage and concentrate on liquidity and working capital issues to improve its profitability position; moreover, this situation may well differ from other sectors of Pakistan.
[Insert Exhibit 2 here]
Exhibit 2 shows that the co-integration test finds at most one co-integrating equation in the system. Liquidity and working capital variables are negatively related to return on assets in the long-run equilibrium, while ROCE, the debt ratio, and accounts payable turnover are positively related. This is a striking finding for the cable and electric goods sector of Pakistan, highlighting sector differences. In short, the findings reveal that the automobile sector of Pakistan discussed earlier differs considerably from the cable and electric goods sector.
[Insert Exhibit 3 about here]
Exhibit 3 shows that the co-integration test finds at most one co-integrating equation in the system. Table 5 shows the position of the cement sector of Pakistan, which consists of 22 listed companies. This is a very strong manufacturing sector that contributes to the industrial share of the country's GDP. Table 5 shows the position of liquidity and working capital relative to profitability; interestingly, the findings reveal that all the variables concerned are significant with respect to ROA. Table 5 also explains the long-run relationship with profitability.
Table 5 explains that liquidity and working capital management are negatively related to profitability in this sector, except for CCC and ROCE, which are positively related to ROA. Experts should concentrate on this issue, where liquidity is negatively related to ROA but CCC is positively associated with it.
Food & Personal Care
[Insert Exhibit 4 here]
Interestingly, accounts payable turnover and the debt ratio also have a negative relation with ROA in the food and personal care sector of Pakistan in the long-run equilibrium. Hence, firms in this sector should be careful when designing their capital structure from a long-run perspective.
Glass & Ceramics and Paper & board
[Insert Exhibit 5 about here]

Exhibit 5 shows that the co-integration test finds at most one co-integrating equation in the system. Important indicators of liquidity have a positive long-run equilibrium relation with ROA. The variables concerned affect ROA, except for the quick ratio, working capital turnover, and ROCE, which are insignificant with respect to ROA. The findings reveal that managing working capital could be beneficial from a long-term perspective for designing the optimal capital structure. This sector is insignificant as far as ROCE in long-term equilibrium is concerned, although inventory turnover in days and CCC are negatively related to ROA in the long-run equilibrium. Therefore, the policymakers of these sectors must concentrate on these results and prepare policy accordingly.
Oil and Gas sector of Pakistan
[Insert Exhibit 6 about here] Exhibit 6, after conducting the co-integration test, shows at most one co-integrating equation in the system. Table 8 presents the true picture of working capital and profitability in the oil and gas sector of Pakistan, where all the variables concerned are significant with respect to ROA, showing that all of them impact ROA. However, liquidity and working capital are negatively related to return on assets: as liquidity reduces, profitability increases, and as working capital reduces, profitability also increases in the long-run equilibrium. The debt ratio is positively related to ROA, which indicates that as debt increases in the oil and gas sector, profitability will definitely increase in the long-run equilibrium.
Technology and Communication
[Insert Exhibit 7 about here]
Textile
[Insert Exhibit 8 about here] Exhibit 8, after conducting the co-integration test, shows at most one co-integrating equation in the system. The cash conversion cycle, accounts receivable turnover, inventory turnover, and ROCE are insignificant in the long-run equilibrium. This is the first of the available sectors where most of the variables are insignificant in the long-run equilibrium. The textile sector is one of the largest sectors among the Pakistani listed companies of the PSX-100 index. Due to several economic policy changes and the government's adoption of a new model, the textile sector of Pakistan has been severely affected; the research findings are close to the reality of Pakistan. Therefore, all decision-makers from the textile sector are advised to focus on abrupt sectoral changes due to several prevailing issues of Pakistan's economy. Further operational research is required to find the reasons for the insignificance of most variables in this sector.
CONCLUSION
The purpose of this large and comprehensive research is to find the relationships of working capital management with firm performance across multiple sectors in Pakistan. This research was designed with the goal of investigating the relationship between working capital and the firm's profitability; after empirical analysis with the help of the unit root and panel co-integration tests, many important findings revealed that working capital is significant for profitability in long-term equilibrium. In the first step, the unit root test was applied to check the data trend for all variables. This study found that all the financial-ratio variables were stationary at first difference; thus, 9 variables were included in the panel co-integration test. Moreover, the panel co-integration test discovered that there is a long-run relationship between the variables.
The findings, after broad and meticulous inspection, concluded that liquidity and working capital perform a very important role in improving profitability in the long run. The novelty of this current study is that it provides guidelines for managers of the corporate world who are involved in managing working capital activities. Moreover, the findings are worthwhile for academia to focus on and understand the efficiency and effectiveness of liquidity and working capital issues in the company. Stakeholders of the company are also interested in the proper operational activities of their corporations to maintain good yearly performance; these are the operating activities that can prolong corporate success. Working capital affects profitability, and this can improve the wealth of shareholders. In other words, properly managed liquidity and working capital improve the value of the firm and the worth of shareholders. This research covered a variety of sectors of Pakistan, in some of which working capital and profitability are positively related in the long-run equilibrium and in others negatively related, depending on the sector's operational activities. These findings, however, run counter to previous research in Pakistan that directly claimed that liquidity and profitability can simply be traded off against each other. In this study of eight major sectors of Pakistan, the situations according to the findings are comparatively different from each other.
However, despite being very careful with all the concerned aspects of the research, it is always likely that gaps and fissures remain; these gaps could serve as foundations for further research in this area. It is recommended that in the future, new researchers work on liquidity and profitability sector-wise and prove their findings based on the unit root test, panel co-integration, vector analysis, and the Granger causality test, and avoid OLS. The findings suggest that OLS in Pakistan can never predict the correct response because the data have a trend. Further research should build on this paper to develop sector-wise theories for the betterment of liquidity, working capital, and profitability.
MBSE Testbed for Rapid, Cost-Effective Prototyping and Evaluation of System Modeling Approaches
Model-based systems engineering (MBSE) has made significant strides in the last decade and is now beginning to increase coverage of the system life cycle, in the process generating many more digital artifacts. The MBSE community today recognizes the need for a flexible framework to efficiently organize, access, and manage MBSE artifacts; create and use digital twins for verification and validation; facilitate comparative evaluation of system models and algorithms; and assess system performance. This paper presents progress to date in developing a MBSE experimentation testbed that addresses these requirements. The current testbed comprises several components, including a scenario builder, a smart dashboard, a repository of system models and scenarios, connectors, optimization and learning algorithms, and simulation engines, all connected to a private cloud. The testbed has been successfully employed in developing an aircraft perimeter security system and an adaptive planning and decision-making system for autonomous vehicles. The testbed supports experimentation with simulated and physical sensors and with digital twins for verifying system behavior. A simulation-driven smart dashboard is used to visualize and conduct comparative evaluation of autonomous and human-in-the-loop control concepts and architectures. Key findings and lessons learned are presented along with a discussion of future directions.
Introduction
Model-based systems engineering (MBSE) is making impressive strides both in increasing systems life-cycle coverage [1] and in the ability to model increasingly more complex systems [2,3]. Recently, MBSE has begun to employ the digital-twin concept [4] from digital engineering (DE) to enhance system verification and validation. Not surprisingly, these developments are producing increasingly more MBSE artifacts [5] that need to be organized, metadata-tagged, and managed to facilitate rapid development, integration, and "test drive" of system models in simulation in support of what-if experimentation.
Currently, MBSE researchers work with specific models and simulations to address a particular problem, thereby producing mostly point solutions. Furthermore, they seldom document assumptions and lessons learned. This practice implies that most MBSE researchers are largely starting without the benefit of the knowledge gained by others. Fortunately, MBSE is a methodology-neutral construct that allows researchers to pursue different modeling approaches, experiment with different algorithms, and learn from such experiences. Most recently, in an attempt to make modeling more rigorous, MBSE researchers are turning to formal ontologies to underpin their system models and facilitate assessment of model completeness, semantic consistency, syntactic correctness, and traceability. In light of these deficiencies and emerging trends and opportunities, this paper introduces the concept of a MBSE testbed to organize and manage MBSE artifacts, support MBSE experimentation with different models, algorithms, and data, and catalogue the case studies and lessons learned. It presents a prototype implementation of the testbed and demonstrates the capabilities of the testbed for real-world operational scenarios, along with findings and lessons learned. It also presents an illustrative quantitative analysis of results produced through simulation-based experimentation.
Materials and Methods
The testbed encompasses the following: a software programming environment; multiple modeling methods; analysis and optimization algorithms; repositories of packages and libraries; simulation environments; and hardware components and connectors. The following list presents the core components of the testbed.

Software Programming Environment

DroneKit, an open-source platform, is used to create apps, models, and algorithms that run on onboard computers installed on quadcopters. This platform provides various Python APIs that allow for experimenting with simulated quadcopters and drones. The code can be accessed on GitHub [9].
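As an illustration of the kind of experiment this environment supports, the sketch below starts a simulated vehicle (software-in-the-loop, SITL) and commands a guided takeoff. It assumes the `dronekit` and `dronekit-sitl` packages and an illustrative 10 m target altitude; it is not the testbed's actual mission code.

```python
import time
import dronekit_sitl
from dronekit import connect, VehicleMode

sitl = dronekit_sitl.start_default()                 # launch a simulated copter
vehicle = connect(sitl.connection_string(), wait_ready=True)

vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:                             # arming is asynchronous
    time.sleep(1)

vehicle.simple_takeoff(10)                           # target altitude in meters

# ... run the experiment, log telemetry, etc. ...

vehicle.close()
sitl.stop()
```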
Results
This section describes the research objectives, prototype implementation, and experimental results, along with key findings and implications.
Research Objectives
The key research objectives are to:
• Develop a structured framework for cost-effective and rapid prototyping and experimentation with different models, algorithms, and operational scenarios.
• Develop an integrated hardware-software environment to support on-demand demonstrations and facilitate technology transition to customer environments.

The first objective encompasses the following:
1. Defining the key modeling formalisms that enable flexible modeling based on operational environment characteristics and knowledge of the system state space.
2. Defining a flexible and customizable user interface that enables scenario building by nonprogrammers, visualization of simulation execution from multiple perspectives, and tailored report generation.
3. Defining operational scenarios encompassing both nominal and extreme cases (i.e., edge cases) that challenge the capabilities of the system of interest (SoI).

The second objective encompasses the following:
4. Identifying low-cost components and connectors for realizing capabilities of the SoI.
5. Defining an ontology-enabled integration capability to assure correctness of the integrated system.
6. Testing the integrated capability using an illustrative scenario of interest to the systems engineering community.

These objectives are satisfied through the realization of a MBSE testbed specifically designed for rapid prototyping, what-if experimentation, data collection, and analysis.
MBSE Testbed Concept
The MBSE testbed concept is broader than that of conventional hardware-in-the-loop (HWIL) testbeds. HWIL testbeds are used for early integration and testing of physical systems as well as formal verification and validation (V&V). Typical HWIL testbeds consist of hardware, software modules, and simulations in which system components are either physically present or simulated. Simulated components are progressively replaced by physical components as they become available [10]. The MBSE testbed construct extends HWIL capabilities by including the means for developing and exercising abstract models independently or interoperating with HWIL [11-15]. Just as importantly, the MBSE testbed construct provides a modeling, simulation, and integration environment for developing and evaluating digital twins [5]. The envisioned capabilities of the MBSE testbed include the ability to represent models at multiple scales and from different perspectives. The testbed offers a means to improve understanding of functional requirements and operational behavior of the system in a simulated environment. It provides measurements from which quantitative characteristics of the system can be derived. It provides an implementation laboratory environment in which modeled real-world systems (i.e., digital twins) can be evaluated from different perspectives.
A testbed, in essence, comprises three components: an experimentation subsystem; a monitoring and measurement subsystem; and a simulation-stimulation subsystem. The experimentation subsystem comprises real-world system components and prototypes which are the subject of experimentation. The monitoring and measurement subsystem comprises interfaces to the experimentation subsystem to extract raw data and a support component to collate and analyze the collected information. The simulation-stimulation subsystem provides the hooks and handles to experiment with real-world system agents and outputs to ensure a realistic experimentation environment.
However, testbeds can have limitations. For example, they can cost too much, and they are limited to modeling systems and components that satisfy the testbed environment constraints. In addition, for some problems, analytic and/or simulation models may be more appropriate; this would be the case for complex distributed systems. Therefore, a testbed is best viewed as a flexible modeling platform that complements or subsumes simulation and analytic methods.
Figure 1 presents the MBSE testbed concept. The testbed comprises a user interface that supports scenario authoring, dashboard capabilities for scenario execution monitoring, visualization, and control, and report generation; a suite of modeling and analysis tools including system modelers, machine learning, and data analytics algorithms; simulation engines for discrete event simulation, hybrid simulation, and component simulation; and repositories of operational scenario vignettes, system models, component libraries, and experimentation results.
The testbed supports system conceptualization, realization, and assurance. System conceptualization comprises use case development; requirement elicitation, decomposition, and allocation; the system concept of operations (CONOPS); logical architectures; metrics; initial system models; and initial validation concepts. System realization entails detailed design, physical and simulation development, and integration and test. System assurance comprises evaluating system safety, security, and mission assurance.
Complex systems are invariably a combination of third-party components and legacy components from previously deployed systems. As such, some components tend to be fully verified under certain operating conditions that may or may not apply in their reuse. Furthermore, complex systems are subject to unreliable interactions (e.g., sporadic or incorrect sensor inputs, and control commands that are not always precisely followed) because they interact frequently with the physical world. Finally, with increasing connectedness, they are increasingly susceptible to security threats. In light of the foregoing, the MBSE testbed needs to provide:
• Inheritance evaluation, in which legacy and third-party components are subjected to the usage and environmental conditions of the new system.
• Probabilistic learning models, which begin with incomplete system representations and progressively fill in details and gaps with incoming data from collection assets; the latter enable learning and filling in gaps in the knowledge of system and environment states.
• Networked control, which requires reliable execution and communication that enables satisfaction of hard time deadlines [5,17] across a network. Because networked control is susceptible to multiple points of cyber vulnerability, the testbed infrastructure should incorporate cybersecurity and cyber-resilience.
• Enforceable properties, which define core attributes of a system that must remain immutable in the presence of dynamic and potentially unpredictable environments. The testbed must support verification that these properties are dependable regardless of external conditions and changes.
• Commercial off-the-shelf (COTS) components, which typically communicate with each other across multiple networks and time scales [5,18]; the latter requires validation of interoperability among COTS systems.
• Support for safety-critical systems in the form of, for example, executable, real-time system models that detect safety problems and then shut down the simulation, after which the testbed can be queried to determine what happened.
Logical Architecture of MBSE Testbed
Incorporating the capabilities described in Section III into the testbed is being performed in stages. Figure 2 presents the initial logical configuration of the testbed.
As shown in Figure 2, the testbed prototype comprises: (a) a user interface for scenario definition and system modeling as well as for the dashboard used for monitoring, visualization, and controlling scenario execution; (b) models, created by the systems engineer or researcher, that reflect an envisioned or existing system and are stored in the repository; (c) a multiscenario-capable simulation engine that dynamically responds to injects from the user interface and collects experiment results that are sent to the repository and user interface; (d) experiment scenarios stored in the repository or entered from the GUI; and (e) a private cloud that provides testbed connectivity and protects MBSE assets. The prototype testbed implementation supports virtual, physical, and hybrid simulations. It supports virtual system modeling and interoperability with the physical system. It is able to access data (initially manually and eventually autonomously) from the physical system to update the virtual system model, thereby making it into a digital twin of the physical system. The testbed supports proper switching from the physical system to the digital twin and vice versa using the same control software.
The prototype testbed currently offers the following capabilities.

System Modeling and Verification

The testbed offers both deterministic-modeling and probabilistic-modeling capabilities. In particular, it offers SysML modeling for deterministic systems and partially observable Markov decision process (POMDP) modeling for probabilistic systems. Exemplar models of both types are provided in an online "starter kit" to allow users to make a copy before commencing the system modeling activity. Verification in this context pertains to ascertaining model correctness (i.e., model completeness with respect to questions that need to be answered, semantic and syntactic consistency, and model traceability to requirements).
Rapid Scenario Authoring
Eclipse Papyrus is used along with SysML and the Unity 3D virtual environment for scenario authoring and definition of entity behaviors. The testbed offers scenario authoring and visualization for multiple domains. For example, Figures 3 and 4 show the results of autonomous vehicle scenario authoring, and Figure 5 shows visualizations for the aircraft perimeter security scenario. These exemplar scenarios are used in experimenting with planning and decision-making models and algorithms. The initial scenario contexts are defined in SysML (Figure 6), with Python XMI being used to extract data from the SysML model to populate the Unity virtual environment. The behaviors of scenario entities such as autonomous vehicles, pedestrians, UAVs, and surveillance cameras are defined in Unity. The UAV, pedestrian, and vehicle behaviors are defined using standard waypoint-following algorithms in Unity. The planning and decision-making algorithms are exercised and tested with both autonomous vehicle navigation and control operations and multi-UAV operations in the aircraft perimeter security mission. Figure 3a presents a visualization of a pedestrian crossing scenario, while Figure 3b presents a visualization of a four-way stop sign scenario. Similarly, Figure 4a presents a visualization of a vehicle crossing scenario, while Figure 4b presents a visualization of a vehicle braking scenario. Figure 5a depicts the aircraft perimeter security scenario with one UAV and one video camera conducting surveillance; camera views are shown in the bottom right corner of the figure. Figure 5b presents the aircraft perimeter security scenario with three UAVs. Eclipse Papyrus for SysML, Python XMI, and Unity 3D are used to rapidly author the scenarios with various agents. The scenarios have static agents, perception agents, dynamic auxiliary agents, and system-of-interest (SoI) agents. Static agents such as standing aircraft, traffic signs, and buildings are part of the scenario context. Perception agents, such as cameras that capture simulation environment data, are used for processing. Dynamic auxiliary agents, such as pedestrians, auxiliary cars, and auxiliary UAVs, follow predefined behavior in experiments. The SoI agents, such as the autonomous car or UAV, are used to test different algorithms defined by the experimenters.
Figure 6 presents a SysML representation of the scenario context for the aircraft perimeter security example. Included in the context are the UAV, airstrip, building, surveillance camera, and aircraft. Attributes and operations of each of those entities are defined in specific blocks.
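To make the SysML-to-Unity data flow concrete, the following sketch shows one plausible way block names could be pulled from a Papyrus XMI export before asset population; the file name, namespace URI, and the uml:Class discriminator are illustrative assumptions rather than the testbed's actual implementation.

```python
# Minimal sketch of the SysML-to-Unity extraction step, assuming the SysML
# scenario context is exported as XMI and that block names alone drive asset
# lookup. The XMI namespace URI and file name are assumptions; real Papyrus
# exports may differ.
import xml.etree.ElementTree as ET

XMI_TYPE = "{http://www.omg.org/spec/XMI/20131001}type"  # assumed namespace

def extract_block_names(xmi_path: str) -> list:
    """Return the names of SysML blocks found in an XMI export."""
    tree = ET.parse(xmi_path)
    blocks = []
    for elem in tree.iter("packagedElement"):
        # SysML blocks are serialized as UML classes with a Block stereotype;
        # here we approximate by taking every named uml:Class element.
        if elem.get(XMI_TYPE) == "uml:Class" and elem.get("name"):
            blocks.append(elem.get("name"))
    return blocks

if __name__ == "__main__":
    for name in extract_block_names("scenario_context.uml"):  # hypothetical file
        print(name)  # e.g., UAV, Airstrip, SurveillanceCamera, Aircraft
```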
Model and Scenario Refinement
The experiment/test scenarios allow rapid and straightforward substitution of refined models for coarse models. In addition, hardware components can be substituted for virtual models. In Unity 3D, it is possible to extract various relevant properties of scenario objects such as velocities, locations, and states. Entity behaviors are assigned to objects using Unity 3D scripts written in C#. This capability affords greater flexibility in experimentation. A Python interface is used for testing various machine-learning (ML) algorithms.
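As a hedged illustration of the model-substitution idea, the sketch below shows two hypothetical vehicle models sharing a common update interface, so a refined model (or a hardware-in-the-loop proxy) can replace a coarse one without changing the experiment code; the model classes and parameter values are invented for illustration.

```python
# Illustrative sketch of coarse-to-refined model substitution. Experiment
# code depends only on a common update(dt) interface, so a refined model
# can be swapped in with no other changes.
class CoarseVehicleModel:
    """Constant-velocity kinematics (coarse)."""
    def __init__(self, x=0.0, v=10.0):
        self.x, self.v = x, v
    def update(self, dt):
        self.x += self.v * dt

class RefinedVehicleModel:
    """Adds a simple drag term so velocity decays over time (refined)."""
    def __init__(self, x=0.0, v=10.0, drag=0.05):
        self.x, self.v, self.drag = x, v, drag
    def update(self, dt):
        self.v *= (1.0 - self.drag * dt)
        self.x += self.v * dt

def run(model, steps=100, dt=0.1):
    """Advance any model satisfying the update(dt) interface."""
    for _ in range(steps):
        model.update(dt)
    return model.x

print(run(CoarseVehicleModel()), run(RefinedVehicleModel()))
```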
MBSE Repository
The testbed repository contains libraries of scenarios, scenario objects, 3D objects, behaviors, and system/component models. For example, 3D objects such as UAV models, ground vehicle models, pedestrian models, and camera models are part of the scenario object repository. The model repository comprises system models and behaviors that can be associated with various objects in the scenario.
Experimentation Support
The testbed's virtual environment supports the collection and storage of data from experiments. For example, variables such as distances between vehicles, velocities, and decisions made by autonomous vehicles in various scenarios can be captured and stored for post hoc analysis. Data collected during experimentation can be used by machine-learning algorithms to train models. The MBSE testbed provides access to the properties of scenario objects. For example, velocity, size, shape, and location of static objects and auxiliary agents are directly extracted from the virtual environment. This capability enables the creation of feedback loops and facilitates the definition of abstract perception systems of autonomous vehicles or SoI agents. C# scripts are used to extract, process, and transfer data to other components of the dashboard. The virtual environment allows for manual control of objects, thereby affording additional flexibility in experimentation and testing. Multiple human users are able to interact with virtual objects, thereby realizing complex behaviors during experimentation.
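The following sketch illustrates what the post hoc analysis side of this logging pipeline might look like, assuming per-step records have been exported as CSV; the column names and the brake-count statistic are assumptions, not the testbed's actual schema.

```python
# Illustrative sketch of post hoc analysis of one experiment log, assuming
# per-step records (inter-vehicle distance, decision) were exported as CSV.
# Column names ("distance_m", "decision") are assumptions for illustration.
import csv

def summarize_run(path: str) -> dict:
    """Compute simple safety statistics from one experiment log."""
    min_gap, braking_events, n = float("inf"), 0, 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gap = float(row["distance_m"])
            min_gap = min(min_gap, gap)        # closest approach in the run
            if row["decision"] == "brake":
                braking_events += 1
            n += 1
    return {"steps": n, "min_gap_m": min_gap, "braking_events": braking_events}
```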
Multiperspective Visualization
The virtual environment offers visualization of system behaviors during simulation in intuitive ways. Exemplar visualizations for self-driving cars and multi-UAV operations are presented in Figures 7-9. Figure 7a,b show the change in the state of the car from the "safe" state (blue clouds surrounding the cars) to the "about-to-collide" state (red clouds surrounding the cars). Figure 9 presents examples of visualizations for multi-UAV operations; Figure 9a,b show the change in the state of the UAV from the "safe" state (blue cloud surrounding the UAV) to the "unsafe" state (red cloud surrounding the UAV). Virtual environments offer a convenient means for stakeholders to contribute to what-if experimentation. Figure 10 shows another perspective in which UAV trajectories can be visualized during experimentation with planning and decision-making algorithms. These visualization assets and respective scripts are stored in the repository. The experimenter can drag and drop an asset on a scenario object and integrate it with the experiment. The assets have a user interface to customize the visualization parameters.
Implementation Architecture
Figure 11 provides the implementation architecture of the testbed. As shown in the figure, a private cloud interfaces with virtual simulation, physical component simulation, the modeling tool, ontology representation, the user interface, and the analysis tool.
A ground-vehicle obstacle-avoidance scenario comprising multiple models was developed to demonstrate the utility and use of the testbed. This relatively simple scenario is used to demonstrate the integration of testbed components needed for model validation.
In this simple scenario, an autonomous vehicle has to drive safely behind another vehicle (which is viewed as a potential obstacle from a modeling perspective). The obstacle-avoidance scenario is represented by a state machine diagram in SysML (Figure 12). In this example, we define a distance of three meters as the safe distance between the two vehicles. Figure 12 shows that no action is taken when the vehicles are at least three meters apart; a transition occurs to the ActionState when the gap is less than three meters. The SysML model is mapped to the 3D virtual environment. For this mapping, a Python XMI translation tool automatically populates the asset container in Unity 3D from SysML models. Objects in Unity 3D are stored in an asset container. Figure 13 presents the architecture of the SysML to 3D virtual environment translation. As shown in Figure 13, car, road, and obstacle-car blocks are extracted from the scenario context SysML model. These block names are then matched with existing 3D objects in the repository. When a match is found, the object is duplicated from the repository to the asset container of the 3D virtual environment. When a matching 3D object is not present in the repository, the translation program creates an empty object in the asset container. For example, in Figure 13, because the obstacle-car object is not present in the 3D repository, an empty object is created in the asset container. Users can further model the 3D object as required; here, the car object is duplicated to create the obstacle-car object. The user can employ the asset container to populate objects in the Unity 3D simulation. In case of a change to the SysML model, a refresh function triggers the translation program to update the asset container. A user can further customize 3D objects in the virtual environment and then populate additional objects and behaviors from the repository. Waypoint-following behavior is assigned to the front vehicle in Unity 3D. A set of points and respective velocities to be followed by the front vehicle are then assigned. The simulation generates navigation areas and trajectories for a given safe distance.
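The block-to-asset matching rule described above can be sketched as follows; the repository is modeled here as a simple dictionary, which is an illustrative simplification of the actual asset store.

```python
# Sketch of the block-to-asset matching rule. Block names found in the
# repository are duplicated into the asset container; names not found yield
# empty placeholder objects for the user to refine.
def populate_asset_container(block_names, repository):
    container = {}
    for name in block_names:
        if name in repository:
            container[name] = dict(repository[name])   # duplicate the asset
        else:
            container[name] = {"mesh": None, "behaviors": []}  # empty object
    return container

repo = {"Car": {"mesh": "car.fbx", "behaviors": ["waypoint_following"]},
        "Road": {"mesh": "road.fbx", "behaviors": []}}
print(populate_asset_container(["Car", "Road", "ObstacleCar"], repo))
# ObstacleCar is absent from the repository, so an empty object is created.
```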
During the 3D simulation, we discovered that safe distances vary based on the velocities of the vehicles, which can impact the navigation area and trajectories. We conducted experiments that varied safe-distance values and vehicle velocities to assess the impact on the navigation area and possible trajectories of the autonomous vehicle. Initially, for small values of safe distance, the autonomous vehicle has multiple paths to navigate around an obstacle, but as the safe distance between vehicles is increased, the safe navigation area around the vehicle shrinks. Figure 14 shows simulation results of changing the safe distance on the navigation area around the obstacle. As expected, the simulation showed that the navigation area for the autonomous car shrinks when the front vehicle moves relatively slower. The simulation also uncovered an assumption error in our initial experiment resulting from the use of a fixed safe distance regardless of the vehicle's relative position and velocity. Currently, various testbed components can be integrated to explore and experiment with "what if" scenarios. In addition, conceptual models can be created and refined as needed. In this simple experiment, it became evident that the overall model needed refinement to explicate variables such as longitudinal and lateral safe distances, vehicle velocities, and vehicle acceleration and braking capacities. The simulation confirmed the need to make implicitly defined safe-distance rules explicit, while also confirming that acceptable driving practices are context dependent. The ability to experiment with heterogeneous models and collect and analyze data to uncover patterns and trends enables more comprehensive elicitation of requirements. Repositories of 3D assets and their behaviors were used for rapid authoring of scenarios. For example, for the obstacle car that had a waypoint-following behavior in the 3D simulation, the models of a car, road, and waypoints in the 3D environment were taken from repositories and customized for the particular scenario. Additionally, algorithm visualization methods and behaviors available in the testbed repository were employed.
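One plausible way to replace the fixed three-meter rule with a velocity-dependent safe distance is a standard reaction-plus-braking kinematic model, sketched below; the reaction time and deceleration values are illustrative assumptions, not testbed parameters.

```python
# A minimal sketch of a velocity-dependent safe distance: distance covered
# during the driver/controller reaction time plus braking distance. The
# default reaction time (1.0 s) and deceleration (6.0 m/s^2) are assumed
# example values.
def safe_distance(v_mps: float, t_react: float = 1.0,
                  a_brake: float = 6.0) -> float:
    """Reaction distance plus braking distance, in meters."""
    return v_mps * t_react + v_mps ** 2 / (2.0 * a_brake)

for v in (5.0, 10.0, 20.0):
    print(f"v = {v:4.1f} m/s -> safe distance = {safe_distance(v):5.1f} m")
```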
Quantitative Analysis
The MBSE testbed allows users to integrate and evaluate different machine-learning models with different parameters. We created an exemplar simulation environment consisting of a UAV in an indoor building setup searching for a predefined object. In the search mission, the UAV agent receives a negative reward for colliding with objects in the environment and a positive reward for touching the predefined goal object. Users can test multiple reinforcement learning algorithms on the UAV agent. We tested the integrated models with the UAV making observations in the environment and then taking corresponding actions. Five reinforcement learning algorithms were evaluated in the experiment: proximal policy optimization (PPO), soft actor-critic (SAC), PPO with generative adversarial imitation learning (GAIL), PPO with behavioral cloning (BC), and PPO combined with GAIL and BC [1][2][3][4]. Model parameters such as cumulative reward, episode length, policy loss, and entropy were evaluated against the number of simulation runs for the selected reinforcement learning algorithms. Figure 15 presents the learning cycle, while Figure 16 presents the testbed setup for quantitative analysis. As shown in Figure 16, the structural model in SysML is mapped to the 3D virtual environment. For this mapping, a Python XMI translation tool was built. This tool automatically populates the asset container in the simulation environment using the SysML models. Objects in the simulation environment are stored in an asset container. The UAV, indoor environment, and goal object blocks are extracted from the scenario context SysML model. These block names are then matched with existing 3D objects in the repository. When a match is found, the object in the repository is duplicated and inserted into the 3D virtual environment's asset container. When a matching 3D object is not present in the repository, the translation program creates an empty object in the asset container for the user to further modify. The user can then employ the asset container to populate objects in the simulation. Observations and actions are defined for the simulation setup, and the rewards mechanism is created in the experiment.
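A minimal sketch of the reward mechanism and training loop is given below; the stub environment and agent stand in for the Unity scene and the actual reinforcement learning trainers (PPO, SAC, etc.), and all numeric values are placeholders.

```python
# Hedged sketch of the collect-cumulative-reward loop. StubEnv and StubAgent
# are assumptions standing in for the Unity scene and an ML-Agents trainer.
import random

class StubEnv:
    def reset(self):
        self.t = 0
        return 0.0                            # placeholder observation
    def step(self, action):
        self.t += 1
        reward = 1.0 if action else -1.0      # goal touch vs. collision
        return 0.0, reward, self.t >= 50

class StubAgent:
    def act(self, obs):
        return random.random() > 0.5
    def learn(self, obs, reward, done):
        pass                                  # a real trainer updates here

def run_training(env, agent, n_runs=100):
    history = {"cumulative_reward": [], "episode_length": []}
    for _ in range(n_runs):
        obs, total, steps, done = env.reset(), 0.0, 0, False
        while not done:
            obs, reward, done = env.step(agent.act(obs))
            agent.learn(obs, reward, done)
            total += reward
            steps += 1
        history["cumulative_reward"].append(total)
        history["episode_length"].append(steps)
    return history   # plotted against simulation runs, as in Figure 17

print(run_training(StubEnv(), StubAgent())["cumulative_reward"][:5])
```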
Figure 17 presents simulation runs on the horizontal axis and the cumulative rewards gained by the agent for a given model on the vertical axis. For PPO and PPO with GAIL, the mean cumulative episode reward increases as the training progresses. For the rest of the agents, the reward decreases, indicating that the agent is not learning successfully for the given simulation environment and the given number of simulation runs. Figure 18a shows that the mean length of episodes goes down in the environment for successful agents as the training progresses. The "is training" Boolean in Figure 18b indicates whether or not the agent is updating its model.
The different models exhibit different behaviors for a given simulation setup. The way the user sets up the training environment impacts the performance of the models differently.
In Figure 19a,b, the policy loss parameter indicates how much the policy is changing for each agent. The various models have different profiles, and for most models the magnitude of the policy loss decreases, indicating successful training. In Figure 20, the entropy measure indicates the degree of randomness of decisions made by the model; a slow decrease of this parameter is an indicator of a successful training session, and the entropy profiles differ between agents. Additionally, it is possible to analyze specific model parameters such as the GAIL expert estimate, GAIL policy estimate, and GAIL rewards (Figure 21). The testbed also allows quantitative analysis of behavioral models. Users can manipulate various model parameters and run experiments. The testbed capabilities enable users to formulate the decision, observation, and reward problem more holistically. Various use cases and scenarios can be considered before defining agent behaviors. The holistic approach and quantitative analysis allow users to determine effective strategies for intelligent agents.
Discussion
This paper has presented the system concept, architecture, prototype implementation, and quantitative analysis supported by a MBSE testbed that enables experimentation with different models, algorithms, and operational scenarios. Several important lessons were learned from the prototype implementation. First, a minimal testbed ontology [24] with essential capabilities can be quite useful to begin initial experimentation with models and algorithms. Second, it is possible to adapt system model complexity to scenario complexity and thereby minimize computation load when possible. For example, for simple driving scenarios, a finite state machine (FSM) can suffice for vehicle navigation. However, as the driving scenario gets more complicated (e.g., poor observability and uncertainties in the environment), more complex system models such as POMDP can be employed to cope with environment uncertainty and partial observability. The value of POMDP modeling becomes evident when operating in complex uncertain environments. Third, when it comes to system development, it pays to start off with simple scenarios and ensure that safety requirements are met, and then progressively complicate driving scenarios and employ more complex models while continuing to assure satisfaction of safety requirements. The most straightforward way to implement this strategy is to control the complexity of the operational environment by imposing constraints (e.g., have a dedicated lane for autonomous vehicles or avoid driving in crowded streets). Then, after the simple model has been shown to satisfy safety requirements, constraints can be systematically relaxed to create more complex driving scenarios. In the latter case, more complex system models can be employed, verified, and validated with respect to safety requirements. Fourth, system and environment models can be reused, thereby reducing development time. To facilitate model reuse, the models can be metadata-tagged with usage context. Then, contextual similarity between a problem situation and system models can be employed to determine the suitability of a particular system model for reuse in a particular context. This reuse feature can accelerate experimentation and development. Fifth, a smart, context-sensitive, scenario-driven dashboard can be used to dynamically adapt monitoring capability and the dashboard display to maximize situation awareness with manageable cognitive load. To this end, the familiar METT-TC construct employed by the military can be employed as the underlying ontology. Based on the prevailing context, either the total set or a subset of variables in METT-TC may be applicable to characterize context. The flexible architecture of the scenario-driven dashboard can support both human-in-the-loop and autonomous operations. Just as importantly, it provides a convenient and cost-effective environment to try out different augmented-intelligence concepts [25]. Sixth, a hybrid, distributed simulation capability enables integration of virtual simulation with real-world systems (e.g., self-driving cars and unmanned aerial vehicles). This integration, in turn, enables the creation of digital twins, which can enhance system verification and validation activities [5]. Furthermore, by assuring individual control of simulations, the testbed offers requisite flexibility in experimentation with system/system-of-system simulation. In addition, by allowing different models (e.g., scenario model and threat model) to run on different computers, simulation performance can be significantly increased. Finally, an illustrative quantitative analysis capability is presented to convey how simulation results can be analyzed to generate new insights.
Future directions include the creation of formal ontologies and a metamodel [24] to guide systems integration, human-systems integration [26,27], adversarial modeling, introduction of digital twins at the system and subsystem levels [5], reinforcement learning techniques to cope with partial observability and uncertainty [17], support for distributed simulation standards (i.e., IEEE 1278.2-2015), and ontology-enabled reuse [28,29] and interoperability [30].
Figure 7. (a) Cars in the "safe" state, and (b) cars in the "about-to-collide" state.
Figure 9. Examples of visualizations for multi-UAV operations: (a,b) the change in the state of the UAV from "safe" (blue cloud surrounding the UAV) to "unsafe" (red cloud surrounding the UAV).
Figure 10. UAV trajectories visualized during experimentation with planning and decision-making algorithms.
Figure 13. SysML block definition diagram for the obstacle-avoidance scenario context; the blocks are arranged under the "ScenarioEntities" package, whose structured packaging facilitates the extraction of model elements.
Figure 14. Simulation results of changing the safe distance on the navigation area around the obstacle.
Figure 16. SysML block definition diagram for the "indoor search" scenario context; the blocks are arranged under the "ScenarioEntities" package.
Figure 18. (a) Mean episode length vs. simulation runs for each agent; (b) the "is training" Boolean indicating whether the agent is updating its model.
Figure 19. (a) Simulation runs vs. policy loss (excluding SAC), and (b) simulation runs vs. policy loss for all models.
Figure 20. Simulation runs vs. entropy for each model.
The Competition Between Deformation Twinning and Dislocation Slip in Deformed Face-Centered Cubic Metals
The competition between deformation twinning and dislocation slip underpins the evolution of mesoscale plasticity in face-centered cubic materials. While competition between these mechanisms is known to be related to the critical features of the generalized planar fault energy landscape, a physical theory that tracks competition over extended plasticity has yet to emerge. Here, we report a methodology to predict the mesoscale evolution of this competition in deformed crystals. Our approach implements kinetic Monte Carlo simulations to examine fault structure evolution in face-centered cubic metals using intrinsic material parameters as inputs. These results are leveraged to derive an analytical model for the evolution of the fault fraction, fault densities, and partitioning of plastic strains among deformation mechanisms. In addition, we define a competition parameter that measures the tendencies for deformation twinning and dislocation slip. In contrast to previous twinnability parameters, our derivation considers deformation history when examining mechanism competition. This contribution therefore extends the reach of deformation twinning theory beyond incipient nucleation events. These products find direct applications in work hardening and crystal plasticity models, which have previously relied on phenomenological relations to predict the mesoscale evolution of deformation twin microstructures.
INTRODUCTION
The mesoscale plasticity of face-centered cubic (FCC) metals is underpinned by the operation of competing deformation mechanisms. Amongst these, dislocation slip and deformation twinning are widely recognized to be two important mechanisms that actively compete during plastic deformation. The comparative dominance of one mechanism is determined by a complex interplay between intrinsic material properties and extrinsic factors. Competition in the former category can be conceptualized using the generalized planar fault energy (GPFE) landscape, which has its roots in works from Vítek [1,2]. Various investigators have leveraged the GPFE landscape concept to produce parameter-based descriptors of deformation mechanism competition. For competition between deformation twinning and slip, Tadmor and co-workers provided the seminal parameters.
Their earliest work defines a twinning tendency criterion for the onset of deformation twinning at a crack-tip [3], where a direct relationship between the critical features of the GPFE landscape (i.e., the unstable stacking fault and twinning energies) and deformation twinning is defined. These results demonstrate the multi-parameter dependencies of deformation twinning and challenge the general belief that twinning tendency is driven solely by the intrinsic stacking fault energy. A subsequent work broadened this approach by homogenizing the crack-tip model over a distribution of crack orientations in a polycrystal [4]. Asaro and Suresh [5] considered a specific slip system geometry, under the crack-tip parameter of Tadmor and Hai, to examine the competition between deformation twinning and dislocation slip at grain boundaries in nanostructured FCC materials.
Jin et al. [6] reparameterized the criterion of Asaro and Suresh to provide a single parameter relation for twinning tendency under the original analytical framework of Tadmor and co-workers.
In an independent approach, Jo et al. [7] consolidated considerations of crystal orientation and the GPFE to develop a unified parameter that predicts tendencies for deformation twinning, slip, and stacking fault emission. These descriptors of competition between deformation twinning and slip are referred to here as 'twinnability' parameters, following the nomenclature of Tadmor and Bernstein [4]. Each of these parameters is summarized in a recent review from De Cooman et al. [8].
While these twinnability parameters provide a fundamental understanding of the intrinsic competition between deformation mechanisms, there are some notable limitations. Namely, these descriptors offer insight into incipient deformation tendencies (i.e., the first emission of an extended dislocation or formation of a twin embryo from stacking of adjacent planar faults) but do not track competition as deformation proceeds. Consequently, these parameters cannot be leveraged to determine the evolution of correlated phenomena such as work hardening, which requires consideration of deformation history. Nor can they be used to predict the partitioning of plastic strain amongst the mechanisms of deformation twinning and dislocation slip. These limitations become evident in twinning-induced plasticity (TWIP) steels [9][10][11][12][13][14], where the relative contributions of deformation twinning and dislocation slip are well-known to vary over the stages of work hardening [8,15,16]. Additional systems of technological relevance, where the evolution of mechanism competition is important, include nanotwinned materials [17][18][19] and high entropy alloys [20][21][22][23][24]. Analytical efforts to segment the contributions of dislocation slip and deformation twinning in work hardening and crystal plasticity models are well documented, with significant contributions presented in the works of Bouaziz and coworkers [25][26][27][28][29], Kim et al. [30], Steinmetz et al. [16], and Kalidindi [31,32]. However, a shortcoming in each of these approaches is the reliance on empirical relations for the accumulation of deformation twins during deformation, which can provide aphysical results. For instance, early empirical modeling efforts estimate a twin fraction as high as 0.69 in TWIP steels [25]. Later works have predicted a twin fraction in the range of ~0.10-0.20 [27,33], with 0.15 being the commonly accepted value [8]. While these later predictions better align with experimental observations, the broad applicability of current evolution models remains poor due to their reliance on phenomenology and empirical fitting.
Within the context of twinnability, an opportunity exists to propose new, physical models that not only track the competition between deformation twinning and dislocation slip but provide predictive tools to examine deformed microstructures under extended plastic deformation. Here, we present a methodology to quantify the partitioning of plastic strain between deformation twinning and dislocation slip mechanisms and measure the accumulation of fault structures in deformed FCC crystals. For this purpose, the competition between deformation twinning and dislocation slip is studied using kinetic Monte Carlo (kMC) simulations. Based on kMC simulations, a set of analytical relations are derived that leverage the critical energies of the GPFE landscape to predict the evolution of fault structures. The outcomes of this study are two-fold. The primary result provides a new method to predict the evolution of competition between deformation twinning and dislocation slip over extended plastic deformation using only intrinsic material properties as inputs. From a fundamental perspective, this contribution expands the twinnability framework originally developed by Tadmor and coworkers [3,4] by extending its scope beyond incipient events. The second outcome is a series of relations to predict the partitioning of plastic strain between deformation twinning and dislocation slip mechanisms and the storage of fault structures over extended deformation in FCC metals. We anticipate that this product will enhance existing work hardening and crystal plasticity models, by providing first-principles-based predictions of mesoscale planar fault evolution in deformed microstructures.
Kinetic Monte Carlo approach
To address the question of mechanism competition, we have implemented the relevant kinetic equations for dislocation slip and deformation twinning mechanisms following the kMC algorithm outlined in Bortz et al. [34]. The kMC simulation cell can be envisioned as a discretized FCC crystal, where the kMC relations are evaluated at each node of the mesh. The nucleation and progression of defects in this cell are considered by traversing system states that are separated by kinetic barriers. These features are well-suited to the objectives of this work, which require tracking defects over extended deformation and monitoring the relative kinetics between deformation mechanisms. A similar approach has been used to examine the competition between the process of deformation twin nucleation and deformation twin thickening in our previous work [35]. The kMC method described in this section has been implemented in Python and will be made available to the community upon publication of this work through a Github repository.
The kMC simulation cell is considered as a FCC single crystal that is initially deformation free, with the <110> and <111> crystallographic axes oriented along the global x and y directions, respectively. Upon removal of the ISF, the lattice returns to a fault-free configuration, but is in a slipped state.
To enable an intrinsic study of a single twin/slip system, cross-slip mechanisms, and detwinning and slip within the interior of fault structures are not considered.
The boundary-mediated partial dislocation emission mechanism implemented in this study can be seen as an extension of the crack-tip problem considered by Tadmor and coworkers [3,4].
However, to facilitate an intrinsic comparison of deformation mechanisms over extended plasticity we have replaced the crack-tip with a surface of equivalent nucleation sites. This treatment is inspired by the boundary-mediated twin formation mechanism that is established in the experimental literature for a diverse set of systems including TWIP steels [10], nanostructured FCC materials [36][37][38], nanowires [39,40], and hexagonal close-packed metals [41]. This twin formation mechanism is distinct from classical processes such as the Cohen-Weertman [42] and Fujita-Mori [43] cross-slip mechanisms and the pole-based mechanism of Venables [44] but bears some similarities to the three-layer twin nucleus mechanism of Mahajan and Chin [45]. We have previously validated our implementation of this formation mechanism against molecular dynamics simulations of deformation twin nucleation and growth in FCC nanowires [35]. One important note regarding our approach is that dislocation processes are considered as homogeneous events, where the system is agnostic of local microstructural heterogeneities (e.g., crack-tips, grain boundary energies) that may bias rates. This treatment has the intended effect of providing an intrinsic comparison of deformation mechanisms that arise explicitly from their various process barriers. Our approach is similar to that of Jo et al. [7], where a homogeneous treatment was used to study the competition between incipient mechanisms. Heterogeneities may only arise in this study due to fault structures that emerge from the deformation history. Yet, the kMC approach is sufficiently general such that microstructure heterogeneities can be specified with some effort.
Although this modification is not trivial, it is not necessary to achieve the objectives of this work.
The barriers to dislocation nucleation and glide processes are defined using the energies (γ) of the GPFE landscape (see Fig. 1b), following the method of Ogata et al. [46]. In this approach, the barrier that acts at the j-th slip plane within the crystal is determined by the local fault environment and thus reflects the deformation history of the system (see Fig. 1a). We have selected four common FCC metals (Ag, Au, Cu, and Al) for kMC simulations, for which the GPFE landscape is well-known. This selection was found to encompass the extremes in the behaviors of mechanism competition. Deformation twinning initiates with the incipient nucleation barrier (Δγ_1^+) for a leading <112>-type Shockley partial dislocation. The thickening of deformation twins proceeds by overcoming additional process barriers (Δγ_2^+, Δγ_3^+, Δγ_∞^+) that are defined as the difference between the energy of the relevant fault (i.e., γ_isf, γ_esf, γ_tsf) and the peak energies (i.e., γ_1, γ_2, γ_3, γ_∞) of the subsequent defect along the GPFE landscape. Conversely, the reverse parameters (Δγ_1^-, Δγ_2^-, Δγ_3^-, Δγ_∞^-) describe the process barriers for the nucleation of trailing <112>-type Shockley partial dislocations, which activate dislocation slip. The peak energies γ_1 and γ_2 refer to the unstable fault energies that must be overcome to form an ISF and an ESF, respectively. Similarly, the peak energies γ_3 and γ_∞ define the energies for an embryonic and a thickened deformation twin, respectively. In each case, the numeric index refers to the number of leading dislocations required to form the relevant fault structure. Table I provides the values for the critical energies of the GPFE landscape (i.e., γ_1, γ_2, γ_∞, γ_isf, γ_tb) used in kMC simulations. These values are obtained from density functional theory calculations using the climbing-image nudged elastic band method, as reported in Jin et al. [6]. In FCC metals, the critical energies of the GPFE landscape are known to stabilize after the formation of an ESF [46], which can be considered as a twin embryo with two adjacent twin boundaries. Therefore, the process barriers to twinning (Δγ_3^+) and detwinning (Δγ_3^-) of the twin embryo are determined using the approximation γ_3 ≈ γ_2. The energy of the three-layer twin embryo is taken as γ_tsf ≈ 2γ_tb, where γ_tb is the energy of an isolated coherent twin boundary. The process barrier for twinning and detwinning at thicknesses beyond three {111} planes is defined by Δγ_∞^+ and Δγ_∞^-, respectively. Each of these approximations is common within the community, as discussed in Jin et al. [6] and De Cooman et al. [8].
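As a concrete illustration of this bookkeeping, the sketch below maps assumed placeholder GPFE energies (not the Table I DFT values) onto forward and reverse process barriers; the γ_esf ≈ 2γ_tb approximation and all numerical values are assumptions for illustration only.

```python
# Placeholder GPFE critical energies in mJ/m^2 (assumed values, not Table I).
g1, g2, g_isf, g_tb = 180.0, 200.0, 45.0, 22.0
g_esf = g_tsf = 2.0 * g_tb   # assumption: ESF/embryo energy ~ 2 * gamma_tb
g3 = g_inf = g2              # GPFE energies stabilize after the ESF

# Forward (leading-partial) barriers: peak of next defect minus current fault.
forward = {
    "d1+": g1 - 0.0,        # pristine lattice -> ISF
    "d2+": g2 - g_isf,      # ISF -> ESF
    "d3+": g3 - g_esf,      # ESF -> three-layer twin embryo
    "dinf+": g_inf - g_tsf, # embryo -> thickened twin
}
# Reverse (trailing-partial) barriers from each faulted state; barriers
# stabilize beyond the three-layer embryo.
reverse = {
    "d1-": g1 - g_isf,
    "d2-": g2 - g_esf,
    "d3-": g3 - g_tsf,
    "dinf-": g_inf - g_tsf,
}
print(forward, reverse)
```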
The rates of nucleation and glide events are evaluated at nodes along a 2-dimensional mesh that maps to the activation sites for these dislocation processes in the slip planes of the kMC cell. Following the kMC method, these rates are determined using an Arrhenius relation of the form

r_{i,j} = ν_0 exp[-(τ̂_{i,j} - τ_{i,j})Ω/(k_B T)],   (1)

where ν_0 is the pre-exponential factor (taken as the Debye frequency [47]), Ω is the activation volume (taken as 10b_112^3, as per Ramachandramoorthy et al. [48]), k_B is the Boltzmann constant, and T is the temperature (set at 300 K). τ̂_{i,j} and τ_{i,j} are the process barrier and elastic stresses, respectively, that operate at the i-th activation site in the j-th slip plane.
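A minimal sketch of how an Eq. (1)-style rate feeds a residence-time (Bortz-Kalos-Lebowitz) event-selection step is shown below; the attempt rate, activation volume, and site stresses are placeholder values, and the site data structure is an assumption rather than the released code.

```python
# Sketch of rate evaluation and BKL event selection for one kMC step.
# All parameter values are placeholders for illustration.
import math
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
NU0 = 1.0e13         # Debye-frequency-scale attempt rate, 1/s (assumed)
OMEGA = 1.0e-28      # activation volume, m^3 (placeholder)
T = 300.0            # temperature, K

def rate(tau_hat, tau):
    """Arrhenius rate for one activation site (Eq. (1)-style form)."""
    return NU0 * math.exp(-(tau_hat - tau) * OMEGA / (K_B * T))

def kmc_step(sites):
    """Select one event with probability proportional to its rate and
    advance the clock by an exponentially distributed residence time."""
    rates = [rate(s["tau_hat"], s["tau"]) for s in sites]
    total = sum(rates)
    dt = -math.log(random.random()) / total
    pick, acc = random.random() * total, 0.0
    for site, r in zip(sites, rates):
        acc += r
        if pick <= acc:
            return site, dt
    return sites[-1], dt  # numerical guard against float round-off

sites = [{"tau_hat": 2.0e9, "tau": 0.5e9}, {"tau_hat": 1.8e9, "tau": 0.9e9}]
event, dt = kmc_step(sites)
print(event, dt)
```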
The values for τ̂_{i,j} represent the stress to nucleate a partial dislocation or the stress for glide of a partial dislocation, depending on the deformation history of the kMC simulation. For instance, in a pristine simulation cell τ̂_{i,j} reduces to τ̂_{0,j}, which defines the stress to nucleate a leading partial dislocation. After nucleation of a leading partial in the j-th slip plane, τ̂_{0,j} then becomes the stress to nucleate a conjugate trailing partial (for dislocation slip) and τ̂_{i,j} is the stress required for glide of the leading partial at the i-th activation site (taken as the Peierls-Nabarro stress; see Fig. 1a).
These nucleation and glide stresses are then updated as the kMC simulation proceeds to reflect the local fault environment. Following the method of Ogata et al. [46], the undulations of the GPFE landscape are taken as a Peierls potential and may be used to directly determine the process barriers of nucleation and glide. Dislocation nucleation stresses are retrieved from the athermal limit using a harmonic approximation for the shape of process barriers (see Ref. [46]).This simple model for nucleation finds excellent agreement with benchmarking validation studies [35]. More complex nucleation models consider the elastic energy of the nucleated dislocation and the stressdependency of the critical energies of the GPFE [49][50][51]. When glide is operative, the process barrier stress may be calculated from the solution to the Peierls-Nabarro problem for a partial dislocation [52]. These considerations lead to a conditional definition for the process barrier stress ̂, : where , is the process barrier (e.g., , = 1 is the half-width of the dislocation core. is a geometric parameter that represents the distance between adjacent atomic rows along the shear direction. is an elastic constant that is defined by the shear modulus ( ) and Poisson's ratio ( ).
Following the approximation of Nabarro [53], partial dislocations were assigned an edge character for glide stress calculations (i.e., a′ = (3/2)b_112 and K = μ/(1-ν)). This modest simplification allows the glide barrier of dislocations to be defined by a single shear stress, which is required for the kMC rate determination steps in Eq. (1). The elastic constants are calculated using the method of Bacon and coworkers [54,55]. This method provides effective isotropic constants from dislocation energy factors in anisotropic media. The relevant material parameters used in all kMC calculations are provided in Table I. The effective process barrier stress (i.e., τ̂_{i,j} - τ_{i,j}) is determined by considering the additive contributions of elastic stress fields from partial dislocations stored in the kMC simulation cell. Individual stress fields are calculated using the Volterra solution to the dislocation elasticity problem for each leading and trailing partial dislocation [56]. The relevant stress tensors are rotated to align with the Burgers vectors of the respective defects (i.e., ±60° partial dislocations). Boundary effects are accounted for using the image dislocation method, which enforces a vanishing condition along free surfaces (i.e., the <110> surfaces of the kMC simulation). Further details on the dislocation elasticity calculations performed in this study are provided in the online supplementary material. In addition to stresses arising from internal defects, the application of external far-field loadings can reduce the effective process barriers. The effects of far-field loadings are not specifically considered here as they exert a uniform influence on rate kinetics. However, it should be noted that our formulation is sufficiently general to include their effects along with the associated Schmid factors.
Analytical model
An analytical model has been developed to track the competition between deformation twinning and dislocation slip over extended plasticity. This model consists of a set of analytical relations derived for a system which only has one nucleation site (see Fig. 1a). For further details on the derivation of Eq. (5), the reader is referred to our earlier work [35], which examines the competition between deformation twin nucleation and thickening using a related approach.
The partitioning of plastic strain amongst the mechanisms of deformation twinning and dislocation slip can be determined using the parameters defined for the evolution of the fault fraction. The dislocation slip strain is incremented through the operation of trailing dislocations, with the probability of these events defined by the trailing-nucleation probability P^-. The increment to the dislocation slip strain is counted as twice the partial strain increment, to account for the prior operation of the leading partial that then contributes to the slip mechanism. The deformation twinning strain can then be calculated from the difference of the total plastic and dislocation slip strains, which leads to the following set of relations:
Evaluation of Eqs. (3)-(6) requires a series of evolution rules for the fault number densities.
We consider two outcomes that can alter the number of faults - namely, the nucleation of leading and trailing partial dislocations. As in Eq. (3), the evolution in the number of faults with plastic strain is described by an additive relation (Eq. (7)), where the terms n_⊥^+ and n_⊥^- are related to the probabilities of an increase or decrease in the fault number density, respectively. To model the probability of an increase in the number of faults, we consider the comparative kinetics of these outcomes as deformation proceeds. That is, an increase in the fault number only occurs when a leading partial dislocation is nucleated in a defect-free area of a crystal, and a decrease is accompanied by the nucleation of a trailing partial dislocation at an ISF. The relevant quantities are defined in Eqs. (8a) and (8b). In order to solve Eqs. (8a) and (8b), a rule for the evolution of the individual fault number densities is required; this rule (Eq. (9)) accounts for the change in the number densities of the various fault structures due to fault nucleation/thickening or detwinning/slip processes.
The ratio of the probabilities for leading and trailing partial dislocation nucleation is also of interest, given the influence that these parameters have on mechanism competition. We define here the competition parameter (η) as the ratio of the leading and trailing probabilities:

η = ln(P^+/P^-).   (10)

Examination of Eq. (10) offers interesting insights. Leading dislocation nucleation is favored when η > 0 and trailing dislocation nucleation is favored when η < 0. Therefore, this simple criterion enables facile tracking of deformation tendencies over extended plasticity. Given the relevance of leading/trailing nucleation to the mechanisms of deformation twinning and dislocation slip, this parameter may also be viewed as a measure of mechanism competition.
Indeed, the relevant process barriers (e.g., Δγ_1^+, Δγ_2^+, Δγ_1^-, and Δγ_2^-) in the competition parameter contain the GPFE parameters (i.e., γ_1, γ_2, γ_isf) that are found in many of the incipient twinnability parameters available in the literature [3,4,6,7]. In addition to these variables, additional microstructure parameters appear that account for the evolution of the mesoscale defect structures during deformation. In this regard, this competition parameter combines two distinct components - intrinsic material properties and microstructure parameters - to examine the evolution of mechanism competition from its measure of nucleation preferences for leading/trailing dislocations.
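Assuming the log-ratio form of Eq. (10) reconstructed above, the sign convention of the competition parameter can be illustrated with a few lines of code; the probability values are arbitrary placeholders.

```python
# Hedged sketch of the competition parameter as a signed log-ratio of
# leading vs. trailing nucleation probabilities; the exact form of
# Eq. (10) in the original paper may differ.
import math

def competition_parameter(p_lead: float, p_trail: float) -> float:
    """Signed measure: > 0 favors leading (twinning-side) nucleation,
    < 0 favors trailing (slip) nucleation."""
    return math.log(p_lead / p_trail)

print(competition_parameter(0.7, 0.3))   # positive: leading favored
print(competition_parameter(0.2, 0.8))   # negative: trailing favored
```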
RESULTS AND DISCUSSION
The kMC model described in Section 2.1 is implemented to study the mechanism competition in a variety of crystal sizes for Ag, Au, Cu, and Al. The results for kMC systems of varying size are presented below.
Mesoscale evolution of deformed microstructures
Representative snapshots of the kMC simulation cell at several stages of plastic deformation are shown in Fig. 2. The evolution of the fault fraction with increasing plastic strain for Ag, Au, and Cu is plotted in Fig. 3. The raw data from all replications of the kMC simulations is provided as normalized contours, as described above, and the averaged data is plotted using the relevant markers.
Analytical predictions from the model defined in Section 2.2 are plotted in dashed stroke. In each material, a monotonic increase in the fault fraction is predicted, with Ag exhibiting the highest fault storage. The analytical model is in excellent agreement and captures the critical features of the kMC simulation data. In each material, the rate of increase in the fault fraction is highest at the incipient stages of plasticity and approaches a linear relation at higher plastic strains. Cu and Au exhibit only modest increases in the fault fraction after the early stages of plasticity (0.01-0.05 plastic strain), whereas Ag tends to continue to store faults at higher strains, albeit at a reduced rate.
This behavior is in line with the experimental literature, which reports a steep increase in the twin fraction during incipient plastic events [8]. In the case of Cu, a noticeable asymmetry exists in the binned kMC data, where the average response deviates significantly from the most common fault behavior. Since the fault fraction cannot be negative, this asymmetry arises due to the preference for the removal of fault structures in Cu over extended deformation, which produces frequent system configurations at a fault fraction of zero.
Results for the evolution of the deformation twinning-accommodated strain are provided in Fig. 4. The ability of the analytical model to capture the evolution of defect structures over extended plasticity is perhaps the most notable outcome of these results. Indeed, using only process barriers derived from the GPFE landscape, we have developed a methodology that can predict the partitioning of plastic strains between the mechanisms of deformation twinning and dislocation slip for several FCC materials. This physical description of strain partitioning is free from empirical fitting and is therefore anticipated to improve the phenomenological relations currently implemented in work hardening and crystal plasticity models. Examples where a direct application of this approach would be beneficial are found in the dislocation storage framework of Bouaziz and coworkers [25][26][27] and deformation-twinning crystal plasticity models [57][58][59], among others.
Evolution of mechanism competition under extended deformation
The excellent agreement between kMC simulations and analytical modeling motivates an examination of the evolution of mechanism competition in FCC metals over extended plastic deformation. Fig. 6 presents the evolution of the competition parameter, η, with the fault fraction.
The analytical definition of η (Eq. (10)) is plotted in dashed stroke for each material. The average kMC data is provided as markers and the data distribution is represented as contours. As shown in the figure, η predicts a higher twinnability for Au than for Cu, in agreement with experimental observations that deformation twinning initiates at much lower stresses in Au than in Cu [60]. Indeed, this discrepancy between existing twinnability predictions and twinning stress data is noted in the seminal work from Tadmor and Bernstein [4].
We also note that η returns the same conditional inequalities as other twinnability parameters when the microstructural evolution parameters are omitted and the process barrier definitions are aligned.
For instance, Jo et al. [7] defined twinnability process barriers using a relation in which θ is the angle between the Burgers vectors of dislocations. For θ = 60° (i.e., conjugate leading/trailing partial dislocations) considered at comparable incipient conditions (with the microstructure terms set to their incipient values and one leading nucleation site), and using the transformation γ_2 ≈ γ_1 + γ_isf/2 [6], the competition parameter reduces to γ_1 - γ_isf < γ_2, which is the same inequality presented by Jo et al. [7]. Through the development of this parameter, we have demonstrated a method to predict the twinnability of FCC metals by separately weighing the contributions of process barriers and deformation history towards the competition between deformation mechanisms. In a broad sense, this outcome expands the application of the twinnability concept to describe the evolution of deformation twinning and dislocation slip in deformed microstructures.
CONCLUSIONS
The competition between deformation twinning and dislocation slip has been studied for four common FCC metals (Ag, Au, Cu, and Al) using kMC simulations. In contrast to previous efforts, which examine only incipient events, the evolution of mechanism competition has been considered over extended plastic deformation. Kinetics in kMC simulations are informed directly by the critical features of the GPFE landscape and therefore provide an intrinsic comparison of mechanism competition. From the kMC simulation data, the evolution of the fault number density, fault fraction, and the partitioning of plastic strains between deformation twinning and dislocation slip mechanisms was measured. Results from these efforts show that Ag exhibited the highest storage of faults and the highest fault fraction over the entire deformation range studied. Based on kMC results, an analytical framework has been developed to provide a physical model for the mesoscale evolution of defect structures in FCC crystals. Predictions from this model find excellent agreement with kMC simulations. In addition, the relations of this model were used to define a competition parameter that can be used to examine the evolution of mechanism competition in FCC metals over extended deformation. Predictions from this parameter agree with experimental data showing the higher twinnability of Au relative to Cu, which is not captured by existing twinnability parameters. The outcomes of this study expand the applicability of deformation twinning theory beyond incipient plasticity and provide the community with relations for the evolution of fault fraction, fault number density, and strain partitioning between deformation twinning and dislocation slip mechanisms. These relations are free from empirical fitting constants and may be implemented to improve current work hardening and crystal plasticity models, which have previously relied on phenomenology.

Figure and table caption fragments: Error bars represent ±1 standard deviation over 500 replications of the kMC simulation. The raw kMC simulation data is shown in the contour plots, which are color-coded using a normalization scheme implemented along the ordinate axis; see the main text for further details. Table I footnotes: (a) calculated from Ref. [61] using the method of Bacon and coworkers [54,55]; (b) retrieved from Jin et al. [6].
Final year medical students versus interns: information seeking behaviour about COVID-19 therapy in India
Background: Doctors alone must be capable of taking ultimate responsibility for making decisions under clinical uncertainty. Sound clinical judgement and management were the top priority for health care workers during the COVID-19 pandemic. The objective of our study was to assess knowledge about COVID-19 treatment among final year bachelor of medicine and bachelor of surgery (MBBS) students and interns, and thereby to understand their information seeking behaviour. Methods: This was a multicentric cross-sectional questionnaire-based study among final year MBBS students and interns. The Google Form questionnaire was sent to the participants through WhatsApp or email. The questions were related to drugs, precautionary measures, and dead body disposal in COVID-19. Attitudes regarding seeking information about the new disease and updated treatment guidelines, as well as the preferred resource materials, were also studied. The sample size was calculated based on a pilot study. Results: Out of 316 participants, 30.7% had good, 53.2% had adequate, and 16.1% had inadequate knowledge regarding the updated treatment guidelines. In one of the questions about hydroxychloroquine, 51.5% of final year MBBS students (n=200) and 63.8% of interns (n=116) responded correctly (p<0.034). 65.4% gathered information by self-directed learning through various sources; 45.8% gathered information from social media, while 44.4% read printed materials and 39.3% attended online/offline lectures. Conclusions: We conclude that the final year MBBS students and interns have satisfactory knowledge about COVID-19 treatment. Interns had better awareness than the final year MBBS students. Retaining theoretical knowledge during internship will make young doctors more confident while practicing.
INTRODUCTION
In December 2019, in the city of Wuhan, China, an outbreak of an emerging disease, COVID-19, caused by a novel coronavirus, later named SARS-CoV-2, was detected. 1,2 In March 2020, the WHO declared the COVID-19 epidemic a pandemic. 3 COVID-19 was claiming many lives, including those of healthcare workers, making it imperative to take action to save lives. As of now, no specific treatment protocol has been established. 4,5 Knowledge is being gained in a disseminated fashion through a trial-and-error strategy and conjecture. Amongst healthcare professionals, it is always the doctors who should take ultimate responsibility for difficult decisions in situations of clinical complexity and uncertainty, drawing on their scientific knowledge and well-developed clinical judgement. Total cases have come to a plateau phase, and we now dwell on the fear of a second wave due to mutated viruses. After a year, reports indicated that 10 crore (100 million) people had been infected across the globe, with 21 lakh (2.1 million) deaths. The global situation is reflected in our populous country too: 1 crore (10 million) Indians had the disease, of whom 1.5 lakh (150,000) died due to COVID-19.
In a descriptive study by Khasawneh et al in Jordan, medical students showed expected levels of knowledge and attitude regarding COVID-19 and took good precautionary measures. The authors also concluded that, in the current global situation, medical schools need to make frequent use of social media to impart knowledge. 7 Similarly, in a cross-sectional study conducted by Joshi et al in India on knowledge, attitudes and practices regarding COVID-19 among medical students, 94.15% of the students had extensive knowledge of the pandemic. That study also showed a clear need for regular orientation and training programs to improve and update knowledge regarding COVID-19 infection and prevention strategies. 8 Our study aims at exploring the knowledge about the current treatment modalities of COVID-19 among final year medical students and interns across the country. Knowledge about proper precautionary measures and dead body disposal, and the information seeking behaviour of the students, were also evaluated.
Aim
The aim of this study was to understand and compare the knowledge about COVID-19 therapy among final year MBBS students and interns across the country.
Objectives
The primary objective of this study was to assess the knowledge about COVID-19 therapy among final year MBBS students and interns and to compare the knowledge about COVID-19 therapy between final year MBBS students and interns.
The secondary objective of this study was to assess the students' interest in staying updated with the latest treatment guidelines and their preferred resources for doing so.
Rationale and relevance
As the world braces against the COVID-19 pandemic, healthcare workers on the frontlines are particularly vulnerable to infection. Discussions and research are taking place worldwide because the disease affects not only people's health but every aspect of life, including mental health, finances and the economy. Our study helps us to know how vigilant our young doctors are in keeping up with the updated COVID-19 treatment protocols and the precautionary measures they practice themselves to serve the community.
METHODS
It was a multicentric cross-sectional questionnaire-based study among final year MBBS students and interns across India. The 20-item Google Form questionnaire was sent through social media. The data were analysed by the principal investigator at Amrita Institute of Medical Sciences, Kochi.
Study tools
The multiple-choice questionnaire was prepared as a Google Form. The questions were designed by the investigators with reference to standard pharmacology textbooks and were subsequently validated. The updated WHO and Centers for Disease Control and Prevention (CDC) treatment guidelines for COVID-19 were also consulted.
The Google Form comprised 5 sections and took about 5-10 min to complete. The participant information sheet was in the first section, and the second section contained the informed consent. Those who gave consent proceeded to the third section, which contained the 20-item quiz. Of the 20 questions, 5 concerned precautionary measures and 2 related to dead body disposal and disinfecting premises. The remaining 13 questions related to drugs, of which 2 were on antibiotics. Participants received a score of 1 for each correctly answered question. The fourth section had 4 qualitative open-ended questions about their information seeking behaviour, and the last section asked participants to identify their category as final year MBBS student or intern. After clicking the submit button, the participants received their total score out of 20 along with the correct responses and explanations.
Study duration
The study was conducted from August 2020 to December 2020. The Google Form was circulated through social media for data collection from 17 August 2020 to 9 November 2020.
Consent
Consent was obtained through the Google Form.
Sample size
Since no other study had assessed and compared knowledge about COVID-19 therapy among final year MBBS students and interns, we conducted a pilot study with 16 participants. From the pilot results we estimated the minimum sample size for the main study as 300.
Data analysis
All the collected data were entered into Microsoft Excel and cross-checked for errors to maintain accuracy. Qualitative data were summarised using descriptive statistics. The chi-square test was used to investigate associations among study variables. A p value of less than 0.05 was considered statistically significant. Statistical analysis was performed using IBM SPSS version 16.
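As an illustration of this analysis, the group comparison on the hydroxychloroquine question reported below can be reproduced with a Pearson chi-square test on a 2x2 contingency table. This is a minimal Python sketch; the counts are reconstructed from the reported percentages (51.5% of 200 students, 63.8% of 116 interns) and are therefore approximate.

```python
from scipy.stats import chi2_contingency

# Counts reconstructed from the reported percentages (an approximation):
# 51.5% of 200 final year students ~ 103 correct; 63.8% of 116 interns ~ 74 correct.
table = [[103, 200 - 103],   # final year MBBS: correct, incorrect
         [74, 116 - 74]]     # interns: correct, incorrect

# correction=False gives the uncorrected Pearson chi-square statistic.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # approximately chi2 = 4.51, p = 0.034
```

The resulting p value of about 0.034 matches the significance reported for this question.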
RESULTS
Among the 316 participants, 63.3% were final year MBBS students and 36.7% were interns. The Excel data were then imported into SPSS version 16, and descriptive statistics, the Pearson chi-square test and Levene's test were used to analyse and present the data.
Based on the scores obtained in the quiz, the participants (n=316) were categorised into 3 groups: inadequate/mild knowledge, adequate/moderate knowledge and good knowledge (Table 1). The categorised knowledge among the final year MBBS students and interns is depicted in Figure 2; the difference was statistically significant (p=0.005).
With respect to the precautionary measures, 75% of final year MBBS students (n=200) and 72.4% of interns (n=116) correctly answered the question about the duration of hand washing. 71.5% of final year MBBS students (n=200) and 81% of interns (n=116) agreed on washing with soap and water as an effective method to prevent coronavirus infection.
The knowledge about COVID-19 disinfection and dead body burial was also analysed; the question on disinfection was answered correctly by 65.5% of participants. Knowledge about the use of uncommon drugs such as febuxostat, interleukin inhibitors and anti-parasitic drugs in COVID-19 treatment was evaluated. For the question about febuxostat, 32.5% (65/200) of final year MBBS students and 56.9% (66/116) of interns answered correctly (p<0.001). Regarding the use of tocilizumab in COVID-19, 69% of final year MBBS students (n=200) and 70.7% of interns (n=116) scored right, and regarding anti-parasitic drugs in prophylaxis, 53.5% (107/200) of final year MBBS students and 69.8% (81/116) of interns answered correctly (p<0.004). 93.4% agreed that they wished to stay updated on COVID-19 treatment protocols using available resources (Figure 5), and 94.3% of participants agreed that they wished to seek information about the new disease; the rationale for this was also assessed (Figure 6).

DISCUSSION

Our study results showed that the depth of knowledge among medical students and interns was only satisfactory. This difference can be attributed to the fact that our study assessed knowledge regarding the treatment of COVID-19 infection rather than the pathophysiology assessed in the other studies.
When comparing the participants, most interns scored >14 while most final year MBBS students scored between 7 and 13. This difference could be because interns were on the frontline during the pandemic and learned as they worked. Even though the questions asked in this survey were not from the current MBBS curriculum, only 19% of final year MBBS students and 11.2% of interns scored <6 in this study, which shows that the students were enthusiastic to learn about the new disease.
Regarding hand hygiene, >70% of participants among both final year MBBS students and interns agreed that washing with soap and water is more effective than alcohol-based hand sanitizers as a precautionary measure. In 2017, Pranav et al conducted a questionnaire-based survey of hand hygiene practices among medical undergraduate students in India: 68% agreed that hand rubbing is not more effective than hand washing, and only 36% knew that the exact time required for hand washing was 20 seconds. 13 Our finding was comparable with the study by Taghrir et al among medical students in Iran, where >85% agreed on washing hands with soap and water as an effective method to prevent infection from coronavirus. 10 In this study, the final year MBBS students and interns were equally accurate on the time required for hand washing, which could be due to the widespread messaging through social media about the 20-second hand wash to prevent coronavirus. The WHO-recommended percentages of ethanol and isopropyl alcohol in hand sanitizers were assessed (Q1 and Q4); though the results were not satisfactory in either study group, their working knowledge is more important. 14 The use of antibiotics in COVID-19 treatment was assessed, and interns answered better than final year MBBS students because they were experienced and had seen these drugs used. Studies outside India showed that antibiotics are not effective as first-line treatment for COVID-19, which was agreed by 63.3% of medical undergraduate students in Lahore, Pakistan, as well as 66.9% of medical undergraduates in Baghdad. 9,15 In a study among medical and allied health science students by Gohel et al about COVID-19, 25% thought antibiotics might be useful; this was considered a wrong perception, although the investigators noted it is a debatable and subjective issue. 11 More than two-thirds of the participants, both final year students and interns, had good knowledge regarding dead body disposal, indicating that our students had sought information and were empowered with the necessary knowledge.
With reference to hydroxychloroquine and remdesivir, both study groups had a satisfactory understanding of the classification, indications and adverse drug reactions. In comparison, interns had better clarity than the final year MBBS students regarding the contraindications and the mandatory requirements for initiating the abovementioned drugs. In 2017, Meghna et al conducted a comparative study in India in which final year MBBS students were compared with interns on pharmacotherapeutics. Interns were found to have better awareness of cardiovascular pharmacology, drugs in emergency use and chemotherapy, 16 which was attributed to the interns' application of knowledge.
Questions on the use of other drugs were answered well by both final year MBBS students and interns. On the question regarding febuxostat, interns answered correctly about 25 percentage points more often than the final year MBBS students. 74.5% opted to stay updated as physicians rather than gaining knowledge for personal safety (56%), a good indicator of their inherent professionalism. In an open-ended question, one student also expressed a wish to communicate the latest treatment options to the public. This suggests that the students are highly professional and aspire to become experienced clinicians.
We note that the overall knowledge was satisfactory and similar between the final year MBBS students and interns, with higher awareness and completeness observed among the interns. This is expected, given the in-house clinical training that is focused on interns but not on undergraduates. Similar results were noted in the study by Singh et al in India, which emphasised the need for improvement among undergraduates regarding generic medicines. 18 In contrast, a study by Mira et al in 2016 in India on clinical pharmacology and rational therapeutics (CPT) among MBBS students and interns found that undergraduate students scored better than interns; the likely reason is that the theoretical teaching given to undergraduates was not retained during internship. 19 Although many studies have been conducted among medical students, health care professionals and various other categories, we could not find a study that assessed COVID-19 treatment knowledge among young medical students and interns.
Our study had a few limitations. This was an online questionnaire-based survey, so the sample may not directly represent the target population. Since only a limited number of questions could be asked, we included only basic and relevant pharmacology-related topics.
CONCLUSION
The final year MBBS students and interns are well versed in the treatment of the new disease COVID-19, but interns are more accurate because they had a good foundation in basic pharmacology and extrapolated it during internship. A new disease of different source and pathophysiology may emerge in the future; therefore, we should focus on moulding young doctors to withstand the toughest situations with the power of wisdom and knowledge. Seeking apt, accurate and authoritative information is necessary for building up knowledge, and retaining basic pharmacotherapeutics while facing new real-world scenarios will help young doctors gain confidence.
Real-time measurement of dust in the workplace using video exposure monitoring: Farming to pharmaceuticals
Real-time, photometric, portable dust monitors have been employed for video exposure monitoring (VEM) to measure and highlight dust levels generated by work activities, illustrate dust control techniques, and demonstrate good practice. Two workplaces, presenting different challenges for measurement, were used to illustrate the capabilities of VEM: (a) poultry farming activities and (b) powder transfer operations in a pharmaceutical company. For the poultry farm work, the real-time monitors were calibrated with respect to the respirable and inhalable dust concentrations using cyclone and IOM reference samplers respectively. Different rankings of exposure for typical activities were found on the small farm studied here compared to previous exposure measurements at larger poultry farms: these were mainly attributed to the different scales of operation. Large variations in the ratios of respirable, inhalable and real-time monitor TWA concentrations of poultry farm dust for various activities were found. This has implications for the calibration of light-scattering dust monitors with respect to inhalable dust concentration. In the pharmaceutical application, the effectiveness of a curtain barrier for dust control when dispensing powder in a downflow booth was rapidly demonstrated.
Introduction
Real-time dust monitors are employed by occupational hygienists for tasks such as walk-through surveys, background sampling, site dust measurements, assessment of the effectiveness of dust control systems and measurement of indoor air quality [1]. They also form part of Video Exposure Monitoring (VEM), the combination of video and synchronized real-time exposure data [2], to illustrate dust control techniques and highlight dust levels generated by work activities. VEM can also play an important role in exposure risk communication by demonstrating good practice.
In VEM, dust monitors are used qualitatively and semi-quantitatively. For qualitative measurements, the relative effect of changing controls on exposure is measured. For quantitative measurements, the real-time monitors must be calibrated with respect to the respirable and, if required, inhalable dust concentrations [3,4,5]. Quantitative data can place the exposure in context by reference to control and exposure limits, although the exposure data will only be semi-quantitative because of uncertainties due to sampling and, particularly for aerosols, calibration associated mainly with the real-time instrument response which is a function of particle characteristics, e.g. size distribution and other scattering properties [6].
Two workplace examples, at opposite ends of the spectrum in relation to the working environment, are described here to illustrate the role of VEM with real-time dust monitors for the measurement of respirable dust concentration and, by inference, inhalable dust concentration, arising from (a) poultry farming activities and (b) powder transfer operations in a pharmaceutical company. They present different challenges for measurement, and particularly for VEM: low light, dusty farming conditions with a highly mobile workforce contrasting with ultra-clean pharmaceutical operations where dust levels are typically very low.
Background
Recent research into the incidence of ill health in agriculture in Great Britain [7] concluded that there is very little current information available on the incidence or prevalence of occupational ill health in the agricultural industry. However the data indicated that respiratory disease generally and upper respiratory tract infection symptoms in particular were high, and reported by just under 40% of farm workers exposed to organic dusts. The prevalence of chronic bronchitis amongst agricultural workers is reported to be 6.5%. The agricultural workforce totals about 400,000 with the majority exposed to organic dust during the course of their work. The figures would therefore suggest that up to approximately 162,000 workers could be suffering from respiratory symptoms at any one time and 27,000 suffering from bronchitis.
A specific group of farm workers, those in poultry farming, can be exposed to significant amounts of dust produced from poultry dander, feathers and dry wastes during their work activities. While other studies [8] have focused on identifying the scale of the risk through more comprehensive exposure measurements, in this work VEM was used to provide information on the hazard and its control which could form part of a package of electronic training material. Such risk management guidance for managers and safety representatives of large poultry farms and trade associations can then be disseminated in a suitable form to employees and contract workers.
Poultry farm activities
Various tasks in a commercial poultry farm, identified from previous work [8], were targeted for action as part of the investigation. The cycle of poultry rearing can be simplified to: Laying new litter; Repopulation (introducing new chicks); Growing the birds to adult size; Depopulating (removing the birds); Litter removal; and Cleaning and disinfecting.
All the above activities, apart from the growing phase, were identified as producing significant dust levels and as suitable for VEM. At the farm, these activities can be described as follows. Litter removal: remaining litter is cleared, including around the edges of the shed and the supporting pillars, and a mechanical rotating brush attached to the 'Bobcat' shovel enables the floor to be swept clean. Cleaning and disinfecting: a power-washer with hand 'gun' is used to wash everything from the top down (roof, roof-mounted ventilation, walls, bird feeder containers and floor). Following washing, a disinfectant is mixed with the water and the same equipment (with a long lance replacing the hand 'gun') and technique is used to disinfect all of the surfaces.
Materials and methods
The VEM technique (Exposure Level Visualization -'ELVis') involves real-time, personal monitoring while simultaneously videoing the worker and combining the exposure profile with the video image on a computer; it has been described previously [2]. Radio telemetry was not used in this case for transmission of the real-time monitor data in order to simplify the kit and minimize the weight carried by the subject. The data was simply stored on the real-time monitor's datalogger. The real-time monitor (1 s logging period) and the camera clocks were synchronized with the PC clock and a digital radio-controlled clock respectively. Then the real-time monitor's clock was filmed alongside the radio-controlled clock to synchronize them to 'real' time. The video and data files were joined to display on one screen as illustrated in figure 1. Here, one subject was monitored and carried the monitor in a harness with gravimetric samplers located immediately adjacent to the monitor. A camcorder (Sony Handycam) was used to video activities. Although it was able to work in low light, depopulation and repopulation occurred in very low light levels and it was not possible to use natural light. Satisfactory footage was obtained by illuminating the subjects with infra-red from portable infra-red lights (Sony Battery IR Light) situated next to each camera, and by setting the cameras to 'night-shot'. Reflective tape was affixed to the real-time monitor and the harness to further improve identification of the subject and instrument being filmed.
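As a sketch of the data-joining step, once both clocks are synchronized to a common reference, the video frames and the 1 s-logged dust trace can be merged on their timestamps. The following Python sketch is illustrative only; the file names and column layout are assumptions, not part of the ELVis software.

```python
import pandas as pd

# Assumed inputs: a 1 s-logged dust trace and a list of video frame times,
# both already synchronized to the same reference clock.
dust = pd.read_csv("pdr_log.csv", parse_dates=["time"])       # columns: time, mg_m3
video = pd.read_csv("frame_times.csv", parse_dates=["time"])  # columns: time, frame

# Attach to each video frame the most recent dust reading within the
# 1 s logging period of the real-time monitor.
merged = pd.merge_asof(video.sort_values("time"), dust.sort_values("time"),
                       on="time", tolerance=pd.Timedelta("1s"), direction="backward")
```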
The real-time monitors (Thermo Personal DataRAMs -PDR) were located in a harness attachment sampling in the breathing zone towards the top of the chest (the poultry catchers bent forward a lot during their activities). Up to three real-time monitors were used during a visit: two subjects were monitored, each with a real-time monitor and gravimetric samplers; and a further real-time monitor with adjacent gravimetric samplers recorded the background dust level.
Personal gravimetric samplers (IOM head and cyclone samplers for inhalable and respirable fractions respectively) were also employed to obtain exposure data and retrospectively calibrate the real-time monitors. The samplers were operated according to the HSE reference method MDHS14/3 [9]. Each time-series of real-time monitor data was compared with its corresponding gravimetric samplers and the relevant calibration factors applied.
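The retrospective calibration amounts to scaling each real-time trace by the ratio of the co-located gravimetric TWA to the monitor's own time-weighted average for that activity. A minimal Python sketch with hypothetical values (the file name and concentration are illustrative):

```python
import numpy as np

# Hypothetical inputs: 1 s logged PDR readings (mg/m3) over an activity, and
# the co-located gravimetric respirable TWA (cyclone) for the same period.
pdr_series = np.loadtxt("pdr_log.txt")   # assumed file, one reading per line
gravimetric_twa = 2.4                    # mg/m3, from the reference sampler

# Activity-specific response (calibration) factor: reference TWA divided by
# the uncorrected time-weighted average of the real-time monitor.
factor = gravimetric_twa / pdr_series.mean()

# Rescale the whole real-time trace with the activity's factor.
pdr_calibrated = factor * pdr_series
```

Because the factor is activity-specific, a separate factor is computed and applied for each activity and each monitor, as discussed below.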
Inhalable dust concentrations
The exposure levels to inhalable dust determined from gravimetric measurements using the IOM head for three activities are shown in table 1. The workers were monitored for periods between 0.5 -2 hr to determine exposure concentrations for the activities. Three other activities (routine cleaning, laying new litter and cleaning/disinfecting) were also measured but only one measurement was taken for each (this data was used in section 2.4.2).
There are various differences between the farm studied here and those used for the earlier work. In this work fewer data were collected (11 samples compared to 60), the farm was a much smaller enterprise, processes were less mechanised and there were possibly fewer time constraints.
Larger farms require more materials and there is more product and waste to be removed. During litter removal at larger farms there may be two or more vehicles collecting manure compared with the one vehicle used in this study which would confer some control over where the dust is generated. Also, collecting damp with dry manure can reduce dust, as was performed at the farm studied here. Petrol-engined blowers used at larger farms to clear remaining dust around stanchions can raise clouds of dust. At this farm, new litter was manually broken away from an opened bag, then distributed using a pitchfork. Laying new litter at larger farms is, however, largely mechanised and can include using a 'hay-turner' attachment to spread the litter. Depopulation resulted in high inhalable dust concentrations in this study but fairly low concentrations at the other larger farms. Different types of bird feathers (eg from chickens, turkeys and ducks) may result in differing dust levels, as well as differences between the bird catchers' technique (e.g. farm employees or contracted staff may operate differently). The use of powered and natural ventilation at larger farms tends to prevent accumulation of dust inside the shed. In other respects both depopulation and repopulation were carried out similarly at this and larger farms.
Comparison of monitors
The use of inhalable and respirable samplers and the PDR at the farm allowed comparisons to be made between the measurements; the results are shown in table 2.
There is a wide variation in gravimetric inhalable/gravimetric respirable ratio for the activities -a factor of approximately 8 between the maximum and minimum. The dust particle size profiles are therefore different for the activities measured, which necessitates the application of separate calibration (response) factors for each activity to each PDR. Repopulation and laying litter are the two activities which generate the greatest proportion of inhalable dust. The gravimetric inhalable/PDR ratios do not follow the same ranking as the gravimetric inhalable/gravimetric respirable ratios, although the statistics do not allow detailed analysis. The range of gravimetric respirable/PDR ratios is not as great as that for gravimetric inhalable/PDR (especially if the cleaning/disinfecting result is considered anomalous, ie the low gravimetric inhalable/gravimetric respirable ratio and the high gravimetric respirable/PDR ratio). Nevertheless, the respirable dust from the different activities still requires different PDR calibration factors, as is found for most dusts other than 'standard' silica (eg Arizona road dust) used to factory calibrate the instruments [4,5].
The accuracy of real-time dust monitors for workplace (and environmental) monitoring relies on the validity of the calibration, which is typically based on gravimetric samplers. Photometric (light scattering) dust monitors, which are commonly used for workplace air monitoring, including the PDR used in this study, have a response as a function of particle size quite similar to the respirable sampling convention and therefore to the standard cyclone sampler [6]. The real-time monitor can, however, only have an average response factor applied over the duration of the activity. Consequently, if there are any variations in the dust particle characteristics (size distribution, refractive index, etc.) over this period, then at any given instant there may be significant deviation from the average value. This situation could be exacerbated when the photometric ('quasi-respirable') real-time monitor is calibrated with respect to an inhalable reference derived from a gravimetric sampler. Here the respective particle size profiles of the monitors are very different, which increases the chance of wider deviations of the real-time monitor values from the 'true' values. For example, generated particles that are greater than 10 µm will be captured by the reference sampler, but not measured by the real-time monitor. Therefore, as the particle size increases above 10 µm the real-time monitor will increasingly underestimate the inhalable concentration. The use of a portable TEOM [10] should help in deriving more accurate response factors for a real-time monitor. This is because its response is not subject to uncertainties in measurement caused by changes in particle size, colour, or shape and responds purely as a function of the mass of dust sampled. The TEOM's response time can be quite long for VEM depending on the concentration of aerosol being measured, typically tens of seconds compared to seconds for photometers including the PDR. It is, however, much shorter than the gravimetric sampling period (of the order of an hour). The use of the TEOM should thus be able to confirm that the response factor of the photometer monitor remains essentially constant over the measurement period or, if not, provide more accurate values for the periods where the response factor is significantly different. Further work is necessary to validate this approach for various types of dust and workplace activity.
VEM output
The results from the VEM were processed (using Adobe Flash media tools) for the occupational hygienists as shown in figure 2. The activities were highlighted on the chart and the corresponding exposure and video are viewed by moving the cursor (highlighted in figure 2 by arrow) over the chart and exposure profile. This information is suitable for adapting and incorporating into guidance such as Toolbox talks [11] which are short talks focused around specific health and safety issues and allow workers, safety professionals and managers to explore risks and develop strategies for dealing with them. It is the intention that Toolbox talks can help to demystify health and safety and to show the relevance of specific topics to particular jobs.
Background
In the pharmaceutical industry, there is potential for exposure to active agents during powder transfer operations. Various control measures can be adopted to minimise exposure risk, in particular, the use of a curtain barrier when dispensing in a downflow booth. VEM, using a non-pharmaceutically active powder (xanthan gum), was undertaken to show how improved control further reduced personal exposure by monitoring with and without a curtain barrier. Moreover, this offered the opportunity for disseminating training information using the 'ELVis' software.
Materials and methods
The ultraclean environment required that all equipment be checked and wiped, and operators wore coveralls. Also, note that for clean applications such as this, separate instruments are used from those for 'dirty' applications as described in section 2. Both wireless instruments (radio telemetry - Satel Radiomodems) and wireless video (transmitter/receiver - Astrotel Communication Corp.) were employed. A dust lamp (HSE Tyndall beam dust lamp) was used to enhance the video footage by showing the suspended dust particles. Two cameras were employed for wide-angle and close-up views. The real-time monitor (Casella Microdust) was chosen to allow periodic checks of the span as well as the zero during the measurement period. A harness was adapted to fit the sampling probe (25 cm long) in the breathing zone of the worker. The monitor (based on the photometric light scattering principle) was factory calibrated with respect to total suspended particulate (TSP), which approximates to the inhalable fraction [4]. The activity to be monitored was simple and used as a test case for the technique. Powder was transferred in a downdraft booth from one receptacle to an adjacent one using a scoop (figure 3); when full, the contents were bagged and transferred back and the process repeated. Monitoring was performed before and after installation of a curtain barrier - a clear plastic sheet fixed between opposite walls of the booth with two holes cut centrally for insertion of the operator's arms. The operator carried out the same activities but behind the barrier. These data were then summarized and displayed as bar graphs as shown in figure 5. Emptying the bag resulted in the highest exposure, while scooping and bulk transfer exposures were roughly similar. The beneficial effect of the simple curtain barrier can be clearly seen and was demonstrated in a very short time - the exercise took half a day, with the actual measurement taking less than one hour.
Conclusions
Two contrasting examples of workplace environment have been used to illustrate how VEM with real-time dust monitors, calibrated with reference samplers, can transform task analysis data into useful information on the hazards of dust (poultry waste and pharmaceutical powder) and its control. The different environments placed different demands on the VEM measurement equipment, e.g. low light levels and very mobile workers at the poultry farm and the ultra-clean, sterile environment at the pharmaceutical production facility. The calibration of the real-time monitors, which are essentially responsive to the respirable fraction, such that they read in inhalable dust concentration has been highlighted as an area where further work may be needed to (a) test the validity of such calibration over the measurement period for various dusts and activities and (b) develop real-time, portable inhalable monitors, possibly based on the TEOM.
Development of an Exportable Modular Building System by Integrating Quality Function Deployment and TRIZ Method
Quality function deployment and TRIZ method are widely used to develop new products in the manufacturing industry. These methods are known to be extremely effective for cost reduction and quality improvement. However, unlike the general manufacturing process, the manufacturing of an exportable modular building system involves many sub-processes that proceed concurrently. Therefore, there is a limit on the efficiency that can be achieved if either of these methods is directly applied to product development. In order to address this issue, the authors propose a new methodology wherein quality function deployment is integrated with TRIZ. The results of a case study show that application of the new method makes it possible to reduce the volume of an exportable modular building system compatible with ISO container shipping by 48% and to decrease the weight of structural steel by 30%.
Introduction
Quality function deployment (QFD) is a customer-driven methodology in which customers' needs are systematically transformed into product specifications (Kim et al., 2015). Generally, QFD is applied over a number of phases. The first phase derives the critical to quality (CTQ) characteristics through a correlation analysis between quality characteristics and the customers' needs. In the second phase, the key functions are determined through correlation between the quality characteristics and required functions. In the final phase, the product is designed through a correlation analysis between the functions and design factors. This methodology is widely used in the manufacturing (Akao, 1994) and construction industries (Pheng and Hui, 2004; Chun and Cho, 2015).
TRIZ (a Russian acronym for the theory of inventive problem solving) is an effective methodology for deriving creative ideas in new product development; it was proposed by Altshuller et al. (2002), and its core concept is the resolution of contradictions. A number of technical inconsistencies and physical contradictions may arise during the application of QFD. For example, suppose 'I' and 'II' are both important quality characteristics that reflect the customers' needs. It may happen that if the quality of 'I' increases, then that of 'II' decreases, and vice versa; thus, it may be difficult to satisfy the requirements for both 'I' and 'II' simultaneously. In order to solve this problem, several researchers developed a new methodology in which QFD is integrated with TRIZ to resolve the technical contradictions. This idea has been applied to the development of manufactured products; examples of this method applied to the manufacture of a laptop computer and a washing machine are discussed by Yeh et al. (2011) and Yamashina et al. (2002), respectively.
In this study, the authors developed an exportable modular building system using a new methodology, which integrates TRIZ and two phases of QFD. An exportable modular building system is a good option for situations in which the supply of sufficient labor and materials to foreign construction sites is difficult (Eom et al., 2014; Lawson et al., 2011). In general, owing to the large volume of exportable modules, the delivery cost of such systems makes up approximately 30% of the total cost. Hence, it is important to develop a system that will incur low delivery cost. The integrated TRIZ-QFD methodology was used to find a balance between the small volume and the high manufacturing cost of the exportable modules.
Integration of TRIZ and Two-phases of QFD
When applying QFD methodology, the house of quality (HOQ) structure is used to convert the quality required by customers into the quality characteristics for design of products (Akao, 1994). Fig.1. shows two HOQs that represent the first and second phases of the QFD analysis, respectively. The first phase of the QFD methodology is usually applied to perform a correlation analysis between quality requirements and characteristics. If a physical contradiction arises in the correlation of quality characteristics represented by room A of Fig.1.(a), TRIZ is used in this step. A new procedure is proposed, as shown in Fig.2.; it involves integrating two phases of QFD and TRIZ and is called a functional HOQ (F-HOQ). In this procedure, rooms 6, 7, 8, and B of the second phase of the QFD are attached to the first phase of the QFD. By integrating these two phases, all the quality requirements (QRs), quality characteristics (QCs), and function requirements (FRs) can be directly related in a single diagram. Rooms A and B in the figure represent the correlations among the QCs and FRs, respectively. This diagram shows their contradictions, and thus, TRIZ can be easily applied in a single diagram.
Case Study of Integrated QFD and TRIZ
The developed F-HOQ was applied to the design of a representative example of exportable modular accommodation modules for construction workers. The plan view and shipped modules are shown in Fig.3.
Determination of CTQs
To derive the quality requirements, customers' opinions were obtained by surveying and interviewing potential customers; the opinions were converted into quality requirements. The obtained quality requirements are grouped into three levels as listed in Table 1. As presented in Table 2., critical customer requirements (CCRs) and CTQs can be determined through an analysis process corresponding to rooms 1 to 5 of F-HOQ. Through the correlation analysis between QRs and QCs, the priorities of the QCs can be evaluated. The quality requirements of level 3 are rearranged into room 1 in the F-HOQ. The QCs are assigned different weights, such as ◎ : 5 points, ○: 3 points, and △: 1 point, depending on their relation with the corresponding customer requirements. Among the five QCs, the top three are selected as CTQs and used to set up development targets. Table 3. lists the top three CTQs and their target levels.
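As an illustration of this scoring step, the symbol weights (◎ = 5, ○ = 3, △ = 1) can be applied as a matrix product between the QR importance vector and the QR-QC relationship matrix. The Python sketch below uses hypothetical numbers, not the values in the paper's tables.

```python
import numpy as np

# All numbers are hypothetical, for illustration only.
# Symbol-to-point mapping from the paper: double circle = 5, circle = 3, triangle = 1.
qr_weights = np.array([5, 4, 3])        # importance of three quality requirements (QRs)
rel = np.array([[5, 3, 0],              # QR x QC relationship matrix
                [3, 5, 1],
                [0, 1, 5]])

qc_scores = qr_weights @ rel            # weighted priority score of each QC
ranking = np.argsort(qc_scores)[::-1]   # highest-scoring QCs become CTQ candidates
print(qc_scores, ranking)
```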
Determination of Key Functions
After determining the CTQs through the correlation analysis, the key functions of the exportable modular building were derived. Fig.4. illustrates the process of the function analysis of the exportable modular building. The functions in Fig.4. correspond to room 6 of F-HOQ. The key functions are derived from a correlation analysis between the QCs and required functions. Room 8 of F-HOQ shows the priority ratios of the key functions. Among the functions considered in Fig.4., "Shipping," "form of modules," and "form outside the module" were finally selected as the key functions. This procedure is illustrated in Table 4.
Integration of TRIZ and QFD
Effective solutions can be obtained by resolving technical problems with contradictions. The contradictions generally occur if the improvement of a parameter or a characteristic of a technical system causes the deterioration of other parameters or characteristics. The contradiction matrix of TRIZ is based on knowledge and consists of 39 types of engineering parameters and 40 principles of invention to resolve conflicts, as presented in Table 5. The characteristics to be improved are placed along the vertical axis, and the horizontal axis is used for characteristics that deteriorate. The solutions among the 40 principles can be found at the intersection between two parameters. If a negative correlation exists between two functions, they are also converted into two of the 39 parameters of TRIZ. Table 6. lists three pairs of converted parameters from among the QCs for which contradictions exist. Table 7. lists the converted parameters from room B of the F-HOQ. The parameters converted from the QCs and functions are rearranged in the contradiction matrix presented in Table 8.
Solution Derivation Process
Possible solutions can be derived by following the process described below (a code sketch of the lookup in step 3 is given after the example):
-Step 1: Find QCs that have negative correlations and convert these QCs into two of the 39 parameters of TRIZ.
-Step 2: Find functions that are strongly correlated with the QCs of step 1.
-Step 3: Refer to the contradiction matrix corresponding to the function and QCs in step 2 and find solutions.
Fig.6. shows an example of the solution derivation procedure. QC1 "volume of modules for delivery" and QC3 "number of components per unit volume" have negative correlations. These QCs are converted into "volume of a stationary object" and "complexity of a device," respectively. These two QCs are strongly related to the function F01-02 "form outside the module." By referring to the contradiction matrix, two invention principles, "principle 1: Segmentation" and "principle 31: Porous material," are recommended.
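The lookup in step 3 can be pictured as a table indexed by (improving parameter, worsening parameter) pairs. The following Python sketch is illustrative; only the single cell discussed above is filled in, and the dictionary structure is an assumption, not the authors' implementation.

```python
# Illustrative sketch: (improving parameter, worsening parameter) -> principles.
contradiction_matrix = {
    ("volume of a stationary object", "complexity of a device"): [1, 31],
}

principles = {1: "Segmentation", 31: "Porous materials"}

key = ("volume of a stationary object", "complexity of a device")
for number in contradiction_matrix.get(key, []):
    print(f"Principle {number}: {principles[number]}")
```

In a full implementation the dictionary would hold all cells of the 39 x 39 matrix; it is shown here only to make the lookup step concrete.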
The contradiction matrix of the quality characteristics that have a correlation with F01-02 "form outside the module" is shown in Table 9. The 40 invention principles of TRIZ provide the solutions that can resolve contradictions. Table 10. presents the process of solution derivation for function F01-02. Two principles, namely "Segmentation" and "Porous materials", are used for solving the contradiction. The same process can be applied to other functions. Fig.7. shows the design result of the developed modular unit. Corner casts that are compatible with ISO containers are used at the upper and lower corners of a unit. The upper corner cast is temporarily used during delivery; it is removed after the module is connected at the construction site. Fig.8. compares the total volumes of the existing system and the design result. The development helped reduce the total volume of modules for delivery (622.9 m³) by 48.0% compared to the currently used system (1197.7 m³). The newly developed system is also compatible with ISO container ships. Fig.9. shows the final design of the exportable modular building. Table 11. lists the verification results of the CTQs. The volume of the modules for delivery and the weight of frames per unit area are reduced compared to those in the existing system. The ratio of ceiling height to module height is 0.85, which is the same as that in the existing system.
Conclusions
In this paper, the authors propose a new product development process called the F-HOQ. This methodology integrates TRIZ and two phases of QFD. The F-HOQ provides a tool for designers to reflect customers' needs in the development of products systematically. By applying the proposed F-HOQ process to the development of an exportable modular building system, the volume of modules for delivery and weight of frames per unit area were significantly reduced as compared to the existing system. The proposed F-HOQ can be used as a customer-driven innovative methodology for developing new products in the field of construction. The results of a case study show that by applying the new method, the volume of an exportable modular building system compatible with ISO container shipping was reduced by 48% and the weight of structural steel was reduced by 30%.
Role of Early Assesment of Diuresis and Natriuresis in Detecting In-Hospital Diuretic Resistance in Acute Heart Failure
Background and Purpose: European Guidelines recommend early evaluation of diuresis and natriuresis after the first administration of a diuretic to identify patients with insufficient diuretic response during acute heart failure. The aim of this work is to evaluate the prevalence and characteristics of patients with insufficient diuretic response according to this new algorithm. Methods: Prospective observational single centre study of consecutive patients with acute heart failure and congestive signs. Clinical evaluation, echocardiography and blood tests were performed. Diuretic-naïve patients received 40 mg of intravenous furosemide. Patients on an outpatient diuretic regimen received 2 times the ambulatory dose. The diuresis volume was assessed 6 h after the first loop diuretic administration, and a spot urinary sample was taken after 2 h. Insufficient diuretic response was defined as natriuresis <70 mEq/L or diuresis volume <600 ml. Results: From January 2020 to December 2021, 73 patients were included (59% males, median age 76 years). Of these, 21 patients (28.8%, 95%CI 18.4; 39.2) had an insufficient diuretic response. Diuresis volume was <600 ml in 13 patients (18.1%), and 12 patients (16.4%) had urinary sodium <70 mEq/L. These patients had lower systolic blood pressure, worse glomerular filtration rate, and higher aldosterone levels. The ambulatory furosemide dose was also higher. These patients more frequently required thiazides and inotropes during admission. Conclusion: The diagnostic algorithm based on diuresis and natriuresis was able to detect up to 29% of patients with insufficient diuretic response, who showed some characteristics of more advanced disease.
INTRODUCTION
Signs and symptoms of congestion are usually the most common manifestations among patients with acute heart failure (HF) (Adams et al., 2005), and intravenous loop diuretics remain the most widely used therapy to achieve euvolaemia (Fonarow et al., 2004). Diuretic response is defined as the capacity of diuretics to induce natriuresis and diuresis (ter Maaten et al., 2015a).
Identification of patients who may have a poor diuretic response is one of the most important challenges in the field of HF, since a poor diuretic response is associated with a higher risk of rehospitalization and increased mortality (Metra et al., 2012;Neuberg et al., 2002;Valente et al., 2014;ter Maaten et al., 2015b;Testani et al., 2014;Voors et al., 2014). To date, no uniform and standard definition was available to allow the early identification of patients at risk of developing resistance to diuretic treatment during HF hospitalization.
The Position Statement from the Heart Failure Association of the European Society of Cardiology about the use of diuretics in heart failure with congestion (Mullens et al., 2019), and more recently the European Guidelines for the diagnosis and treatment of acute and chronic heart failure (McDonagh et al., 2021), have proposed an algorithm that includes the early assessment of diuresis and natriuresis after the first administration of loop diuretics in patients with acute HF, in order to detect patients with insufficient diuretic response who might benefit from diuretic intensification.
To date, data on the prevalence of early diuretic resistance according to these parameters have not yet been described.
The aim of this work is to evaluate the prevalence and features of acute HF patients who present an insufficient diuretic response according to this algorithm.
METHODS
From January 2020 to December 2021, we conducted a prospective, observational, single centre study on a sample of consecutive patients aged ≥18 years whose primary admission diagnosis was acute HF and who were admitted to the cardiology department. The diagnosis of acute HF was based on the current ESC HF guidelines. In addition, NT-proBNP >300 pg/ml and the presence of at least two of the following congestion criteria were required: jugular venous pressure >10 cm, lower limb edema, ascites, or pleural effusion determined by chest x-ray or pulmonary ultrasound.
Patients in cardiogenic shock and/or on dialysis were excluded. Patients in whom urine output or natriuresis could not be recorded or were missed were also excluded.
Study Procedures and Statistical Analysis
Complete clinical evaluation, echocardiogram and laboratory tests were performed. Diuretic naïve patients received 40 mg of intravenous furosemide. Patients on an outpatient diuretic regimen received 2 times the home dose. The diuresis volume was assessed 6 h after the first loop diuretic administration, and a spot urinary sample was taken after 2 h. Urinary sodium was measured using a Siemens Dimension EXL chemistry analyzer. Insufficient diuretic response was defined as natriuresis <70 mEq/L or diuresis volume <600 ml.
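The response criterion can be restated compactly as a predicate over the two measurements. The following Python sketch is illustrative; the function name and example values are not from the study.

```python
def insufficient_diuretic_response(urinary_na_meq_l: float, diuresis_6h_ml: float) -> bool:
    """Study definition: spot urinary sodium (at 2 h) < 70 mEq/L
    or diuresis volume over the first 6 h < 600 ml."""
    return urinary_na_meq_l < 70 or diuresis_6h_ml < 600

# Example (hypothetical values): low natriuresis flags the patient
# even when the 6 h urine output is preserved.
print(insufficient_diuretic_response(55, 900))  # True
```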
Values of continuous variables are given as the median and interquartile range (IQR). Categorical variables are described as absolute and relative frequencies. The associations between clinical characteristics and diuretic response were analyzed by univariate analysis using the chi-square test for categorical variables and the Mann-Whitney U test for continuous variables. A p-value <0.05 was considered significant. All analyses were performed using STATA v.13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, TX) and R software (R Foundation for Statistical Computing, version 3.6.0).
The present study conforms to the principles of the Declaration of Helsinki. Approval from the local ethics committee/internal review board was obtained at the participating centers and patients signed an informed consent.
RESULTS
From January 2020 to December 2021, 694 patients were admitted for acute HF. Nearly 50% of these patients did not meet the inclusion criteria as they presented predominantly pulmonary congestion. About 30% could not be included because the treating physician did not follow the ESC protocol, partly due to the COVID-19 pandemic.
A final sample of 73 patients was included (59% males, median age 76 years [IQR: 70-85]). Four initially included patients were not analysed because urinary output was not correctly collected. Of the remaining sample (73/78), 21 patients (28.8%) met the definition of early insufficient diuretic response.
Compared with patients with an adequate diuretic response, these patients had lower systolic blood pressure, worse glomerular filtration rate and higher aldosterone levels; their ambulatory furosemide dose was also higher, and they more frequently required thiazides and inotropes during admission.
DISCUSSION
To date, this is the first study to show the performance of the algorithm proposed by the HF European guidelines for the early assessment of diuretic response in a cohort of patients with acute HF.
This algorithm based on diuresis volume and natriuresis was able to detect up to 29% of patients with insufficient diuretic response who might benefit from enhanced diuretic treatment. These patients showed some characteristics traditionally described in patients with diuretic resistance in other settings.
Natriuresis and Diuresis in Acute Heart Failure
Sodium and fluid retention is a hallmark of HF. As effective diuretic response is produced by natriuresis, urinary sodium has emerged as a useful parameter to predict natriuretic response in patients with HF soon after diuretic administration , which can be measured from a urinary spot sample with good accuracy (Testani et al., 2016). In this line, several studies have reported the usefulness of natriuresis after the first dose of diuretic to predict long-term adverse events (Singh et al., 2014;Honda et al., 2018;Luk et al., 2018;Biegus et al., 2019;Hodson et al., 2019), and two studies have also suggested its usefulness in detecting the development of worsening HF during hospitalization (Collins et al., 2019;Cobo -Marcos et al., 2020).
Although a high diuresis volume following a first intravenous loop diuretic administration is usually associated with good diuretic response and a high urinary sodium (Testani et al., 2016;Singh et al., 2014), some data indicate that in patients with low to medium volume output, spot urinary sodium content offers independent prognostic information (Brinkley et al., 2018). Indeed, in our cohort only 4 patients (5.5%) had both low urinary sodium and a decreased urine output.
Therefore, a spot urine sodium content of <50-70 mEq/L after 2 h, and/or an hourly urine output <100-150 ml during the first 6 h, provide additional information and could identify patients with an insufficient diuretic response.
Characteristics of Patients With Insufficient Diuretic Response
The present study confirms findings from previous studies that a poor response is associated with some features of more advanced disease (Metra et al., 2012; Neuberg et al., 2002; Valente et al., 2014; ter Maaten et al., 2015b; Testani et al., 2014; Voors et al., 2014; ter Maaten et al., 2015a). In our cohort, 43% of the patients had a previous HF hospitalization, and the outpatient diuretic dose was high. Besides, compared with patients with an adequate diuretic response, these patients had lower systolic blood pressure at admission, worse glomerular filtration rate, and greater neurohormonal activation. It should be noted that variables such as age, left ventricular ejection fraction or natriuretic peptides are not usually associated with the diuretic response in different studies (Metra et al., 2012; Neuberg et al., 2002; Valente et al., 2014; ter Maaten et al., 2015b; Testani et al., 2014; Voors et al., 2014; ter Maaten et al., 2015a). Furthermore, in our cohort no other clinical (Charlson index, EVEREST score) or echocardiographic features (TAPSE, inferior vena cava) differed between the two populations. These data highlight the role of this algorithm in the evaluation of diuretic response in this setting.
Finally, although this study did not assess long-term events, we showed that patients with a worse diuretic response more frequently required combination diuretics and inotropes during admission.
Feasibility of the ESC Algorithm
At this time, two other studies are evaluating the performance of this diagnostic strategy: the ENACT-HF trial (Rationale and Design of the Efficacy of a Standardized Diuretic Protocol in Acute Heart Failure Study) (Dauw et al., 2021) and the PUSH-HF trial (Natriuresis-guided therapy in acute heart failure: rationale and design of the Pragmatic Urinary Sodium-based treatment algoritHm in Acute Heart Failure) (Maaten et al., 2022).
It should be noted that this novel algorithm involves a more proactive approach and closer monitoring of the diuretic response.
This requires specific training and coordinated, continuous collaboration between the professionals involved in the management of HF patients, especially emergency department staff, in order to extend the implementation of this diuretic protocol.
Limitations
Our cohort consisted of 73 patients from one academic institution, so the findings may not be generalizable to the wider acute HF population.
In addition, only a low percentage of patients (11%) was included relative to overall acute HF admissions. Patients with predominantly pulmonary congestion without other congestion signs were not included, some patients did not follow the protocol by decision of the responsible staff, and recruitment was also affected by the COVID-19 pandemic.
CONCLUSION
The diagnostic algorithm based on diuresis and natriuresis provided complementary information and was capable of early detection of up to 29% of patients with acute HF from this cohort who presented an insufficient diuretic response.
This finding may help to stratify patients who may benefit from more intense treatment for decongestion during hospital admission.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Comité de Ética e Investigación con Medicamentos (CEIm) del Hospital Puerta de Hierro Majadahonda. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MC-M, FD, PG-P and JS contributed to conception and design of the study. AiM, MM, AS, and CG, contributed to
FUNDING
This work was partially supported by grants from the Instituto de Salud Carlos III (PI20/00689). (Co-funded by European Regional Development Fund/European Social Fund "A way to make Europe"/ "Investing in your future"). We acknowledge funding from a grant from the Spanish Society of Cardiology (Heart Failure Section, 2019).
An overview of PST for vibration based fault diagnostics in rotating machinery
In general, diagnostics can be defined as the procedure of mapping the information obtained in the measurement space to the presence and magnitude of faults in the fault space. These measurements, and especially their nonlinear features, have the potential to be exploited to detect changes in dynamics due to the faults. We have been developing some interesting techniques for fault diagnostics with gratifying results. These techniques are fundamentally based on extracting appropriate features of nonlinear dynamical behavior of dynamic systems. In particular, this paper provides an overview of a technique we have developed called Phase Space Topology (PST), which has so far displayed remarkable effectiveness in unearthing faults in machinery. Applications to bearing, gear and crack diagnostics are briefly discussed.
Introduction
Fault diagnostics of practical systems is a very important problem that needs to be solved robustly in order to enable giant leaps in reliability and safety. Diagnostics is essentially an epistemological problem that requires us to make intelligent inferences from data, derived from empirical observations or computer models, which are often incomplete, noisy, and uncertain. Although there is a rich and varied literature, we feel that many of the diagnostic techniques in use are quite ad hoc and heuristic, resulting in a lack of general applicability [1].
This paper presents innovative and rigorous techniques that exploit nonlinear characteristics in a computational intelligence setting to diagnose changes in complex systems. Our approach consists of developing diagnostic methods using a combination of nonlinear dynamic analysis and computational intelligence techniques. In this paper, several applications are chosen with sufficient generality to be applicable to a host of disciplines.
The theoretical approaches are validated using data from fault simulators at Villanova University and Case Western Reserve University; we also validate our algorithms using experimental data from practical machinery provided by United Technologies Research Center (UTRC, USA) and the Federal University of Uberlândia (Brazil) [4]. The rest of the paper is organized as follows. Section 2 describes a family of methods that were originated and derived by our team: Phase Space Topology (PST) and Extended Phase Space Topology (EPST). In Section 3, we present a recent investigation of EPST for bearing defect analysis. Section 4 summarizes some of the applications that were investigated in order to generalize the applicability of our developed methods. Finally, Section 5 concludes the paper.
Extended phase space topology method
We first developed the method of Phase Space Topology [1][2][3], which is based on the transformation of the phase space into a density space that is characterized with quantitative measures. It was shown that, depending on the geometry and shape of the phase space, the density diagram contains peaks of various heights and sharpness at multiple locations, an example of which is shown in Fig. 1. This stems from the fact that the dynamical system spends more time in specific regions of the space, causing higher densities in those regions. The properties of the peaks in the density diagrams, including their location, height, and sharpness, were used as features in the initial approach. Despite the success of this approach, the need to search for the peaks in the density diagrams made it difficult or sometimes even impractical to implement, especially for systems with noisy or more complex phase space patterns. This led to EPST, which is a continuation of our development of the PST family of algorithms. EPST is based on approximating the density distribution with Legendre polynomials, the details of which are described below.
Kernel density estimation
Let X = (x_1, x_2, ..., x_n) be an independent and identically distributed sample drawn from a distribution with an unknown density function f. The shape of this function can be estimated by its kernel density estimator

$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)$$

(the hat ^ indicates that it is an estimate, and the subscript indicates that its value can depend on h). Here, h > 0 is a smoothing parameter called the bandwidth, and K(·) is the kernel function, which satisfies the requirements K(u) ≥ 0 and ∫ K(u) du = 1.
There is a range of kernel functions that can be used, including uniform, triangular, biweight, triweight, Epanechnikov, and normal. Due to its conventional and convenient mathematical properties, we use the normal density function in our approach, defined as

$$K(u) = \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}.$$
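To make the estimator concrete, here is a minimal Python sketch that evaluates a Gaussian-kernel density estimate on a grid; the sample data and bandwidth are illustrative placeholders, not values from the paper.

```python
import numpy as np

def gaussian_kernel(u):
    """Normal (Gaussian) kernel: non-negative and integrates to one."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(x_grid, samples, h):
    """Kernel density estimate f_hat evaluated at the points in x_grid.

    samples : i.i.d. sample x_1..x_n drawn from the unknown density f
    h       : bandwidth (h > 0), which controls the smoothing
    """
    n = len(samples)
    # One kernel centred on every sample, averaged over the samples.
    u = (x_grid[:, None] - samples[None, :]) / h
    return gaussian_kernel(u).sum(axis=1) / (n * h)

# Example: estimate the density of a bimodal signal.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])
x_grid = np.linspace(-5, 5, 200)
density = kde(x_grid, samples, h=0.3)
```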
Density distribution approximation
Let x be a state of the system and y_d = f̂_h(x) its density computed using the kernel density estimator. y_d is then approximated with Legendre orthogonal polynomials. Legendre polynomials can be obtained directly from Rodrigues' formula, which is given by

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}\left(x^2 - 1\right)^n.$$

They can also be obtained using Bonnet's recursion formula,

$$(n+1)\,P_{n+1}(x) = (2n+1)\,x\,P_n(x) - n\,P_{n-1}(x),$$

where the first two terms are given by P_0(x) = 1 and P_1(x) = x. The coefficients of the Legendre polynomials are obtained by using the least squares method, assuming the linear regression model

$$y_d = \Phi \beta + \epsilon,$$

where Φ is the design matrix whose k-th column contains P_k evaluated at the sampled states. The estimated coefficients are then given by

$$\hat{\beta} = \left(\Phi^T \Phi\right)^{-1} \Phi^T y_d.$$

The coefficients β̂ constitute the features in our approach that can be used in classification or regression problems. The approximated density using Legendre polynomials is then calculated as

$$\hat{f}(x) = \sum_{k=0}^{m} \hat{\beta}_k P_k(x).$$

Root mean square error (RMSE) and Pearson's correlation coefficient (PCC) are calculated to assess the quality of the fit:

$$\mathrm{RMSE} = \sqrt{\frac{Z^T Z}{N}}, \qquad \mathrm{PCC} = \frac{\mathrm{cov}(y_d, \hat{f})}{\sigma_{y_d}\,\sigma_{\hat{f}}},$$

where Z = (y_d − f̂) is the residual vector and N is the number of points in the density function.
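A minimal sketch of this fitting step is shown below, assuming the states are first rescaled to the natural Legendre domain [−1, 1]; the order-20 default mirrors the choice reported later in the paper, while the function and variable names are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_features(x, y_d, order=20):
    """Least-squares Legendre fit to a density curve.

    x     : system states; y_d : densities from the kernel density estimator
    Returns the coefficient vector beta_hat (the EPST feature vector),
    the approximated density, and the two goodness-of-fit measures.
    """
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0   # map onto [-1, 1]
    Phi = legendre.legvander(x, order)      # column k holds P_k at every state
    beta_hat, *_ = np.linalg.lstsq(Phi, y_d, rcond=None)  # least-squares solution
    f_hat = Phi @ beta_hat                  # approximated density
    z = y_d - f_hat                         # residual vector
    rmse = np.sqrt(np.mean(z**2))
    pcc = np.corrcoef(y_d, f_hat)[0, 1]     # Pearson's correlation coefficient
    return beta_hat, f_hat, rmse, pcc
```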
Artificial neural network
Artificial neural networks (ANNs) are a set of algorithms designed to recognize a relation or pattern between inputs and outputs. An ANN consists of an interconnected group of nodes called artificial neurons, and each node has a corresponding weight that adjusts as learning proceeds. ANNs can be used to solve regression problems for a continuous output or classification problems for a discrete output. One of the most widely used algorithms for training an ANN is the backpropagation algorithm, a popular method for optimizing the weights of the network so that it correctly maps inputs to outputs. It works by propagating an input forward through the network layers to the output layer, where the calculated output is compared with the desired output. The errors between the calculated and desired outputs are computed and propagated backwards. These errors are traced back to each associated neuron in order to update the weights.
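As an illustrative sketch rather than the authors' implementation, a backpropagation-trained feed-forward classifier of the kind described here can be assembled with scikit-learn; the feature matrix and labels below are random placeholders standing in for the 22-element EPST feature vectors used in Section 3.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(760, 22))   # placeholder: Legendre coefficients + speed
labels = rng.integers(0, 4, size=760)   # placeholder: 0=H, 1=IR, 2=OR, 3=B

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels)

# Small feed-forward network; the weights are optimized by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```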
Example application: bearing diagnostics
The algorithm, a flowchart of which is illustrated later in Fig. 3, is best explained with an example application; in this case, we choose the bearing diagnostics problem. Many traditional bearing fault detection techniques involve pattern recognition, which is effective only at one operating speed and requires retraining the classifier each time the rotational shaft speed changes, because the dynamic response of the system depends on the rotational speed. This limitation motivates the need for a new method that is effective under variable operating speeds. The current study investigates different bearing configurations under two operating conditions: (A) constant operating speed and (B) variable operating speed.
To achieve this goal, the classification was initially performed by training and testing the classifier on the same set of speeds. The classifier was trained at 19 rotational speeds and then tested on the same set of rotational speeds. The second step involved generalizing this diagnostic approach to variable operating speeds. In this step, the classifier was trained on one set of speeds and then tested on a different set of speeds. A detailed description of both of the above-mentioned procedures is provided in the sequel.
Case A: Constant operating speed
A rotating fault simulator machine, shown in Fig. 2, is employed to study a variety of different bearing defects under various rotational speeds (300-3000 rpm). Four bearing conditions were investigated: healthy bearings (H), bearings with inner race defects (IR), outer race defects (OR) and ball defects (B). Proximity probe sensors were used to measure the vibration signal of the shafts in two orthogonal directions.
The density function of the horizontal vibration signal for every speed and bearing condition was approximated using Legendre polynomials. The order of the polynomial was selected based on the best fit between the estimated density function and the approximated density function. Root mean square error and Pearson's correlation coefficient were calculated to compute the quality of the fit. Legendre polynomials of order 20 were used to approximate the estimated density functions. The coefficients of the Legendre polynomials were computed for each of the 760 sampled signals using the least squares method described above. The computed coefficients for each case were saved in a vector of 21 arrays (using only the horizontal vibration signal), which was used as a feature input to train an ANN classifier. Since the rotational speed has a high impact on the dynamic response, it was used as an additional feature, making the total number of features equal to 22. The performance of the classification model is presented by means of confusion matrices. In general, in a confusion matrix, the predicted classes are compared with the actual classes. Each row of the matrix represents the prediction results for the corresponding class, while each column represents the actual class. The elements on the main diagonal of the matrix represent the correctly classified predictions for each corresponding class; these elements are known as true positives. For a specific row, all elements excluding the element on the main diagonal are the misclassified predictions for the corresponding class, known as false positives. A false negative for a specific class is defined as the sum of the elements in its corresponding column, excluding the element on the main diagonal.
The classifier performance can be analyzed using evaluation metrics derived from the confusion matrix, such as accuracy, sensitivity, and precision. Table 1 shows the predictions for training and test data using the neural network classifier. As can be seen, the classifier has been able to predict all defects with 100% accuracy, 100% precision, and 100% sensitivity with no misclassification. This result is remarkable for several reasons. Firstly, it shows that combining the EPST method with the proximity sensor data can resolve the challenges in identifying faults at low rotational speeds (below 10 Hz). Secondly, no a priori knowledge of the system was included in the features. This suggests that the EPST approach can be conveniently applied to diverse dynamical systems in an automated process, with minimal need for adaptation and reliance on expert knowledge about the system. Conventional bearing analyses search for specific characteristics of the system such as ball pass frequencies, but this study did not require any additional analysis because the method functioned well without it. We note with caution that it may well be the case that other operating conditions require additional feature combinations. Finally, no feature ranking or feature selection algorithm [5] was employed to select the optimal feature set. Because the effect of the coefficients in the function approximation decreases as the number of orthogonal functions increases, the calculated coefficients are naturally ranked by their order of significance.
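The evaluation metrics mentioned above follow directly from the confusion matrix. The sketch below uses the row/column convention of the text (rows are predicted classes, columns are actual classes); the example matrix is hypothetical.

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, sensitivity (recall) and precision from a confusion matrix.

    cm[i, j] = number of samples of actual class j predicted as class i.
    """
    tp = np.diag(cm).astype(float)   # true positives
    fp = cm.sum(axis=1) - tp         # misclassified predictions per row
    fn = cm.sum(axis=0) - tp         # missed samples per actual class
    accuracy = tp.sum() / cm.sum()
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, sensitivity, precision

# Hypothetical 4-class confusion matrix (H, IR, OR, B):
cm = np.array([[150,   0,   0,   2],
               [  0, 149,   0,   5],
               [  0,   1, 150,   6],
               [  0,   0,   0, 137]])
acc, sens, prec = per_class_metrics(cm)
```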
Case B: Variable operating speed
For this part of the study, horizontal and vertical vibration data were used for every speed and bearing condition to construct the density function. The estimated density functions were then approximated using Legendre polynomials of order 20. As in case A, the order of the approximated density function was selected based on the root mean square error and Pearson's correlation coefficient. The first 15 Legendre polynomial coefficients in each direction were used as a feature set. The shaft rotational speed was added to the feature set to produce an input vector of 31 arrays for each sampled data set. The feature vector was used as an input to train the artificial neural network classifier. The neural network was modeled with 10 neurons and the backpropagation algorithm. The classifier was trained using the extracted features from vibration data for different bearing conditions and for four rotational speeds. The rotational speeds selected to train the classifier are the machine operating range boundaries (300 and 3000 rpm) and two middle speeds (1200 and 2400 rpm). The available vibration data of 160 total samples for different bearing conditions at these speeds were used for training the artificial neural network. The remaining 600 samples obtained at the other speeds (e.g., 420, 600, ..., 2820 rpm) were used for testing the trained classifier. Figure 3 shows a flowchart of the algorithm for case B. The classification results, represented as a confusion matrix for the test data for four bearing conditions, are shown in Table 2. As can be seen, the classifier has 96.7% overall accuracy, with 20 misclassifications out of 600 predictions. These results indicate a high prediction rate of the classifier for the four bearing conditions. Most of the misclassified predictions are for bearings with ball defects. For a better understanding of the classifier performance, sensitivity and precision were calculated for each bearing condition and are shown in Table 2. These evaluation metrics represent a measure of the classification performance for each bearing condition.
Summary of applications
This section presents a summary of some of the applications that we investigated including bearing fault diagnostics, gear fault diagnostics and crack shaft diagnostics. In the following subsections a brief introduction to each system along with the main contributions are described.
Other studies in bearing diagnostics
Rotating machines are probably among the most important components in industry. They are composed of different sub-systems interacting with each other in a nonlinear fashion; changes in any of these components can significantly affect the overall performance. Rolling element bearing defects are one of the major sources of breakdown in rotating machinery. The rotating fault simulator shown in Fig. 2 and mentioned in Section 3 is used to study a variety of machinery defects under various operating conditions such as rotational speed, load, and unbalance. It basically consists of a motor-driven shaft mounted on two bearings. Shafts and bearings with different sizes and conditions can be used, as can various vibration sensors such as accelerometers and proximity probes.
We have performed various investigations on this setup in order to develop robust techniques to diagnose bearings. In [5][6][7][8][9], we applied conventional methods such as the fast Fourier transform, envelope spectrum, and discrete wavelet transform over a span of rotational speeds, as well as nonlinear physics-based modeling [10][11][12][13]. Accelerometer data were used to extract features to diagnose bearings with inner race, outer race, and ball defects. Mutual information was then used as a ranking technique, and the optimal feature subset corresponding to the highest classification accuracy was determined. An overall accuracy of 97.0% was achieved using this procedure.
In [14,15], we introduced the mapped density method in order to discriminate simultaneous bearing faults under various rotational speeds. In this work we studied the use of the information provided by proximity probe sensors. The method has significant success in fault discrimination for single and two bearing fault configurations (accuracy of 97% for a single bearing fault and 92% for two bearing faults). Moreover, the results indicate that this method performs well in distinguishing between the signatures of different bearing conditions (accuracy of 88%).
In [4,16,17], the EPST method was introduced. In this work we performed bearing diagnostics across different rotational speed domains and obtained very good results (overall accuracy of 96.7%). We have also applied other nonlinear techniques such as recurrence plots [18], Gottwald and Melbourne's 0-1 test, and the Higuchi fractal dimension [19].
Gear-train setup
Gear fault diagnostics is still a challenging task because of the highly nonlinear characteristics of faults and their complex nonstationary dynamics. Our work investigated gear fault diagnostics using vibration data of a helicopter gearbox mock-up system provided by UTRC. The experimental setup is shown in Fig. 4. This work studied multiple test gears with different health conditions: healthy gears (H) and defective gears with a root crack on one tooth (SCD), multiple cracks on five teeth (MCD), and a missing tooth (MTD). The vibration signals were recorded using a triaxial accelerometer installed on the test gearbox.
In [20,21], we presented the application of recurrence plots (RPs) and recurrence quantification analysis (RQA) to the diagnostics of various faults in a gear-train system. We also applied mutual information to rank the extracted features in order to obtain an optimal feature set. Results indicate that RQA parameters provide valuable information in characterizing the dynamics of various gear faults in order to discriminate the healthy gear condition from defective conditions. Outstanding performance was achieved using RQA parameters to identify various gear conditions, with 100% accuracy, 100% recall, and 100% precision in detecting multiple cracks and missing tooth conditions.
In [22], the EPST method was applied to detect anomalous behavior and to diagnose various gear defects. Results indicate 99% accuracy in classifying between different gear conditions.
Crack detection
Experiments were conducted on a Crack Propagation Simulator test rig consisting of a flexible steel shaft mounted on two roller bearings. Two orthogonal proximity probes oriented in the horizontal and vertical directions were used to measure the vibration response of the shaft. The crack propagator was used over a period of 24 hours to produce a fatigue crack, which is the first damage condition. A second fatigue crack was then produced by using the crack propagator for another 24-hour period.
In [23], the EPST method integrated with mutual information was applied to detect cracks and to identify the level of degradation. Mutual information was used to only select the most relevant EPST extracted features. Results show 100% performance by using this algorithm; it is notable that only three features were necessary to detect cracks and identify the crack level.
Wide field polarimetry around the Perseus cluster at 350 MHz
This paper investigates the fascinating diffuse polarization structures at 350 MHz that have previously been tentatively attributed to the Perseus cluster and, more specifically, tries to find out whether the structures are located at (or near) the Perseus cluster, or in the Milky Way. A wide field, eight point Westerbork Synthesis Radio Telescope mosaic of the area around the Perseus cluster was observed in full polarization. The frequency range was 324 to 378 MHz and the resolution of the polarization maps was 2′ × 3′. The maps were processed using Faraday rotation measure synthesis to counter bandwidth depolarization. The RM-cube covers Faraday depths of −384 to +381 rad m−2 in steps of 3 rad m−2. There is emission all over the field at Faraday depths between −50 and +100 rad m−2. All previously observed structures were detected. However, no compelling evidence was found supporting association of those structures with either the Perseus cluster or large scale structure formation gas flows in the Perseus-Pisces super cluster. On the contrary, one of the structures is clearly associated with a Galactic depolarization canal at 1.41 GHz. Another large structure in polarized intensity, as well as Faraday depth, at a Faraday depth of +30 rad m−2 coincides with a dark object in WHAM H-alpha maps at a kinematic distance of 0.5 ± 0.5 kpc. All diffuse polarized emission at 350 MHz towards the Perseus cluster is most likely located within 1 kpc from the Sun. The layers that emit the polarized radiation are less than 40 pc/B∥ thick.
Introduction
De Bruyn & Brentjens (2005) discovered fascinating structures in sensitive 350 MHz polarimetric observations of the Perseus cluster (Abell 426) with the Westerbork Synthesis Radio Telescope (WSRT). These structures were discovered using a novel data reduction procedure called RM-synthesis (Brentjens & de Bruyn 2005), which extends the work of Burn (1966) to multiple lines of sight and arbitrary source spectra, and were tentatively attributed to the Perseus cluster. More information on Faraday depth and depolarization can be found elsewhere in the literature (Tribble 1991;Sokoloff et al. 1998;Vallee 1980). There were two classes of polarized emission: clearly Galactic, more or less uniform emission at φ = 0 to +12 rad m −2 , and distinct features that appeared at Faraday depths between +20 and +80 rad m −2 . The second class consisted of two straight features and three other distinct structures. The straight features were the "front" on the western side of the field and the "bar" at the northern edge. A lenticular feature, partially embedded in the front with its major axis parallel to the front, was called the "lens". Directly east of the lens was a very bright, shell-like object called the "doughnut". A patch of polarized emission north of the extended tail of NGC 1265 was called the "blob".
Two possible locations were considered for the second class of emission: our Galaxy, in particular the Perseus arm, and the Perseus cluster itself. We favoured the latter because there was a gap in Faraday depth between the large scale Galactic foreground and the distinct features; the typical scales in both polarized intensity and Q and U of the features were considerably smaller than the low φ emission; higher Faraday depths appeared to occur closer to 3C 84; the largest structure, the front, was located in the direction of the interface between the Perseus-Pisces super cluster filament and the Perseus cluster; a mini survey of 11 polarized point sources within a few degrees of 3C 84 suggested an excess in Faraday depth of +40 to +50 rad m−2 of the emission with respect to these background sources, which was difficult to explain by a small Galactic Faraday rotating cloud.
If the objects indeed resided at or near the cluster, the "front" could be a large scale structure formation shock at the outskirts of the Perseus cluster, squashing a buoyant bubble (the "lens"). It was suggested that the "doughnut" and "blob" are bubbles that were released more recently into the cluster medium by AGN. The discovery of X-ray cavities (Fabian et al. 2003;Clarke et al. 2004) much closer to 3C 84, combined with simulation work on buoyant bubbles and radio relic sources (Enßlin et al. 1998;Enßlin & Gopal-Krishna 2001;Brüggen 2003;Enßlin & Brüggen 2002) reinforced this idea. Highly polarized relic sources have been observed in several galaxy clusters (Röttgering et al. 1997;Enßlin et al. 1998;Govoni et al. 2001Govoni et al. , 2005, but never in the Perseus cluster. A Galactic origin for the high φ structures could nevertheless not be ruled out, the main issue being that none of the structures had a counterpart in Stokes I. Because the noise in I is considerably higher than the noise in Q and U due to classical source confusion, a counterpart was only expected for the brightest polarized structures. It was nevertheless puzzling that not even the "doughnut" was detected in total intensity, although its polarized surface brightness is only slightly lower than the noise level in Stokes I. One possible explanation is that the Stokes I surface brightness is intrinsically low. This requires a fractional polarization close to the theoretical limit for a synchrotron emitting plasma with an isotropic distribution of the electron velocity vectors (≈70%, see e.g. Le Roux 1961; Rybicki & Lightman 1979). Another explanation is that the Stokes I surface brightness is only apparently low. This is a well known property of interferometric observations of Galactic synchrotron emission (Wieringa et al. 1993), which is extremely smooth in Stokes I and is therefore not picked up by the shortest WSRT baseline of 40λ. The Stokes Q and U structure, however, is detectable at much longer baselines due to small scale changes in the observed polarization angle. The apparent fractional polarization can therefore far exceed 100%. This effect could be important in the observations by de Bruyn & Brentjens (2005) because several observing sessions lacked the shortest spacing.
In this paper I present an eight point WSRT mosaic of the region around the Perseus cluster. Faraday rotation measure synthesis was used to map polarized intensity in the area where 2h58m ≤ α ≤ 3h35m and 39°20′ ≤ δ ≤ 44° (J2000) for −384 ≤ φ ≤ +381 rad m−2. The primary goal is to assess whether the structures previously observed by de Bruyn & Brentjens (2005) are located near the Perseus cluster or in the Milky Way.
An angular distance of one degree corresponds to 1.5 Mpc at the distance of the Perseus cluster and 35 pc at the distance of the Perseus arm (≈ 2 kpc). The redshift of the Perseus cluster is z = 0.0167 (Struble & Rood 1999). I assume that H 0 = 72 ± 2 km s −1 Mpc −1 (Spergel et al. 2003).
Observations
The observations were conducted with the Westerbork Synthesis Radio Telescope (Baars & Hooghoudt 1974; de Bruyn 1996). The array consists of fourteen parallactic 25 m dishes on an east-west baseline and uses earth rotation to fully synthesize the uv-plane in 12 h. There are ten fixed dishes (RT0-RT9) and four movable telescopes (RTA-RTD).
The distance between two adjacent fixed telescopes is 144 m. The distances between the movable dishes were kept constant (RTA-RTB = RTC-RTD = 72 m, RTB-RTC = 1224 m), while the distance RT9-RTA was changed for every observing session. The uv-plane is therefore sampled at regular intervals of 12 m out to the longest baseline of 2760 m, lacking only the 0, 12, and 24 m spacings. The regular interval causes an elliptical grating ring with an east-west radius of 4° and a north-south radius of 4°/sin δ at 350 MHz. The observations were conducted in mosaic mode. The pointing centres are listed in Table 1. Each session began at a different field to improve the position angle distribution in the uv-plane (see also Table 2). The dwell time per pointing was 150 s and the total integration time after six observing sessions was 8h22m per field.
The eight frequency bands are each 10 MHz wide and are centred at 319, 328, 337, 346, 355, 365, 374, and 383 MHz. The multi-frequency front ends (Tan 1991) of the WSRT have linearly polarized feeds for this frequency range. The x dipole is oriented east-west, the y dipole north-south. The correlator produced 64 channels in all four cross-correlations for each band with an integration time of 10 s. The observations used 180° front-end phase switching. The on-line system applied a Hanning (Harris 1978) lag-to-frequency taper, effectively halving the frequency resolution.
The observations were bracketed by two pairs of calibrators, each consisting of one polarized and one unpolarized source. 3C 345 and 3C 48 were observed before the mosaic and 3C 147 and the eastern hot spot of DA 240 afterwards.
Data reduction
Flagging, imaging, and self calibration were performed with the AIPS++ package (McMullin et al. 2004). Flux scale calibration, polarization calibration, ionospheric Faraday rotation corrections, and deconvolution were performed with a calibration package written by the author and based on the table, measures, and fitting modules of AIPS++/CASA. Channels and frequency bands are numbered from 1.
Data quality
Although the lowest and highest sub-bands had to be discarded, the data quality was generally good and interference levels were low. The Sun was still up at the beginning of all but the first observing session (see Table 2). The system temperatures were usually between 130 K and 220 K, with the median at 175 K. The expected thermal RMS image noise in a clean Hanning-tapered channel after 8h22m of integration is 2.6 mJy beam−1 (Thompson et al. 1998).
Because of the Hanning tapering, I processed only the odd numbered channels from 5 to 59 inclusive. Approximately 20% of the data in these channels were flagged, hence the expected thermal RMS image noise per field at full resolution is 0.22 mJy beam −1 after averaging all processed channel maps from the six usable spectral windows. The visibilities were time averaged to 30 s before calibration and imaging to reduce processing time.
Calibration
The flux scale, bandpass, and polarization leakages were calibrated simultaneously per individual channel by solving the Hamaker-Bregman-Sault Measurement Equation (Sault et al. 1996) for the unpolarized calibrator sources 3C 147 and 3C 48. The Perley & Taylor (1999) calibrator fluxes, which extend the Baars et al. (1977) flux scale to lower frequencies 1 , established the absolute flux scale.
The polarization leakages were solved per channel because of their strong 17 MHz semi-periodic frequency dependence (see e.g. de Bruyn & Brentjens 2005). The diagonal phases of the RT0 Jones matrix were fixed at 0 rad. The remaining xy phase difference was determined using the polarized sources.
The ionospheric Faraday rotation was corrected with the method from Brentjens (2008). All fields were subsequently individually self calibrated (Pearson & Readhead 1984) with a CLEAN component (Högbom 1974) based sky model in I and Q. The strongly frequency dependent polarization leakages required a separate CLEAN model per channel.
Fields A and B were calibrated with three phase-only iterations because the total flux in these fields was too low for amplitude self calibration. The remaining fields were calibrated with two phase-only iterations and one amplitude/phase iteration. Each 10 MHz band was self calibrated with a single Jones matrix per antenna at 30 s intervals.
Imaging
All fields were imaged and deconvolved separately. The point spread functions (PSFs) and dirty channel images in all Stokes parameters were created using AIPS++. The uv-plane was uniformly weighted. Because of a fractional bandwidth of 15%, the maps had to be convolved to a common resolution of 74″ × 96″ FWHM, elongated north-south, using a Gaussian uv-plane taper. All maps are in north celestial pole (NCP) projection with the projection centre at 3C 84 (J2000: α = 3h19m48.1601s, δ = +41°30′42.106″). The dirty maps have 2048×2048 pixels of 30″ × 30″ each.
The central 1024×1024 pixels of the dirty images were deconvolved using a Högbom CLEAN (Högbom 1974). The CLEAN mask consisted of all Stokes I pixels brighter than 6, 5, 10, 10, 15, 8, 10, and 10 mJy beam−1 for fields A-H, respectively. The deconvolution was stopped whenever the maximum residual in the masked area was below 0.5 mJy beam−1 or when 10 000 iterations were completed without reaching the threshold. The resulting model images were convolved with a 74″ × 96″ FWHM elliptical Gaussian and added back to the residual images. The deconvolution of the Q and U images was terminated after 10 000 iterations or if a threshold of 0.5 mJy beam−1 was reached.
The primary beam corrected images were combined into one mosaic image per channel per Stokes parameter. The restored Stokes Q and U mosaic maps were subsequently convolved to a resolution of 2.′0 × 3.′0 FWHM to enhance the signal-to-noise ratio of extended emission. The expected RMS thermal noise after averaging all low resolution images for one field is increased to 0.4 mJy beam−1 near the pointing centres and 0.3 mJy beam−1 in the areas surrounded by four pointings, because the convolution suppresses data from long baselines. Although the theoretical Q and U noise is approached at the intersection of fields A, B, C, and D, the RMS image noise in most areas of the mosaic is a factor of two higher due to Stokes U dynamic range problems associated with 3C 84. [Footnote 1: The flux scale of WSRT observations has since 1985 been based on a 325 MHz flux of 26.93 Jy for 3C 286 (the Baars et al. (1977) value). On that flux scale, the 325 MHz flux of 3C 295 is 64.5 Jy, which is almost 7% more than the value assumed at the VLA and in this paper (A. G. de Bruyn, private communication).]
RM-synthesis
The 143 good quality polarization maps were processed using RM-synthesis (Brentjens & de Bruyn 2005) to avoid bandwidth depolarization. The RM-cube covers the range −384 ≤ φ ≤ +381 rad m−2 in steps of 3 rad m−2. The absolute value of the corresponding RMSF is displayed in Fig. 1. The FWHM of the main peak is 16.4 rad m−2 and the side lobes are of the order of 15%-20%, requiring deconvolution in φ-space using an RM-CLEAN similar to the work of Heald et al. (2009).
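For reference, a minimal sketch of discrete RM-synthesis for a single line of sight, following the transform of Brentjens & de Bruyn (2005) with uniform channel weights; the channel layout is an assumption, and the full pipeline (channel weighting, RM-CLEAN) is not reproduced here.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def rm_synthesis(q, u, freqs_hz, phi_axis):
    """Faraday dispersion spectrum F(phi) for one line of sight.

    q, u     : Stokes Q and U per frequency channel
    freqs_hz : channel centre frequencies [Hz]
    phi_axis : Faraday depths at which to evaluate F(phi) [rad/m^2]
    """
    p = q + 1j * u
    lam2 = (C / np.asarray(freqs_hz)) ** 2
    lam2_0 = lam2.mean()  # reference lambda^2
    # F(phi) = (1/N) sum_k P(lam2_k) exp(-2i phi (lam2_k - lam2_0))
    phase = np.exp(-2j * np.outer(phi_axis, lam2 - lam2_0))
    return phase @ p / lam2.size

# Faraday depth axis as used for the RM-cube in this paper.
phi_axis = np.arange(-384.0, 381.0 + 3.0, 3.0)  # -384..+381 rad/m^2 in steps of 3
```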
Images
The size of the mosaic is 4.°5 in declination by 7° in right ascension. [Figure: overview of the mosaic in total intensity at a resolution of 74″ × 96″ FWHM. The contours are drawn at −9, +9, 18, 36 mJy beam−1 ... 18 Jy beam−1. The numbered crosses mark the locations of the Faraday spectra in Fig. 3. The dynamic range in this map is approximately 6000:1.] The observed RMS noise increases to 3 mJy beam−1 in field G. The difference between the expected thermal noise and the observed RMS noise is caused by ionospheric non-isoplanaticity. It is worse in the eastern fields because they contain more bright sources than the western fields. These problems are unpolarized and therefore do not limit the dynamic range of the Stokes Q and U images. The total dynamic range is approximately 6000:1. The labelled crosses indicate the positions of the Faraday dispersion spectra presented in Fig. 3. There is no significant Stokes I source at most of these locations.
The individual Faraday dispersion spectra (hereafter simply spectra) in Fig. 3 show that emission at multiple Faraday depths along the same line of sight is the norm, rather than the exception, in this part of the sky. The Faraday depth of significant emission ranges from −60 rad m−2 (spectrum 43) to +100 rad m−2 (spectrum 60), or perhaps +140 rad m−2 if the complexity in spectrum 69 is real. Spectra 1, 6, and 37 show the quality of the RM-CLEAN: all have residuals of less than 2% of the main peak, which is comparable to the RMS noise of nearby pixels.
Spectra 24, 28, and 31 are lines of sight through the "lens" structure. They all show clear peaks at a Faraday depth of approximately +50 rad m −2 , in addition to peaks near +6 rad m −2 . Spectrum 30 goes through the centre of the "doughnut" and is triple valued. Spectrum 43 goes straight through the "blob" directly north of the extended tail of NGC 1265. The most complex spectra are located in the fields G and H. The highest absolute Faraday depths occur in fields E, G, and H. Figures A.1 through A.4 show the most interesting part of the RM-cube. The first few frames are devoid of significant emission. The arc between −72 and −60 rad m −2 is instrumental and is caused by a minor calibration error of unknown origin in Stokes U of field G. The first significant emission appears at a Faraday depth of −48 rad m −2 in the northern four fields, especially in fields A (north-west) and H (north-east). The patches have structure at scales of a few arc minutes. The emission increases particularly in the north-eastern part of the mosaic when the Faraday depth approaches 0 rad m −2 .
The entire mosaic is filled with emission with structure at typical scales of tens of arcminutes at Faraday depths between −6 and +12 rad m−2. The peak brightness is almost 30 mJy beam−1 rmsf−1 in the north-west corner. This type of emission dissolves at φ ≈ +18 rad m−2. At that point, a well-defined linear structure develops between α ≈ 3h12m, δ ≈ +39.°7 and the north-west corner of the mosaic. The following frames show that the emission slowly moves east with increasing Faraday depth. It also becomes less uniform. The thin straight line that runs from α = 3h16m, δ ≈ +40° to α = 3h6m, δ ≈ +43.°5 at φ = +42 rad m−2 is called the "front" in de Bruyn & Brentjens (2005). There are several highly significant structures in the area between α ≈ 3h8m, δ ≈ +41.°5 and the northern edge of the mosaic. There are also small patches of emission across the rest of the map, particularly in the north-eastern area.
The "doughnut" and brightest parts of the "lens" (de Bruyn & Brentjens 2005) are visible at Faraday depths of +48 and +54 rad m −2 , along with several patches north of them. There is a blob of emission at φ = +60 rad m −2 around line of sight 43, north of 3C 84 and directly north of the extended tail of NGC 1265. The "bar" (de Bruyn & Brentjens 2005 are still several significant patches of polarized emission in fields E and H, which fade away at Faraday depths above 100 rad m −2 .
Discussion
The bright emission that spans the entire mosaic at a relatively uniform Faraday depth of 0 to +12 rad m−2 is evidently Galactic: its spatial structure, Faraday depth, and brightness temperature are typical for medium latitudes and comparable resolutions (Uyaniker et al. 1999; Haverkorn et al. 2003a,b; Schnitzeler et al. 2007). The brightness temperature of the polarized intensity is 5 to 10 K, with a maximum of 14 K. The Faraday depth range is consistent with observations by Haverkorn et al. (2003a) at similar l and |b|.
In the remainder of this section I argue that most, if not all, of the other extended polarized emission at both higher and lower Faraday depths is Galactic and is not associated with the Perseus cluster. I will do that by discussing the arguments mentioned in the introduction in the light of the new observations.
A special Faraday depth
In Fig. 3, spectra 24, 28, and 31 clearly show the separation between the emission at low φ and the "front" and the "doughnut". It is also evident in spectrum 43 (the "blob") and spectra 44, 46, and 49 (the "bar"). However, as can be seen in the images in Figs. A.1 to A.4 and the Faraday dispersion spectra in Fig. 3, there is significant emission at all Faraday depths between −48 rad m −2 and +100 rad m −2 .
When considering the entire mosaic, there is no trend of higher Faraday depths closer to 3C 84. Of course there is a west to east gradient between φ = +18 rad m −2 and φ = +60 rad m −2 , but higher and lower Faraday depths occur throughout the mosaic. The highest absolute Faraday depths and most complex Faraday dispersion spectra occur in fields E, G, and H in areas that can not be associated with the Perseus cluster.
The Faraday depths of the "front" (+42 to +48 rad m−2), "lens" (≈ +50 rad m−2), "doughnut" (≈ +50 rad m−2), "blob" (+60 rad m−2), and "bar" (+78 rad m−2) are neither extreme nor special when compared to the range of Faraday depths observed in this mosaic. One can therefore not distinguish between Galactic polarized emission and cluster related polarized emission in this field based solely on the value of the Faraday depth if it is in the range from −48 rad m−2 to +100 rad m−2. The area of the "lens", "doughnut", and "blob" at φ = +51 rad m−2 is shown in Fig. 4. The lens is difficult to recognize due to the lower signal to noise ratio compared to the observations by de Bruyn & Brentjens (2005). The position angles are fairly uniform in patches of the order of 15′ across, changing abruptly at the borders between these patches. The polarized patches at the same Faraday depth in field D, north of the "lens", "doughnut", and "blob", are comparable, but are too far away from 3C 84 to be associated with the Perseus cluster. The polarized emission in fields G and H at φ = +84 rad m−2 has fairly uniform polarization angles across each emission patch. These patches are 3′ to 10′ × 20′ in size (see Fig. 5).
Smaller spatial scales at high |φ|
The polarization angle structure of a few representative images from the full RM-cube is shown in Figs. A.5 and A.6. At φ = +6 rad m−2 the position angles are fairly uniform at scales of 30′ to 90′. At φ = +30 rad m−2 they change at 20′ to 30′ scales. At φ = +42 rad m−2 the typical scale is 10′ to 30′, and at higher Faraday depths scales range from 3′ to 20′. These changes can be due to differences in intrinsic polarization, changes in Faraday rotation, or a combination of the two effects. Because of the uncertainty of the precise Faraday depth, it is not possible to discriminate between these possibilities.
The scale size at which the polarization angles change does decrease with increasing |φ|, but this is not limited to the area near 3C 84 and is therefore no argument in favour of or against cluster association. Furthermore, the scales at which the polarization angles at 350.22 MHz change in the "lens", "doughnut", "front", and "blob" are comparable to those of other structures at similar Faraday depth that cannot be linked to the Perseus cluster.
Fractional polarization
The fractional polarization at 351 MHz was estimated by dividing the polarized intensity, integrated over all Faraday depths, by the 408 MHz Haslam et al. (1982) total intensity map converted to 351 MHz using a Galactic synchrotron brightness temperature spectral index β = −2.8 (Reich & Reich 1988b,a; Platania et al. 1998). Between 10 MHz and 100 MHz the spectral index is −2.55 (Cane 1979), hence the actual spectral index between 408 MHz and 351 MHz is probably closer to −2.7. The difference with −2.8 is negligible for the small extrapolation from 408 MHz to 351 MHz. The noise in the derotated Q and U maps is Gaussian,

$$P(n)\,dn = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(n-\mu)^2/2\sigma^2}\,dn,$$

where P(n) dn is the probability of finding a noise value between n and n + dn, and µ and σ are the mean and standard deviation. If the Q and U noise distributions have equal σ, zero mean, and are uncorrelated, then the probability of finding a noise value of |F| between f and f + df follows the Rayleigh distribution

$$P(f)\,df = \frac{f}{\sigma^2}\, e^{-f^2/2\sigma^2}\,df.$$

The RMS of |F| is equal to the RMS of Q and U, which is σ√2. The mean value of the noise in |F| is σ√(π/2). The polarized surface brightness of a low S/N line of sight, integrated over a range of equidistant Faraday depths φ1 ··· φn and corrected for the non-zero mean of the noise level, is therefore

$$P = \frac{1}{B} \sum_{i=1}^{n} \left( |F(\phi_i)| - \sigma\sqrt{\pi/2} \right),$$

where B is the area under the restoring beam of the RM-CLEAN divided by ∆φ = |φi+1 − φi|. See Wardle & Kronberg (1974) for a more general treatment of uncertainties and biases in RM work. Wolleben et al. (2006) have conducted an absolutely calibrated survey of polarized emission north of declination −30° at 1.41 GHz with the 26 m telescope at the DRAO site at a resolution of 36′ FWHM. The integrated 351 MHz polarized intensity, overlaid with the polarized intensity contours from Wolleben et al. (2006), is shown in Fig. 6. The noise level in the 351 MHz map is approximately 0.5 K. With a spectral index of −2.8, the brightness temperature at 351 MHz should be 50 times higher than at 1.41 GHz. This is approximately the case in most of the field, which implies that there is very little depolarization between 1.41 GHz and 351 MHz. In some places, the polarized intensity is even higher at 351 MHz than one would expect based on the low resolution polarized intensity at 1.41 GHz and a spectral index of −2.8. Examples are the area containing the "front", "lens", and "doughnut", and the highly polarized region in field A. This is probably caused by beam depolarization in the Wolleben et al. (2006) observations due to differences in intrinsic polarization angle at scales well below 36′ that are resolved in the observations presented here. Figure 7 displays the fractional polarization [figure: contours at 10%, 20%, 30%, 40%, and 50%; grey scale linear from 0 (white) to 70% (black)]. It is mostly between 10% and 20%, with a maximum of 35% in field A. Although these values are well below the theoretical maximum of 70%, they are relatively high. The low fractional polarization between 3C 84 and NGC 1265 is an artifact caused by the low resolution (0.°85) of the Haslam et al. (1982) map, which blends these powerful sources. Because there are no absolutely calibrated polarimetric single dish observations of the field near 351 MHz, my maps may lack Q and U features at scales larger than about 90′. The fractions are therefore strictly speaking lower limits.
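The Rayleigh noise bias quoted above is straightforward to verify numerically. The sketch below draws uncorrelated Gaussian Q and U noise with a placeholder σ and checks that the mean and RMS of |F| approach σ√(π/2) and σ√2.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.4e-3                        # placeholder channel noise [Jy/beam]
q = rng.normal(0.0, sigma, 1_000_000)
u = rng.normal(0.0, sigma, 1_000_000)
f = np.abs(q + 1j * u)                # Rayleigh distributed

print(f.mean(), sigma * np.sqrt(np.pi / 2.0))        # both ~ 5.01e-4
print(np.sqrt((f**2).mean()), sigma * np.sqrt(2.0))  # both ~ 5.66e-4
```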
The lack of depolarization implies that the synchrotron emitting areas have a Faraday thickness of less than 1 rad m −2 . This is remarkable because the range of Faraday depths is two orders of magnitude larger. Assuming a line of sight magnetic field of 1 µG and a local electron density of 0.03 cm −3 (Gómez et al. 2001;Cordes & Lazio 2002), a Faraday thickness of 1 rad m −2 corresponds to only 40 pc, which is difficult to reconcile with the smoothness of the Galactic synchrotron foreground unless the structures are close to the Sun. Assuming that the emitting patches are approximately as thick as they are wide, nondetection of Stokes I at 90 scales implies that the clouds are closer than 1.6 kpc. Polarization observations at lower frequencies are required to follow the depolarization and determine the exact Faraday thickness.
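The 40 pc figure follows from the standard uniform-slab relation φ = 0.81 n_e B∥ L, with n_e in cm−3, B∥ in µG, and L in pc. A short sketch using the densities and field strength assumed above:

```python
def faraday_path_length_pc(phi_rad_m2, n_e_cm3=0.03, b_par_ug=1.0):
    """Slab thickness implied by a Faraday depth, from phi = 0.81 * n_e * B_par * L."""
    return phi_rad_m2 / (0.81 * n_e_cm3 * b_par_ug)

print(faraday_path_length_pc(1.0))  # ~41 pc: the "40 pc" quoted in the text
print(faraday_path_length_pc(6.0))  # ~250 pc for the phi = +6 rad/m^2 layer
```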
The "front" and the Perseus-Pisces super cluster
The "front" was tentatively interpreted by de Bruyn & Brentjens (2005) as a large scale structure formation shock at the interface between the Perseus cluster and the Perseus-Pisces super cluster. It was unclear at that time whether the "front" extended much beyond the primary beam of the WSRT. As can be seen in the Association with the Perseus cluster is therefore unlikely.
As can be seen in the 21 cm polarization map by Wolleben et al. (2006) (Fig. 8), the Perseus cluster is located behind the north-western tip of a large field of Galactic depolarization canals (see e.g. Fletcher & Shukurov (2006) and Haverkorn et al. (2000) for an in-depth treatment of depolarization canals). Interestingly, the "front" coincides with the centre of such a canal, hence the "front" is very likely Galactic. If the "lens" is associated with the "front", it must also be Galactic. The two structures may of course be unrelated, but their coincidence in Faraday depth, position, and position angle suggests otherwise.
φ with respect to background sources
The excess of +40 to +50 rad m−2 in φ of the structures observed by de Bruyn & Brentjens (2005) with respect to the background sources was based on a small number of polarized sources near the centre of the mosaic. Taylor et al. (2009) have since published a comprehensive RM catalogue based on a re-analysis of 37 543 NVSS sources, allowing a more detailed analysis. de Bruyn et al. (in prep.) have also conducted WSRT observations of more than 200 polarized sources in and around this area during the 2004/2005 winter season. Those data will be reported in a subsequent paper. Figure 9 illustrates the relation between the Taylor et al. (2009) sources, the de Bruyn & Brentjens (2005) sources, and the diffuse polarized emission. The logarithmic grey scale image represents the maximum |F(φ)| at a particular Faraday depth and horizontal position in a 1° thick horizontal slab through the RM-cube, centred at α = 3h20m and δ = +41°48′12″. The Taylor et al. (2009) sources are those within a 4° thick horizontal slab centred at the same position. The sources from de Bruyn & Brentjens (2005) partly overlap with the Taylor et al. (2009) selection. Where they do, the rotation measures agree within the error bars. The α tick marks indicate the right ascension at the centre of the slabs. The dashed lines are the background points convolved with a Gaussian kernel of ∆α = 5m FWHM.
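A sketch of the smoothing used for the dashed curves, assuming a plain Gaussian kernel in right ascension with a FWHM of ∆α = 5m (5/60 of an hour) applied to the scattered background rotation measures; the names and implementation are illustrative.

```python
import numpy as np

def smooth_rm_trend(ra_hours, rm_values, grid, fwhm_hours=5.0 / 60.0):
    """Gaussian-kernel smoothed RM trend evaluated on a right ascension grid."""
    sigma = fwhm_hours / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    w = np.exp(-0.5 * ((grid[:, None] - ra_hours[None, :]) / sigma) ** 2)
    return (w @ rm_values) / w.sum(axis=1)  # kernel-weighted mean per grid point
```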
Both dashed curves show a clear trend in the background RM. The same trend is visible in the polarized emission at φ > +12 rad m−2 and α < 3h17m. This suggests that there is an area behind the emission at high φ with a relatively uniform Faraday depth. The scatter in background RMs is relatively large, as is the scatter in the Faraday depth of polarized emission. Whether this scatter can be explained adequately by a turbulent magnetized ISM or by the IGM around the background sources needs to be investigated using numerical MHD simulations.
A crude model
The large scale magnetic field in the vicinity of the Sun is estimated at 1.4 µG and points towards l ≈ 80 • (Han & Qiao 1994;Sun et al. 2008). The Faraday depth at l ≈ 150 • should therefore be slightly negative, which is clearly not observed. The most prominent polarized features are instead observed at positive Faraday depth, indicating a magnetic anomaly in the direction of the Perseus cluster.
It is interesting to briefly explore the conditions in the ISM that are required to explain the observed Faraday depths. Figure 10 illustrates the emerging picture. Because of the generally high degree of polarization and lack of depolarization between L band and 350 MHz, it is likely that the areas with the most pronounced polarized emission are less than 40 pc thick.
The wide spread emission at φ ≈ +6 rad m−2 is probably the most nearby component because of its uniform Faraday depth and polarization angle structure at large spatial scales. The uniform Faraday depth also suggests that n_e B∥ towards the Perseus cluster near the Sun is rather uniform. The thickness of this Faraday rotating layer is therefore L ≈ φ/(0.81 n_e B∥) ≈ 6/(0.81 × 0.03 × 1) pc ≈ 250 pc for the values assumed above. If the polarized emission at φ = +6 rad m−2 is located closely behind this area, this would place the emission near the edge of the local bubble, which is estimated to have a radius of approximately 200 pc (see e.g. Sun et al. 2008). This is consistent with the suggestion by Wolleben et al. (2006) that the Sun resides inside a synchrotron emitting region, provided that the emission lies beyond a non-emitting magnetized plasma. Beyond this emission follows an area with a Faraday thickness ranging from approximately +6 rad m−2 in the west to +70 rad m−2 at the centre of the mosaic, as was discussed in the previous section. This layer is followed by the emission containing the western diagonal structures and the "front", "lens", "doughnut", and "bar" near the centre of the mosaic. The diagonal features in this area, including the "front" and "lens", appear connected spatially as well as in Faraday depth. Figure 11 shows the integrated Hα surface brightness contours observed by the Wisconsin Hα Mapper (WHAM, Haffner et al. 2003) overlaid on the integrated polarized intensity map. There is an absorption feature in the integrated Hα map running from the centre of the southern edge of the field to the north-west corner. The minimum Hα brightness coincides with the bright, complex polarized patch around line of sight 32 that is visible between φ = −24 rad m−2 and +36 rad m−2. Furthermore, the 60 µm and 100 µm IRAS (Neugebauer et al. 1984) infrared maps show enhanced infrared emission here, as well as diagonally towards the north-west. The centre of the dark Hα feature runs along the diagonal structures in the western part of the field between φ = +24 rad m−2 and φ = +36 rad m−2. The WHAM feature is visible between −20 and +20 km s−1, but is most prominent between −20 and 0 km s−1. Assuming a central velocity of −5 ± 5 km s−1 and a Galactic rotational velocity near the Sun of 200 ± 10 km s−1 (Merrifield 1992; Binney & Dehnen 1997), the kinematic distance to the cloud is 0.5 ± 0.5 kpc. The polarized emission at high Faraday depth is probably located behind the WHAM structure.
Another hint at the proximity of the high-φ emission comes from the −45 ± 5 rad m −2 offset in Faraday depth between the high-φ emission and the polarized background sources. Assuming that most of this offset is due to the Milky Way, one can estimate the Faraday thickness of this layer using models for the electron density and magnetic field along the line of sight, and a lower and upper integration limit. The electron model consists of the NE2001 spiral arms, thin disc, and thick disc (Cordes & Lazio 2002) modified according to Gaensler et al. (2008). The magnetic field model is the ASS+ring model from Sun et al. (2008). The upper integration limit was set at 20 kpc from the Sun. Simulations were performed with magnetic pitch angles of 12 • (Sun et al. 2008) and 8 • (Han & Qiao 1994) and with different total field strengths (once, twice, and four times the strength from Sun et al. (2008)). The results are shown in Fig. 12.
[Figure 12 caption: Model Faraday thickness of a slab along the line of sight towards the Perseus cluster as a function of the distance to the near side of the slab. The far side of the slab is held constant at 20 kpc distance from the Sun. The assumed magnetic pitch angle is 12° (solid lines) or 8° (dashed lines). The thinner sets of lines represent magnetic models where the total field strength was multiplied by a factor of two or four. The grey bar indicates the 1σ and 2σ uncertainty levels of the observed Faraday thickness of the slab between the high-φ polarized emission and the polarized background sources.]

The points where the curves intersect with ∆φ = −45 rad m−2 mark the maximum distance to the slab, and therefore to the most remote polarized emission, that can still build up the required Faraday depth when integrating out to 20 kpc. It is clear that the default models cannot explain the observed gap. It is necessary to either invoke a special area just beyond the most distant emission with strongly deviating electron densities and/or magnetic fields, or an increase in the large scale field strength or electron density, possibly combined with a decrease in pitch angle. In any case it is difficult to defend a distance of more than 1 kpc to the near side of the Faraday rotating area, hence it is likely that the high-φ polarized emission is located well within 1 kpc from the Sun. I consider the order of the structures in Fig. 10 accurate. The uncertainties in the distances to individual objects are of order a factor of two for each of the objects.
Concluding remarks
I have shown that the polarized Galactic radio synchrotron foreground near l ≈ 150 • , b ≈ −13 • is very complex. Most lines of sight show radio emitting screens at multiple Faraday depths between −50 and +100 rad m −2 . Because of the layer of negative Faraday depth behind the high-φ emission in this part of the sky, it is very difficult to distinguish between Galactic and cluster related polarized emission.
Although the "lens" could very well be associated with the "front", it remains a peculiar structure. If the "lens" is related to the Perseus cluster, it is not unlike the giant curved relic sources in Abell 3667 (Röttgering et al. 1997) and Abell 2744 (Orru' et al. 2007). However, it is uncertain how many are still highly polarized at 350 MHz. Although the Abell 2256 relic is highly polarized at 1.4 GHz (Clarke & Enßlin 2006), WSRT observations at 350 MHz (Brentjens 2008) showed it is completely depolarized due to internal Faraday dispersion (Burn 1966). Nor was there evidence for other polarized emission in or near Abell 2256 at 350 MHz. Abell 2255 is another cluster with large, shock related radio filaments at its outskirts (Govoni et al. 2005;Pizzo et al. submitted). Although these sources are highly polarized at 1.4 GHz, they are fully depolarized at 350 MHz and 150 MHz (Pizzo et al. submitted).
The arguments presented in the previous section and the absence of polarized relic emission at low frequencies in two other clusters with clear evidence for merger shocks at high frequencies lead to the conclusion that all polarized diffuse emission described in de Bruyn & Brentjens (2005) and in this work is Galactic and resides within a few hundred parsecs from the Sun.
xCT-Driven Expression of GPX4 Determines Sensitivity of Breast Cancer Cells to Ferroptosis Inducers
Inducers of ferroptosis such as the glutathione depleting agent Erastin and the GPX4 inhibitor Rsl-3 are being actively explored as potential therapeutics in various cancers, but the factors that determine their sensitivity are poorly understood. Here, we show that expression levels of both subunits of the cystine/glutamate antiporter xCT determine the expression of GPX4 in breast cancer, and that upregulation of the xCT/selenocysteine biosynthesis/GPX4 production axis paradoxically renders the cancer cells more sensitive to certain types of ferroptotic stimuli. We find that GPX4 is strongly upregulated in a subset of breast cancer tissues compared to matched normal samples, and that this is tightly correlated with the increased expression of the xCT subunits SLC7A11 and SLC3A2. Erastin depletes levels of the antioxidant selenoproteins GPX4 and GPX1 in breast cancer cells by inhibiting xCT-dependent extracellular reduction which is required for selenium uptake and selenocysteine biosynthesis. Unexpectedly, while breast cancer cells are resistant compared to nontransformed cells against oxidative stress inducing drugs, at the same time they are hypersensitive to lipid peroxidation and ferroptosis induced by Erastin or Rsl-3, indicating that they are ‘addicted’ to the xCT/GPX4 axis. Our findings provide a strategic basis for targeting the anti-ferroptotic machinery of breast cancer cells depending on their xCT status, which can be further explored.
Introduction
Ferroptosis is a form of cell death that involves an iron-dependent accumulation of lipid peroxides. Several cellular components have emerged as playing key roles in the regulation of ferroptosis. The cystine-glutamate antiporter xCT, which is composed of the subunits SLC7A11 and SLC3A2, plays a protective role against ferroptosis by allowing the import of cystine, which is a rate-limiting step in the biosynthesis of the antioxidant molecule glutathione [1]. The enzyme glutathione peroxidase 4 (GPX4) utilizes glutathione as a substrate for reduction, and the activity of GPX4 in reducing lipid peroxidation has been shown to be a key step in the prevention of ferroptosis [2].
GPX4 is a selenoprotein, with a selenocysteine residue in its active site, and thus it requires selenocysteine biosynthesis for its expression and activity [3,4]. The selenocysteine biosynthesis pathway incorporates selenium, such as in the dietary form selenite, to produce selenocysteinyl-tRNA. The antioxidant function of selenium as a nutrient is mediated through selenoproteins such as glutathione peroxidases and thioredoxins, including GPX4, thus implicating selenium in ferroptosis as well [5,6]. Furthermore, it was found that elevated SLC7A11 expression in cancer cells drives selenocysteine biosynthesis by promoting selenite reduction and uptake as an early step in selenocysteine biosynthesis [4].
Materials and Methods
All reagents used in this paper are listed in Supplementary Table S1.
Cell Lines and Cell Culture
All cell lines were cultured in a humidified incubator at 37 °C under 5% CO₂. The cell line information used in this paper is provided in Supplementary Table S2.
Western Blot Analysis
Cells were washed with cold PBS and lysed in RIPA lysis buffer with protease inhibitor (Sigma Aldrich, St. Louis, MO, USA). After 20 min incubation on ice, lysates were centrifuged at 12,000× g at 4 °C for 10 min to collect the supernatant. The protein concentration of each sample was determined by Bradford assay. Typically, 15-25 µg of total protein were denatured in 6× Laemmli buffer (Boston BioProducts, Boston, MA, USA), loaded per lane on an SDS-PAGE gel (Bio-Rad, Hercules, CA, USA), and analyzed by standard immunoblotting. The target proteins were detected by ECL with Pico or diluted Femto substrate (Thermo Fisher Scientific, Waltham, MA, USA).
Cell Viability Assay
The effect of GPX4 inhibition is affected by cell confluency (PMID: 31341276). Thus, all viability assays were performed with consistent cell seeding numbers and time points. For testing the toxicity of hydrogen peroxide, Erastin, and Rsl-3 in cell lines, 1500 cells were plated per well of a 96 well plate. The next day, cells were treated with the drugs. Cell viability was measured by the CellTiter-Glo Luminescent Assay (Promega, Madison, WI, USA).
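As an illustration of how readings from such an assay are typically reduced to a dose-response estimate, here is a minimal sketch: luminescence is normalized to the vehicle control and a four-parameter logistic curve is fitted. The doses, signal values, and IC50 parameterization are hypothetical, not data or code from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Hypothetical CellTiter-Glo luminescence readings (arbitrary units):
# a vehicle control plus an Erastin dilution series.
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])         # uM, assumed
lum = np.array([98000, 95000, 80000, 42000, 12000, 5000])   # raw signal
vehicle = 100000.0

viability = 100.0 * lum / vehicle  # percent of vehicle control
params, _ = curve_fit(hill, doses, viability,
                      p0=[100.0, 0.0, 3.0, 1.0], maxfev=10000)
print(f"fitted IC50 ~ {params[2]:.2f} uM")
```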
Conditioned Media Thiol Quantification (Ellman's Test)
10,000 cells were plated per well of a 96 well plate. The next day, the media was changed to 100 µL fresh media containing vehicle or Erastin. After 12 h of conditioning, the conditioned media were collected and directly mixed with 50 µL of 10 mM DTNB (5,5′-dithiobis-(2-nitrobenzoic acid)) dissolved in DMSO in another 96 well plate. Absorbance at 450 nm was measured spectrophotometrically within 3 min (DTX880, Beckman Coulter, Indianapolis, IN, USA). The blank value was subtracted as noise, and all values were normalized to that of unconditioned media. The leftover cells in the 96 well plate were subjected to the CellTiter-Glo Luminescent Assay after adding fresh media. Because phenol red masks the color change of DTNB, phenol red-free media was used for this assay.
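The normalization described above amounts to a blank subtraction followed by a ratio against unconditioned media; a minimal sketch is shown below. All absorbance values are invented placeholders.

```python
import numpy as np

# Hypothetical A450 readings from the DTNB (Ellman's) plate
a_blank = 0.045                        # DTNB reagent with no thiol source
a_unconditioned = 0.520                # fresh, unconditioned media + DTNB
a_samples = np.array([0.780, 0.430])   # vehicle- and Erastin-conditioned media

# Subtract the blank as noise, then normalize to unconditioned media
relative_thiols = (a_samples - a_blank) / (a_unconditioned - a_blank)
print(relative_thiols)  # > 1: net thiol export by cells; < 1: thiol depletion
```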
Total Selenium Measurement by ICP-MS Analysis
3 million CAL120 and 4 million MDAMB231 cells were plated in 150 mm dishes, and the media were changed the next day to media containing vehicle, 12 µM selenite, or 12 µM Erastin. After 2 h of treatment, cells were harvested, washed three times with cold PBS, and weighed. The cell pellets were treated with 500 µL of 1:1 H₂O₂/HNO₃ and left standing in a safety hood for 12 h at room temperature with periodic venting. Samples were then sonicated for 1 h at 35 kHz and 40 °C with periodic venting. The contents of the Eppendorf tubes were then transferred to a glass digestion tube with ASTM type I water (2 × 1 mL). Tubes were sealed, heated at 140 °C for 2 h, then cooled and diluted to 5 mL with ASTM type I water for selenium analysis.
Selenium measurements were carried out with an Agilent 7500A ICP-MS system equipped with a standard concentric nebulizer, a Peltier-cooled, double-pass Scott-type spray chamber, a torch shield, and standard nickel interface cones. Between analytical samples, the probe and sample introduction system were rinsed with 10% nitric acid for 60 s to prevent carryover. Calibration curves were made with standard solutions of Se (Ultra Scientific, Kingstown, RI, USA), and QC samples were prepared using a multi-element standard (Environmental Express).
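Converting raw ICP-MS counts to selenium content per cell pellet typically runs through a linear calibration fit and the digest volume; a minimal sketch under that assumption follows. The standards, counts, and pellet masses are made up for illustration.

```python
import numpy as np

# Hypothetical calibration standards for Se (ppb) and measured counts
std_conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
std_counts = np.array([120.0, 980.0, 4700.0, 9500.0, 47200.0])

slope, intercept = np.polyfit(std_conc, std_counts, 1)  # linear calibration

def counts_to_ppb(counts):
    """Invert the calibration line to get Se concentration in ppb."""
    return (counts - intercept) / slope

sample_counts = np.array([2300.0, 15800.0])
ppb = counts_to_ppb(sample_counts)
# Normalize to the weighed cell pellet mass and the 5 mL digest volume
pellet_mass_g = np.array([0.21, 0.24])
ng_se_per_g = ppb * 5.0 / pellet_mass_g  # ppb == ng/mL; 5 mL digest
print(ng_se_per_g)
```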
Detection of Lipid Peroxidation
Cells were collected by trypsinization and washed with HBSS. The cell pellets were resuspended in 150 µL of 5 µM BODIPY™ 581/591 C11 lipid peroxidation sensor (Thermo Fisher Scientific, Waltham, MA, USA) in HBSS and incubated at 37 °C. After 20 min of incubation, 500 µL of HBSS was added to the stained cells, which were immediately subjected to FACS analysis on a BD LSR II flow cytometer. Briefly, an SSC-A and FSC-A gating strategy was applied to remove cell debris. FSC-H and FSC-A subgating was performed to identify the single-cell population. Around 10,000 cells gated as singlets were used for analysis. The BD FACS Diva program and FlowJo 10 were used for data collection and data analysis, respectively.
RNA Expression and Prognostic Value of SLC7A11 and SLC3A2 in Normal and Tumor Tissues
Analyses of gene expression data in normal breast and cancer tissues, and of disease-free survival and overall survival in patients with breast cancer, were performed using the web tool GEPIA [10]. Total numbers of samples for each analysis are not designated by the user; instead, the GEPIA web tool designates high and low groups relative to the median and provides the total numbers. Total numbers may differ between the different genes (SLC7A11 vs. SLC3A2) because some samples do not meet the GEPIA criteria for either the high or low designation for the given gene.
Processing of Human Breast Tissues
Human breast cancer samples and normal breast tissues were obtained with informed consent from the University of Massachusetts Medical School Biorepository and Tissue Bank using procedures conducted under an Institutional Review Board (IRB)-approved protocol. After surgical removal, fresh tumor tissues or normal tissues were immediately snap-frozen in liquid nitrogen and stored at −80 °C. Later, tissues were homogenized in RIPA buffer with complete protease inhibitor cocktail, then centrifuged at 13,000× g at 4 °C for 10 min. Samples were normalized for protein content and western blots were performed. Protein bands were quantified using the ImageJ program. Scanned films were inverted, the intensity of each band was measured, and then the background value was subtracted. The intensity value of the protein bands for each sample was normalized to the intensity value for Actin or GAPDH proteins.
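The quantification step reduces to a ratio of background-subtracted band intensities against a loading control; the short sketch below illustrates this arithmetic with invented intensity values (they are not measurements from this study).

```python
import numpy as np

# Hypothetical ImageJ band intensities (inverted, background-subtracted)
gpx4_bands = np.array([1850.0, 5120.0])    # e.g., normal vs. tumor lane
actin_bands = np.array([9200.0, 9050.0])   # loading control, same lanes

normalized = gpx4_bands / actin_bands      # GPX4 signal per unit loading
fold_change = normalized[1] / normalized[0]
print(f"tumor/normal GPX4 ratio ~ {fold_change:.2f}")
```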
Quantification and Statistical Analysis
Results of the viability assay, comparisons of protein expression between normal and cancer tissues, Ellman's test, and the total selenium measurement were analyzed using Student's t test. Disease-free survival and overall survival in patients with breast cancer were compared between xCT-high and xCT-low groups using the Kaplan-Meier method, and significant differences between curves were assessed using the log-rank test. Values of p < 0.05 were considered statistically significant, and data marked with one (*), two (**), or three (***) asterisks indicate p values of <0.05, <0.01, and <0.001, respectively.
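For concreteness, the two tests named above can be run as follows; the group values and survival times are fabricated placeholders, and the lifelines package is one common choice for the log-rank comparison (an assumption, not the tool the authors state).

```python
import numpy as np
from scipy.stats import ttest_ind
from lifelines.statistics import logrank_test

# Hypothetical normalized GPX4 band intensities
normal = np.array([0.20, 0.31, 0.18, 0.27])
tumor = np.array([0.55, 0.61, 0.48, 0.72])
t_stat, p_val = ttest_ind(tumor, normal)
print(f"t-test p = {p_val:.4f}")

# Hypothetical survival data for xCT-high vs. xCT-low patients
months_high = np.array([12.0, 20.0, 31.0, 40.0, 55.0])
events_high = np.array([1, 1, 1, 0, 1])      # 1 = death observed
months_low = np.array([35.0, 50.0, 62.0, 80.0, 95.0])
events_low = np.array([1, 0, 1, 0, 0])

result = logrank_test(months_high, months_low,
                      event_observed_A=events_high,
                      event_observed_B=events_low)
print(f"log-rank p = {result.p_value:.4f}")
```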
Expression of xCT, an Initial Step of the Selenocysteine Synthesis Pathway, Is Highly Upregulated in Breast Cancer Tissues and Expression of GPX4 Is High in xCT-Positive Tumors

As xCT activity was shown to drive the expression of the selenoprotein GPX4 via the selenocysteine biosynthesis pathway (Figure 1a, [4]), we examined the levels of SLC7A11, SLC3A2, and GPX4 in normal and cancerous breast tissues to determine whether this pathway is altered in cancer tissues and whether GPX4 expression is associated with the expression of xCT. First, we analyzed normalized mRNA transcript levels of SLC7A11 and SLC3A2 from The Cancer Genome Atlas (TCGA) data compared with normal tissues from GTEx, using the web tool GEPIA [10], which demonstrated a non-significant trend towards increased mRNA levels (Figure 1b); similar non-significant trends for increases in both subunits were observed in data mining across multiple cancer types (Supplementary Figure S1a). When directly examining protein levels from patient-derived breast tumor samples and normal breast tissues, we found that the expression levels of the xCT subunits, SLC7A11 and SLC3A2, were significantly upregulated in breast tumor tissues compared with normal tissues (Figure 1c,d). xCT requires both of these subunits [4], thus we tried to distinguish between tissues that express only one or both of the subunits. Interestingly, 6 of 14 cancer tissues expressed significant levels of both SLC7A11 and SLC3A2, which we designated xCT-positive tissue (Figure 1e), while none of the normal breast tissues was xCT positive (Figure 1c). Interestingly, when comparing expression across all tumor samples versus all normal tissue samples, in contrast to SLC3A2 and SLC7A11, GPX4 was not significantly overexpressed in cancer tissues compared to normal tissues across the set (Figure 1d). In contrast, when directly comparing the protein expression levels of GPX4 in xCT-positive tumors with their paired (i.e., from the same patient) normal breast tissue, or comparing all xCT-positive tumor tissues with all xCT-negative tumor tissues, we saw statistically significant increases of GPX4 expression in the xCT-positive group (Figure 1f). Furthermore, analyses of TCGA data indicated that high expression of both subunits of xCT was significantly associated with poor overall survival (Figure 1g) and disease-free survival (Figure 1h) in patients with breast cancer. In contrast, expression of either SLC7A11 or SLC3A2 alone did not have significant prognostic value (Supplementary Figure S2a,b). These findings support the notion that xCT drives the expression of GPX4 [4], that the elevated expression of both SLC7A11 and SLC3A2 is a determinant of increased xCT function as indicated by GPX4 expression, and that it is associated with poor prognosis in breast cancer.
Erastin Targets the Selenium Uptake and Selenoprotein Expression Promoting Activity of xCT
Erastin is an established xCT inhibitor and inducer of ferroptosis, and is currently being explored as a potential breast cancer therapeutic agent. The ferroptosis-inducing activity of Erastin and other xCT inhibitor compounds has been primarily attributed to the inhibition of cystine import leading to glutathione depletion, as cysteine, which can be formed from the reduction of cystine, is a rate-limiting precursor for glutathione biosynthesis [11]. The role of xCT in promoting selenium uptake and selenoprotein expression suggested to us that Erastin may also function to disrupt the selenium-dependent expression of GPX4, which, given the role of GPX4 in ferroptosis, may greatly affect a cancer cell's sensitivity or resistance to ferroptosis. xCT is thought to promote selenium entry into a cell because some of the imported cystine is intracellularly reduced to cysteine and exported, providing extracellular reduced thiol groups which allow selenite (SeO₃) to be reduced to selenide to initiate the selenocysteine biosynthesis pathway (Figure 2a) [4,12]. We found that Erastin treatment in breast cancer cells diminished the levels of extracellular thiols (Figure 2b), eliminated their selenite uptake (Figure 2c), and reduced expression of the selenoprotein antioxidants GPX1 and GPX4 (Figure 2d,e), indicating that Erastin disrupts xCT-mediated extracellular reduction, which leads to selenium uptake and selenoprotein production in these cells. These findings support the model for xCT function in GPX4 expression which we previously proposed [4]: that xCT, by allowing cystine import which is in turn reduced and exported to provide extracellular thiols, results in selenite reduction and uptake, leading to production of selenoproteins such as GPX4. Erastin, by inhibiting the cystine import function of xCT, hinders this process and the production of GPX4. These doses of Erastin (3 or 6 µM) did not cause significant toxicity at the time point for these experiments (Supplementary Figure S3a,b), indicating that cellular toxicity is not the reason for reduced extracellular thiols, selenium uptake, and GPX expression. Furthermore, we observed that Erastin did not decrease, and actually slightly increased, expression of the SLC3A2 and SLC7A11 subunits in these cells (Figure 2d,e), supporting that Erastin worked through direct inhibition of xCT activity rather than impairing xCT subunit expression. Collectively, these findings further support the role of xCT in driving the expression of the antiferroptotic agent GPX4, and suggest that inhibition of the selenocysteine biosynthesis axis is an important consequence of xCT inhibition by Erastin.
Breast Cancer Cells Have Increased Resistance against Cell Death Induced by Reactive Oxygen Species Which Correlates with xCT Expression
The role of xCT in driving both glutathione production and the expression of glutathione-utilizing enzymes such as GPX4 suggests that its expression may promote survival under oxidative stress conditions for breast cancer cells that have elevation in xCT expression. We first identified two TNBC cell lines, MDAMB231 and CAL-120, as having increased expression of both SLC7A11 and SLC3A2, while two commonly used non-transformed mammary epithelial lines, MCF10A and MCF12A, were found to express only SLC3A2 but not SLC7A11, and are thus xCT-negative (Figure 3a,b). Next, we found that the breast cancer lines were highly resistant to death induced by hydrogen peroxide (Figure 3c). This suggests that xCT-positive breast cancer cells have a selective advantage against oxidative stress.
TNBC Cells Are Paradoxically Hypersensitive to Targeting of Anti-Ferroptotic Machinery by Erastin and Rsl-3
Our results thus far suggest that xCT-positive breast cancer cells, i.e., those expressing high levels of both SLC7A11 and SLC3A2, are able to drive GPX4 expression through the selenocysteine biosynthesis pathway, and are also able to resist lipid prooxidant-induced death. This is in line with the known function of both xCT and GPX4 in protecting against ferroptosis in various cell stress states.
We next examined the sensitivity of these cancer and noncancer cells to prolonged exposure to agents targeting the xCT and GPX4 anti-ferroptotic machinery, Erastin and Rsl-3 (Figure 4a). Surprisingly, in contrast to their increased resistance against reactive oxygen species, we found that MDAMB231 and CAL120 cells were hypersensitive to both Erastin and Rsl-3 relative to the nontransformed lines. We observed that treatment with Erastin, even in the absence of a prooxidant insult such as hydrogen peroxide, induced a dramatic loss of cell viability (Figure 4b,c) and significant accumulation of lipid peroxidation species (Figure 4d). The loss of viability induced by Erastin or Rsl-3 treatment appeared to be caused by lipid peroxidation and ferroptosis, as it was rescued by the lipid antioxidant/ferroptosis inhibitor ferrostatin-1 or α-tocopherol (Figure 4e,f). Thus, while the TNBC cells had an augmented defense system mediated by xCT and GPX4 against oxidative stress (Figure 3c), they were paradoxically more dependent on these pathways, compared to normal cells, to prevent ferroptosis. This suggests an "addiction scenario" that can be further explored in future studies for optimal therapeutic targeting in breast cancer.
Discussion
The expression of both SLC7A11 and SLC3A2 subunits can be a marker for xCT function and the selenoprotein production capacity of breast cancer cells. xCT consists of two subunits, SLC7A11 and SLC3A2. High expression of either SLC7A11 or SLC3A2 has been previously reported in several tumor types. SLC7A11 is highly expressed in many types of tumors, such as acute myeloid leukemia, breast cancer, colorectal cancer, hepatocellular carcinoma, and glioma, and its high expression is associated with poor prognosis in patients with cancer [13,14]. SLC3A2 is highly expressed in breast cancer [15] and osteosarcoma [16]. Notably, the expression of SLC7A11 alone has been utilized to evaluate the expression of xCT [17][18][19][20][21][22]. To our knowledge, there has not been an approach that determines the expression of both SLC7A11 and SLC3A2 to evaluate the prognostic value of xCT. However, we previously showed that depletion of either subunit renders xCT nonfunctional [4], which means that confirming the expression of one subunit may not demonstrate the existence of functional xCT in cells. In addition, SLC3A2 also comprises the heavy subunit of the large neutral amino acid transporter (LAT1) together with a light subunit protein encoded by the SLC7A5 gene [23], which indicates that SLC3A2 may be expressed as part of the LAT1 complex in some cases. Thus, evaluating one subunit alone cannot serve as an accurate marker for functional xCT. In this study, the expression of both subunits was assessed side-by-side to evaluate the level of potentially functional xCT in normal breast and cancer tissues. Some tumors expressed only one subunit, and others expressed both subunits, which we defined as xCT-high tumors. The significantly elevated expression of GPX4 in xCT-positive tumors versus their paired normal tissues, or versus xCT-negative tumors, supports the notion that expression of high levels of both subunits could be an indicator of the existence of functional xCT, which leads to high selenium uptake and upregulation of selenoproteins. However, this relationship between expression levels of xCT subunits and selenoproteins should be validated with larger sets of paired samples to further clarify the effect of functional xCT on selenium uptake and its relationship with the expression of various selenoproteins in tumor pathophysiology.
Erastin and xCT inhibitors as a strategy to inhibit selenium uptake and GPX4 production in cancer cells. Erastin is currently being explored as a cancer therapy agent, in large part due to its role in inducing ferroptosis. Sulfasalazine and sorafenib are other compounds that have been similarly explored as xCT inhibitors, although they may have broader effects and target profiles compared to Erastin [24,25]. Modified compounds based on Erastin with improved stability and/or bioavailability have also been developed [26]. Therefore, fully understanding the mechanism of Erastin and related compounds is important for further developing them as a therapeutic strategy. Here we show that Erastin drastically impairs extracellular thiol reduction, selenium uptake, and expression of the selenoprotein GPX4. This finding is significant, as the current model of Erastin's mode of action considers only the impairment of glutathione biosynthesis. According to our findings, impaired production of GPX4, a key enzyme implicated in ferroptosis prevention which uses glutathione as a substrate, is an additional important mechanism of action for Erastin/xCT inhibitors, one that by itself could have ferroptosis-triggering properties even if glutathione production were somehow restored. A potential workaround for cancer cells facing impaired GPX4 production is that, aside from the common dietary selenium compound selenite, there are other independent routes by which selenium can be obtained, in particular recycling of selenocysteine obtained from selenium carriers such as SEPP1, or from selenomethionine, both of which involve the action of selenocysteine lyase (SCLY) in forming selenide from selenocysteine prior to its conversion to selenocysteinyl-tRNA. Thus, impairing this route of selenocysteinyl-tRNA formation in conjunction with xCT inhibition could have even more potent effects, and the potential functions of SCLY in cancer and ferroptosis should be explored in future studies.
'Addiction' of breast cancer cells to the xCT/GPX4 anti-ferroptotic machinery. In contrast to the known effect of xCT inhibitors such as Erastin in sensitizing cells to or triggering ferroptosis, increased expression or activity of xCT is implicated in the increased resistance of cancer cells against insults that can trigger ferroptosis, such as hypoxia or chemotherapy [27][28][29]. Along similar lines, the role of selenium in producing antioxidant selenoproteins forms the very basis of selenium being considered an antioxidant nutritional supplement [30,31]. Recently, selenocysteine delivered via a synthetic peptide was shown to protect against ferroptosis in a stroke model [32]. We also recently showed that CRISPR disruption of the selenocysteine biosynthesis machinery (SEPSECS, PSTK) hypersensitizes cancer cells against the lipid prooxidant tert-butyl hydroperoxide (TBH) [4]. Therefore, it was not surprising that the breast cancer lines MDAMB231 and CAL120, which have elevated expression of both xCT subunits, have increased resistance against hydrogen peroxide compared to the nontransformed immortalized lines MCF10A and MCF12A, which are xCT-negative (Figure 3). What was highly unexpected was that, at the same time, MDAMB231 and CAL120 are hypersensitive to the drugs which directly target the machinery that allowed them to resist hydrogen peroxide, namely xCT (Erastin) and GPX4 (Rsl-3). This suggests a scenario in which anti-ferroptotic machinery is upregulated in these cancer cells but at the same time they are 'addicted' to, or highly dependent on, this machinery even in the absence of an insult such as hydrogen peroxide.
How and why are cancer lines addicted? One possibility involves selenide, which is formed by xCT activity in the process of selenocysteine biosynthesis (which ultimately results in GPX4 production). We have shown that the accumulation of hydrogen selenide gas formed during selenocysteine biosynthesis is toxic to cancer cells by increasing ROS levels [4]. As xCT-positive cancer cell lines produce more selenide, they may be more dependent on the end product GPX4 to neutralize selenide-induced oxidative stress. As a second possibility, cancer cells might be more vulnerable to GPX4 inhibition because they have higher levels of polyunsaturated fatty acids (PUFAs). Basal-like breast cancer cell lines are susceptible to ferroptosis due to the expression of acyl-CoA synthetase long-chain family member 4 (ACSL4), which enriches cellular membranes with long-chain PUFAs [33]. Since ACSL4 is expressed at higher levels in breast cancer compared to adjacent normal tissue [34], and PUFAs are targets of lipid peroxidation, cancer cells might be more susceptible to lipid peroxidation and ferroptosis; thus GPX4 is more essential in cancer cells. As a third possibility, as xCT-positive cancer cells are expected to have increased glutathione production and GPX4 expression, the resulting increased antioxidant capacity may be accompanied by a reduced selective requirement for other anti-ferroptotic or prosurvival signaling activities, meaning that when this xCT/GPX4-mediated protection is removed, these cancer cells are vulnerable compared to 'nonaddicted' cells.
Therapeutic Implications and Concluding Thoughts
We have shown that cancer cells that overexpress both SLC7A11 and SLC3A2 are the ones that should be considered xCT-positive, and that this pathway, through the selenocysteine biosynthesis pathway-mediated production of GPX4, impacts a cancer cell's sensitivity to various ferroptotic inducers. While future studies should delineate the exact underlying mechanism, our finding that xCT-positive cancer cells are simultaneously resistant to lipid peroxidation insults yet hypersensitive to xCT or GPX4 inhibitors raises important therapeutic implications. There are two classes of ferroptosis inducers: those which directly trigger peroxidation, such as hydrogen peroxide, and those that target a cell's defense against lipid peroxidation. Our results imply that in xCT-positive cancers, the latter is the desirable approach. xCT is widely reported to be upregulated in different subtypes of cancers, has been linked with oncogenic mutations such as Keap1 loss, and is associated with clinical parameters such as chemoresistance. Our findings provide a starting point and rationale for targeting the anti-ferroptotic machinery of cancer cells depending on their xCT status, which can be further developed in future studies.
|
2021-03-04T05:45:07.477Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "b7949ef918747832a076aedac11df00ae6721f7d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3921/10/2/317/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7949ef918747832a076aedac11df00ae6721f7d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17056755
|
pes2o/s2orc
|
v3-fos-license
|
Infanticide by a mother with untreated schizophrenia
Summary This case report describes a 30-year-old mother of four with a 6-year history of obvious paranoia and psychosis from a poor rural farming community in India. Her symptoms and social functioning deteriorated over time, but the family did not seek medical care until she killed her 3-month-old daughter while under the influence of command hallucinations. Subsequent treatment with antipsychotic medication resulted in control of her psychotic symptoms and greatly improved psychosocial functioning. This case is an example of one of the many negative consequences of a community’s failure to recognize and treat mental illnesses. The patient had severe symptoms that were obvious to all for 6 years prior to the infanticide, but the family’s lack of basic knowledge about mental illness, the lack of locally available mental health care, and the relatively high cost of care prevented family members from obtaining the treatment that almost certainly would have prevented the tragic death of her infant. Changing these three factors in poor rural communities of low- and middle-income countries is the challenge we must work together to address. Infanticide secondary to untreated mental illness is a glaring reminder of how urgent this task is.
Introduction
The prevalence of schizophrenic disorders is usually estimated as less than 1% of the population, but persons with schizophrenia account for between 5 and 20% of all homicides by persons with mental disorders. [13] The incidence of homicide by severely mentally ill individuals is approximately 0.13 per 100,000 per year in most countries, [2] but it is higher in countries with higher total homicide rates. [4] Few studies have attempted to estimate the rate of homicide by individuals with schizophrenia, but the figure of 1 in 3000 males with schizophrenia per year estimated by Wallace and colleagues [3,5] in 1998 is widely quoted. However, these figures do not distinguish between individuals who have never been treated, those who are not currently being treated, and those who are currently being treated; there may be significant differences in the homicide rates between these three groups of individuals with schizophrenia.
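As a rough consistency check on the figures quoted above, one can translate both rates into expected counts for the same hypothetical population; the population size and prevalence below are illustrative assumptions only.

```python
# Illustrative cross-check of the quoted rates for a hypothetical
# population of 10 million people (all inputs are approximate).
population = 10_000_000
overall_rate = 0.13 / 100_000        # homicides/yr by severely mentally ill
print(f"expected from the 0.13/100,000 rate: {population * overall_rate:.0f}/yr")

# Wallace et al.: ~1 homicide per 3000 males with schizophrenia per year
prevalence = 0.007                   # schizophrenia prevalence (< 1%)
males_with_scz = population * prevalence * 0.5
print(f"expected from the 1-in-3000 figure: {males_with_scz / 3000:.0f}/yr")
```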
If the risk of homicide is greatest during the first episode of illness in schizophrenia, earlier recognition and treatment of persons with schizophrenia could reduce the risk of homicide. This possibility is supported by the findings of two recent ecological studies. The first found a lower rate of homicide during the first episode of psychosis in countries where the duration of untreated psychosis was shorter. [6] The second reported a dramatic decline in rates of homicide by people with mental illnesses in England and Wales that started at the time community-based primary psychiatric care became available, despite a rise in other forms of homicide over the same period. [4] This case report describes a tragic infanticide in India by a mother who had an untreated severe mental illness.
Case history
Mrs. X, a 30-year-old housewife from a poor rural household, was brought to the adult psychiatric outpatient department at King George's Medical University (KGMU) in Lucknow, Uttar Pradesh by her husband with a 6-year history of suspiciousness, muttering to self without obvious reason, and decreased sleep. One of the main reasons the family brought her to the city for professional psychiatric evaluation was that she had killed her 3-month-old daughter 8 months earlier.
About 6 years previously (when she was 24 years old), she started to believe that her husband was involved in a conspiracy, the goal of which was to kill her. She gradually stopped eating with other family members because she was afraid that her husband would try to poison her. Three years previously she and her husband stopped living with the husband's parents and moved to a nearby house because she was suspicious about the intentions of her in-laws. She also believed that her children were involved in this conspiracy and would kill her when they grew up. Driven by this fear of being killed, she had attempted to kill her husband at night on four separate occasions.
During the course of her illness other changes in her behavior included muttering to herself without any obvious reason, smiling to herself, and gesturing in the air. Sometimes she reported hearing the voices of her dead parents and said that she would talk to them. She progressively decreased her interactions with family members, showed little initiative for work, did not engage in any pleasurable activities, and became apathetic and withdrawn. Gradually she stopped doing any household activities, including caring for her children and husband. Her self-care deteriorated; she would often go without bathing for weeks. She would not leave her room for days at a time and refuse any food offered to her or, alternatively, wander aimlessly in the neighborhood. Her sleep decreased to 1-2 hours per day; throughout the nighttime she would continually mutter to herself and pace about the room.
The intensity of the symptoms varied over time, including moderate exacerbations during the prenatal period of her pregnancies (she had four children aged 6 years, 3 years, 1 year, and 3 months at the time of the death of her daughter). When the symptoms were severe, her husband consulted the local faith healers who suggested that the symptoms were a 'supernatural spirit' trying to control her and recommended the use of various local herbs (which were not effective). They also recommended locking her up in her room when her symptoms were severe, so her husband frequently locked her up in the home. There was no change in the symptoms after the birth of her fourth child, so she was allowed to sleep with the baby in the belief that this would help reduce her suspicions about family members. There was no warning about what was to follow.
At 3 a.m. one morning she woke and, after checking that her husband and other children were asleep, took her 3-month-old daughter out of the house, smothered her, and concealed the body in a mesh net in a nearby pond. When her husband woke to find that his wife and daughter were not in the house, he went searching for them in the neighborhood. When he found her and asked her about the whereabouts of her daughter, she told him that she had killed the child and showed him the location of the body. According to her husband, she appeared unconcerned about the episode at the time and subsequently showed no remorse. When asked about the reason for her behavior she stated that "It needed to be done" and that her mother (who had died 5 years previously) asked her to do it.
Following this tragic incident, the villagers and in-laws came to the support of the family. No official complaint was lodged with the police, but her husband was advised by local villagers to consult a general medical practitioner. When he did this, the patient was diagnosed as 'psychosis' and treated with benzodiazepines for sleep. The medical practitioner also recommended taking her to a specialty psychiatric center in the city for formal diagnosis and treatment.
When she was brought to our outpatient department, the mental status examination revealed decreased psychomotor activity and poor personal hygiene. She was conscious of self and her surroundings and was oriented to person, time, and place. Her attention was alert, but her concentration was impaired. Her affect was fearful throughout the interview. Her thinking showed well systematized delusions of persecution, of reference, of infidelity by her husband, and of being controlled by others. She also reported having heard command auditory hallucinations instructing her to kill her infant daughter who she believed would otherwise kill her when she grew up. The voices, which she recognized as those of her dead parents, also commanded her to kill her husband and told her that her husband was responsible for their deaths five years previously. When asked about why she had not killed her other children, she replied that she had planned to kill them, but she did not do so because they were old enough to resist her.
Her immediate, recent, and remote memory was intact. She had limited general knowledge and poor math abilities. Her abstract thinking and judgment were impaired. She had no insight into her illness. All of her routine blood tests and X-ray examinations were within normal limits. Her IQ test revealed low-average intelligence (IQ=75-80, mental age approximately 12.5 years). The psychogram generated from a Rorschach test suggested schizophreniform psychosis. Her premorbid personality was reported by the husband to be normal, she did not use alcohol or other drugs, there was no history of epilepsy or serious head trauma, and there was no family history of mental illness.
On the basis of her history and mental status examination she was diagnosed as having paranoid schizophrenia and hospitalization was recommended. However, due to lack of financial and social support, her husband refused to hospitalize her. She was started on olanzapine 10 mg twice daily and lorazepam 2 mg twice daily on an outpatient basis and was scheduled for regular follow-up visits every 2 weeks for the next 2 months. When re-evaluated 6 months later, she was being maintained on olanzapine 10 mg twice daily, lorazepam 2 mg at bedtime, and 40 mg long-acting flupenthixol every 4 weeks. Her behavior toward her family had improved. She had started taking care of her children and reported that she regretted having killed her daughter. The intensity of her delusions and hallucinations had lessened to the point where she only heard the voices occasionally. Her husband reported no bizarre or threatening behavior.
Discussion
This case highlights the need to identify and help mothers who are at risk of harming their children. Mental health providers are one of the many stakeholders who need to participate in this effort; other key stakeholders are family members, teachers, different types of community workers, and general medical practitioners. Whenever a mental illness is present or suspected in a mother who is responsible for caring for her children, family members, service workers, and clinicians must sensitively inquire about and continually monitor the effect of the mother's illness on the children in terms of potential neglect, abuse, battering, or outright attack. This is usually approached by asking the mother (and other family members if present at the interview) about childrearing practices, parenting problems, and feelings of being overwhelmed. When a risk to the health or well-being of the children is identified, active interventions dictated by custom and (if relevant) legal measures need to be instituted to protect the children. In most rural communities of low- and middle-income countries without family protective services, this will involve mobilizing members of the extended family to help in the care of the at-risk children. At the same time, treatment of the mentally ill mother must focus on improving her functioning to a level where she can safely resume responsibility (or partial responsibility) for caring for her children.
A recent study of Indian mothers with severe mental illness in the postpartum period found that mothers with delusions about their infant engaged in more abuse. [7] One report from Western countries found that up to 4% of mothers with untreated postpartum psychosis will carry out infanticide. [8] Early screening and identification of mental illness, in both the antenatal and postnatal periods, is important; the Edinburgh Postnatal Depression Scale [9,10] is a validated tool that is often used to do this. Severe depression, suicidality, psychosis, and a prior history of child abuse in the mother are all associated with increased risk of infanticide. Psychotic mothers experiencing persecutory delusions with active hallucinations, aggressive behaviors, gross disorganization, or fear that their children may suffer a fate worse than death should either be hospitalized or separated from their children. These mothers may be reluctant to disclose their delusional ideas, but their delusions may sometimes be elicited through a sympathetic exploration of their concerns for the safety of their children. In this case the presence of gross psychotic symptoms was responsible for the tragic incident. Had there been timely evaluation and treatment, such an incident could have been avoided.
It is, however, important to remember that more infanticides occur due to fatal maltreatment by mothers without a mental illness than because of maternal psychiatric illness. The reasons for such infanticides include failure of the child to respond to maternal demands, such as to stop crying, [11] an unwanted child (e.g., female infants in strongly paternalistic cultures), revenge on the husband (who may be having an affair), and so forth. Mothers who batter their children to death are likely to have abused their children more than once before the actual death, [11,12] so there is an opportunity for prevention if family members or other actors (teachers, doctors) take appropriate action when the initial episodes are identified. Mental health professionals who become involved in such cases need to try to understand the complex psychosocial issues affecting the various actors in the case and use this information to ensure the best possible outcomes for the children.
Conclusion
Prevention of infanticide by mothers with mental illnesses requires a) increasing basic knowledge about mental illness in the community, b) making mental health services locally available and affordable (preferably free of charge) for all, and c) decreasing the stigma of mental illness so individuals and their families are willing to seek mental health care. Achieving these goals, particularly in poor rural communities of low- and middle-income countries, is a major challenge that has not yet been prioritized by many local and national governments. Psychiatrists and other stakeholders interested in mental health need to become active and persistent advocates who continuously encourage their communities to allocate the intellectual manpower and financial resources needed to address this problem. Psychiatrists also have the additional role of identifying at-risk mothers [13] and, if an infanticide does occur, of providing services to the mothers, their families, and their communities to help resolve the long-term grief, guilt, and anger that often ensue.
Funding
No funding was received to prepare this case report.
Conflict of interest statement
The authors report no conflict of interest related to this case report.
Informed consent
The patient's husband provided written informed consent for the publication of this case report.
|
2016-05-12T22:15:10.714Z
|
2015-10-01T00:00:00.000
|
{
"year": 2015,
"sha1": "fa32785105eab0b3683ea0420916b9454cf3e34b",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fa32785105eab0b3683ea0420916b9454cf3e34b",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
39233968
|
pes2o/s2orc
|
v3-fos-license
|
Decay of solutions to the three-dimensional generalized Navier-Stokes equations
In this paper, we first obtain the temporal decay estimates for weak solutions to the three-dimensional generalized Navier-Stokes equations. Then, with these estimates at our disposal, we obtain the temporal decay estimates for higher order derivatives of the smooth solution with small initial data. The decay rates are optimal in the sense that they coincide with those of the corresponding generalized heat equation. These results improve previously known results for the classical Navier-Stokes equations.
Introduction
The incompressible Navier-Stokes equations can be written as
$$\begin{cases} u_t + (u \cdot \nabla)u + \nabla p = \nu \Delta u, \\ \nabla \cdot u = 0, \\ u(x, 0) = u_0(x), \end{cases} \qquad (1.1)$$
where $x \in \mathbb{R}^n$, $n \ge 2$, $t > 0$, the vector field $u = u(x, t)$ denotes the velocity of the fluid, $p = p(x, t)$ is the pressure of the fluid and the positive $\nu$ is the viscosity coefficient.
Whether or not weak solutions of (1.1) decay to zero in $L^2$ as time tends to infinity was posed by Leray in his pioneering papers [10,11]. Kato [7] gave the first affirmative answer for strong solutions of system (1.1) with small data. Algebraic decay rates for weak solutions to system (1.1) were first obtained by Schonbek [16], in which the Fourier splitting method was introduced to prove that there exists a Leray-Hopf weak solution of (1.1) in three space dimensions with arbitrary data in $L^1 \cap L^2$, satisfying
$$\|u(t)\|_2 \le C(t + 1)^{-\frac{1}{4}},$$
where the constant C depends only on the $L^1$ and $L^2$ norms of the initial data. Later the method in [16] was extended by Schonbek [17] (see also Kajikiya and Miyakawa [6], Wiegner [22] for the case $\mathbb{R}^n$ (n = 2, 3, 4)) and it was proved that the decay rate for Leray-Hopf solutions of (1.1) in three space dimensions with large data in $L^p \cap L^2$ with $1 \le p < 2$ is the same as that for the solution of the heat equation. That is,
$$\|u(t)\|_2 \le C(t + 1)^{-\frac{3}{2}\left(\frac{1}{p} - \frac{1}{2}\right)},$$
where the constant C only depends on the $L^p$ and $L^2$ norms of the initial data. On the decay of solutions to the Navier-Stokes equations, we also refer to [2,3,5,9,13,21] and the references therein.
In this paper, we are concerned with the asymptotic behavior of solutions of the generalized Navier-Stokes equations
$$\begin{cases} u_t + (u \cdot \nabla)u + \nabla p + \nu \Lambda^{2\alpha} u = 0, \\ \nabla \cdot u = 0, \\ u(x, 0) = u_0(x), \end{cases} \qquad (1.2)$$
where $\Lambda = (-\Delta)^{1/2}$, in the supercritical case $\alpha < \frac{5}{4}$. Motivated by [16]-[18], we will show that the weak solutions to (1.2) subject to large initial data decay in $L^2$ at a uniform algebraic rate. The decay estimates for the higher order derivatives of the smooth solution with small initial data will also be established in $L^2$. To prove our main results, the Fourier splitting method due to Schonbek [16], with appropriate modification, will be applied. It should be noted that the decay rates obtained in this paper are optimal in the sense that they coincide with those of the corresponding generalized heat equation $v_t + \nu \Lambda^{2\alpha} v = 0$ with the same initial data $u_0$ (see Lemma 3.1 in [15]). Therefore, our results improve the ones obtained in [17], in which the classical Navier-Stokes equations ($\alpha = 1$ in (1.2)) are investigated. For completeness, the proof of existence of weak solutions will be sketched in the Appendix at the end of the paper.
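For comparison, the benchmark rate for the generalized heat equation can be sketched in a few lines; the following computation is a standard exercise reconstructed here for the reader's convenience (the constants and the use of Hausdorff-Young are our own presentation, not a quotation of [15]).

```latex
% Benchmark decay for the generalized heat equation v_t + \nu \Lambda^{2\alpha} v = 0.
% Minimal sketch, assuming v_0 \in L^p(\mathbb{R}^3), 1 \le p < 2, with q the
% conjugate exponent of p; constants are generic.
\[
  \|v(t)\|_{L^2}^2
  = \int_{\mathbb{R}^3} e^{-2\nu |\xi|^{2\alpha} t}\, |\widehat{v_0}(\xi)|^2 \, d\xi
  \le \|\widehat{v_0}\|_{L^q}^2
      \left( \int_{\mathbb{R}^3} e^{-2\nu r |\xi|^{2\alpha} t} \, d\xi \right)^{1/r},
  \qquad \frac{2}{q} + \frac{1}{r} = 1,
\]
% by Hoelder; the substitution \xi \mapsto t^{-1/(2\alpha)} \eta shows the last
% integral scales like t^{-3/(2\alpha)}, and Hausdorff--Young gives
% \|\widehat{v_0}\|_{L^q} \le C \|v_0\|_{L^p}. Combining,
\[
  \|v(t)\|_{L^2} \le C \|v_0\|_{L^p}\, t^{-\frac{3}{2\alpha}\left(\frac{1}{p} - \frac{1}{2}\right)} .
\]
```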
Throughout the rest of the paper the $L^p$-norm of a function $f$ is denoted by $\|f\|_p$ and the $H^s$-norm by $\|f\|_{H^s}$. We will also set $\nu = 1$ for simplicity.
Our main results are listed as follows.
Theorem 1.1. Let $u_0 \in L^2(\mathbb{R}^3) \cap L^p(\mathbb{R}^3)$ with $\operatorname{div} u_0 = 0$ and $1 \le p < 2$. Then the system (1.2) admits a weak solution such that
$$\|u(t)\|_2 \le C(1 + t)^{-\frac{3}{2\alpha}\left(\frac{1}{p} - \frac{1}{2}\right)},$$
where the constant C depends on $\alpha$ and the $L^p$ and $L^2$ norms of the initial data.

Theorem 1.2. Let $u_0 \in L^2(\mathbb{R}^3) \cap L^p(\mathbb{R}^3)$ with $\operatorname{div} u_0 = 0$ and $\frac{1}{3 - 2\alpha} \le p < 2$. Then the system (1.2) admits a weak solution such that
$$\|u(t)\|_2 \le C(1 + t)^{-\frac{3}{2\alpha}\left(\frac{1}{p} - \frac{1}{2}\right)},$$
where the constant C depends on $\alpha$ and the $L^p$ and $L^2$ norms of the initial data.
The following are decay estimates for the higher order derivatives of the smooth solution, whose global-in-time existence for sufficiently small initial data is guaranteed in [24]. Remark 1.1. The following cases can be dealt with in a similar fashion: to prove the result in one case, we just modify the estimate (3.14), and in the other, the estimate (3.10). Theorem 1.4. Let $1 \le \alpha < \frac{5}{4}$ and $u_0 \in L^2(\mathbb{R}^3) \cap L^p(\mathbb{R}^3)$ with $\operatorname{div} u_0 = 0$ and $\frac{1}{3 - 2\alpha} \le p < 2$. Then, for $m \in \mathbb{N}$ (the set of positive integers), there exist $T_0 > 0$ and $C > 0$ such that the small global-in-time solution satisfies
$$\|\nabla^m u(t)\|_2 \le C(1 + t)^{-\frac{m}{2\alpha} - \frac{3}{2\alpha}\left(\frac{1}{p} - \frac{1}{2}\right)}$$
for all $t > T_0$, where the constant C depends on $m$, $\alpha$ and $\|u_0\|_{L^2 \cap L^p}$.
Remark 1.2. The decay rates for higher order of derivatives of the solutions was studied in [4] for the classical Navier-Stokes equations and in [18] for the Hall-magnetohydrodynamic equations.
The paper unfolds as follows: Section 2 is devoted to the proof of Theorem 1.1 and Theorem 1.2 whereas Section 3 deals with the proof of Theorem 1.3 and Theorem 1.4. The existence of weak solutions is given in the Appendix in the end of the paper.
Proof of Theorem 1.1 and Theorem 1.2
In this section, Theorem 1.1 and Theorem 1.2 will be proved. We start with two key lemmas.
Proof. Taking the Fourier transform of the first equation of (1.2) yields
$$\hat{u}_t + |\xi|^{2\alpha} \hat{u} = H(\xi, t). \qquad (2.2)$$
Multiplying (2.2) by $e^{|\xi|^{2\alpha} t}$ gives
$$\partial_t \left( e^{|\xi|^{2\alpha} t} \hat{u}(\xi, t) \right) = e^{|\xi|^{2\alpha} t} H(\xi, t).$$
Integrating with respect to time from 0 to t, we have
$$\hat{u}(\xi, t) = e^{-|\xi|^{2\alpha} t} \hat{u}_0(\xi) + \int_0^t e^{-|\xi|^{2\alpha}(t - s)} H(\xi, s) \, ds. \qquad (2.4)$$
To complete the proof we need to establish an estimate for $H(\xi, s)$. Taking the divergence operator on the first equation of (1.2) yields a representation of the pressure in terms of $u \otimes u$. Since the Fourier transform is a bounded map from $L^1$ into $L^\infty$, it follows that the Fourier transform of the pressure term is bounded by $C|\xi| \, \|u\|_2^2$. Similarly, for the convection term, using the divergence-free condition, we obtain the same bound. Combining the above two estimates, we obtain
$$|H(\xi, s)| \le C|\xi| \, \|u(s)\|_2^2. \qquad (2.5)$$
Inserting (2.5) into (2.4) and using the boundedness of the $L^2$ norm of the solution leads to
$$|\hat{u}(\xi, t)| \le e^{-|\xi|^{2\alpha} t} |\hat{u}_0(\xi)| + C|\xi| t.$$
The proof of the lemma is finished.
where the constant C depends on $\gamma$ and the $L^p$ norm of $u_0$.
Proof. Denote by $\mathcal{F}$ the Fourier transform. By the Riesz theorem, if $1 \le p \le 2$, the Fourier transform $\mathcal{F}: L^p \to L^q$ is bounded, and $\|\mathcal{F} f\|_{L^q} \le C \|f\|_{L^p}$ with $\frac{1}{p} + \frac{1}{q} = 1$. Thanks to (2.8) and noting that the volume $|S(t)| = Cg^3(t)$, we get the desired estimate. The proof of the lemma is finished.
In the rest of this section, we first present a formal argument by the Fourier splitting method (see [16]).
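Before the details, here is the skeleton of the Fourier splitting device in the form it takes for (1.2); this is a generic reconstruction of the standard argument, and the specific choice of g(t) below is an assumption made for illustration.

```latex
% Skeleton of the Fourier splitting argument for (1.2) with \nu = 1.
% Let S(t) = \{\xi : |\xi| \le g(t)\}. The energy inequality and Plancherel give
\[
  \frac{d}{dt}\|u\|_2^2 + 2\|\Lambda^\alpha u\|_2^2 \le 0,
  \qquad
  \|\Lambda^\alpha u\|_2^2 \ge g(t)^{2\alpha}
    \Big( \|u\|_2^2 - \int_{S(t)} |\hat u(\xi, t)|^2 \, d\xi \Big),
\]
% hence
\[
  \frac{d}{dt}\|u\|_2^2 + 2 g(t)^{2\alpha} \|u\|_2^2
  \le 2 g(t)^{2\alpha} \int_{S(t)} |\hat u(\xi, t)|^2 \, d\xi .
\]
% Choosing g(t)^{2\alpha} = \gamma / (2(1+t)) makes the left side an exact
% derivative of (1+t)^{\gamma} \|u\|_2^2 after multiplying by (1+t)^{\gamma},
% while the low-frequency integral is controlled by the pointwise bound on
% \hat u from Lemma 2.1.
```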
This, combined with (2.12), yields a closed differential inequality for $\frac{d}{dt} \|u\|_2^2$. Integrating with respect to time and choosing $\gamma$ suitably large, we obtain the decay estimate claimed in Theorem 1.1. Proof of Theorem 1.2. Two cases will be considered respectively.
Remark 2.1. The proof of Theorems 1.1 and 1.2 is formal, and we assume that all the calculus in the proof makes sense. To make it more rigorous, we apply the a priori estimates to the approximate solutions constructed in the Appendix. Let us recall that $u_N$ is a solution of the approximate equation
$$\partial_t u_N + J_N P \, \operatorname{div}(J_N u_N \otimes J_N u_N) + \Lambda^{2\alpha} J_N u_N = 0,$$
where $J_N$ is the spectral cutoff defined by
$$\widehat{J_N u}(\xi) = \mathbf{1}_{\{|\xi| \le N\}} \, \hat{u}(\xi),$$
and $P$ is the Leray projector over divergence-free vector fields.
It is shown in the Appendix that $u_N$ converges strongly in $L^2(0, T; L^2_{loc}(\mathbb{R}^3))$ to a weak solution of the generalized three-dimensional Navier-Stokes equations (1.2). Hence the $L^2$ decay of $u_N$ will imply the $L^2$ decay of the weak solution of (1.2).
Proof of Theorem 1.3 and Theorem 1.4
In this section, we will give the proof of Theorem 1.3 and Theorem 1.4. Before that, we recall the following result established in [24].
The following are decay estimates for higher order derivatives of the smooth solution.
where C depends on $\alpha$ and $\|u_0\|_{H^s \cap L^p}$.
where C depends on $\alpha$ and $\|u_0\|_{H^s \cap L^p}$.
Proof of Theorems 3.3 and 3.4. We adopt the Fourier splitting method again. It follows from (3.1) that, similarly to the proofs of Theorem 1.1 and Theorem 1.2, a differential inequality for the time derivative of the relevant norm holds for any $t > T_0$. The proofs of Theorems 3.3 and 3.4 are finished.
To prove Theorem 1.3 and 1.4, we first present the following commutator estimate.
Lemma 3.5. Let $s > 0$ and $1 < p < \infty$. Then
$$\|\Lambda^s(fg) - f \Lambda^s g\|_{L^p} \le C \left( \|\nabla f\|_{L^{p_1}} \|\Lambda^{s-1} g\|_{L^{p_2}} + \|\Lambda^s f\|_{L^{p_3}} \|g\|_{L^{p_4}} \right),$$
where $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{p_3} + \frac{1}{p_4}$. For the proof we refer to [8]; the details are omitted here.
By (3.5), we have (3.9). Using Theorem 1.1 and Theorem 3.3 yields the corresponding bound (3.10) for any $t > T_0$ and $\frac{1}{2} \le \alpha \le 1$. Putting (3.10) into (3.9), one has (3.11) for $\frac{1}{2} < \alpha \le 1$. In the case of $0 < \alpha \le \frac{1}{2}$, we can also establish an estimate similar to (3.11). Indeed, by the divergence-free condition, (3.7) can be rewritten in a form to which the commutator estimate (3.6) applies. It then follows from Theorem 1.1 and Theorem 3.3 that, for any $t > T_0$ and $0 < \alpha \le \frac{1}{2}$, the analogous bound (3.14) holds. Hence, we obtain that, for $0 < \alpha \le 1$, the estimate (3.15) holds, where $i = 0, 1$. Inserting (3.15) with $i = 1$ into (3.11), we get (3.16). To complete the proof, we use induction on $m$. The case $m = 0$ has been proved in Theorem 1.1. Assume the estimate holds up to order $m - 1$. Then, thanks to (3.16), we have the differential inequality (3.17); integrating (3.17) in time from $T_0$ to $t$ yields the desired decay. The proof of Theorem 1.3 is finished.
A Existence of weak solutions
In this section we show that the generalized Navier-Stokes equations with α > 0 have a global weak solution corresponding to any prescribed L 2 initial data.
We start with a definition of weak solutions for (1.2) with L 2 initial data u 0 . Let T > 0 be arbitrarily fixed.
Definition D.1. The function pair (u(x, t), p(x, t)) is called a weak solution of the problem (1.2) if the following conditions are satisfied. The next theorem states that there exist global-in-time weak solutions of (1.2).
We will use the Friedrichs method to prove Theorem D.1. Before that, let us recall the Picard theorem [14] and the Bernstein inequality [1]. (ii) F is locally Lipschitz continuous, i.e., for any X ∈ O there exist L > 0 and an open neighborhood U X of X such that the Lipschitz bound holds. Then, for any X 0 ∈ O, there exists a time T such that the ODE has a unique local solution. Proof of Theorem D.1. For N ≥ 1, let J N be the spectral cutoff defined as in Remark 2.1, and let P denote the Leray projector over divergence-free vector-fields. Consider the ODE (4.1) in the space L 2 N . We shall apply the Picard theorem to show the local existence and uniqueness of a solution to (4.1). We write the equation as an ODE with right-hand side F ; then F satisfies the local Lipschitz condition. In fact, for any u, v ∈ L 2 N , by the Hölder inequality and the Bernstein inequality, we get the required bound. Consequently, the Picard theorem implies that (4.1) has a unique local (in time) solution u N ∈ C 1 ([0, T N ); L 2 N ). Recalling that P 2 = P , J 2 N = J N , and P J N = J N P , it is easy to check that P u N and J N u N are also solutions of (4.1). By uniqueness, P u N = u N (i.e., div u N = 0) and J N u N = u N . Then (4.1) can be simplified as (4.2). Multiplying the first equation of (4.2) by u N and integrating by parts, we obtain the energy bound (4.3). This implies that u N remains bounded in L 2 N for finite time, whence T N = T . Next, we will use the Aubin-Lions lemma [20] to prove the strong convergence of u N (or a subsequence) in L 2 (0, T ; L 2 (Ω)) for any Ω ⊂ R 3 . In fact, for any h ∈ L 2 (0, T ; H 3 (R 3 )) and α ≤ 5/2, we obtain the required duality estimates. Combining these estimates with the first equation of (4.2), we obtain ∂ t u N ∈ L 2 (0, T ; H −3 (R 3 )), (4.5) which together with (4.3) yields that u N → u in L 2 (0, T ; L 2 (Ω)) for any Ω ⊂ R 3 .
When α > 5/2, it can be proved in a similar way that system (1.2) possesses a weak solution obeying Definition D.1. The proof of the theorem is finished.
Assessing Environmental Impact Indicators in Road Construction Projects in Developing Countries
Environmental pollution is considered to be one of the main concerns in the construction industry, and it has become a major challenge to construction projects because of the large amount of pollution they generate. There are different types of environmental impact indicators, such as the greenhouse gas (GHG) footprint, eutrophication potential (EP), acidification potential (AP), human health (HH) particulates, ozone depletion, and smog. Each of these environmental impact indicators can be linked to different phases of a construction project. The overall environmental impact indicators can be divided into direct, indirect, and operational emissions. This paper presents a Building Information Modeling (BIM)-based methodology for the assessment of environmental impacts in road construction projects. The model takes into account the overall life cycle of the road construction project, which is divided into: manufacturing phase, transportation phase, construction phase, maintenance phase, operational phase, recycling phase, and deconstruction phase. A case study is presented to demonstrate the applicability of the proposed model. The proposed model solves a major problem for road construction project teams who want to assess the environmental impact indicators associated with their projects prior to the start of execution.
Introduction
Infrastructure construction projects in general, and road construction projects specifically, are associated with a huge amount of emissions that vary from the start of project execution until the demolition stage [1]. This pollution can affect human health and the economic balance in a very severe manner [2]. Therefore, the issues of sustainable development and Building Information Modeling (BIM) have emerged, and it is vital to quantify these emissions to reduce the hazards. This article introduces six different types of road construction environmental impacts: impact on the greenhouse gas (GHG) footprint, impact on acidification potential (AP), impact on human health (HH) particulates, impact on eutrophication potential (EP), impact on ozone depletion, and impact on smog. Different mathematical models exist in the literature to assess environmental impact indicators in the construction industry. Abanda et al. [3] developed a review mathematical model of embodied energy, greenhouse gases, and wastes based on the time-cost parameters of building projects. Tsai et al. [4] proposed a mathematical programming approach for selecting green building projects. On the other hand, energy has played an important role in economic growth over the decades. According to Hawken and Lovins [5], the more products are produced, the more natural resources are consumed. Moreover, economic activities require large amounts of energy and material to be consumed and produce more waste in the form of environmental emissions [6]. Rani et al. [2] defined primary energy as "the energy in the form of Natural Gas, Wood, Wind, Hydropower, and Sunlight". Primary energy can be divided into renewable and non-renewable energy. Use of these natural resources during the construction, operation, and maintenance stages of the project is associated with environmental impact indicators. Considering the life cycle of roads, the consumption of primary energy is related to the consumption of electricity used for lighting roads, and the consumption of natural gas, diesel, and gasoline used by operational passenger cars and construction equipment. Different researchers have tackled the issue of sustainability in infrastructure construction projects. Umer et al. [7] developed a sustainability assessment hierarchical model for roadway projects under uncertainties, using a green-based index approach to evaluate how well a roadway project meets its sustainability objectives.
Moreover, different researchers have tackled the issue of environmental analysis. Lim et al. [8] developed an optimization model to reduce environmental impacts and costs in urban water infrastructure projects. Park et al. [9] developed a qualitative assessment model to determine the environmental impacts over the life cycle of highways. The model takes into account four stages: manufacturing of construction materials, construction, maintenance/repair, and demolition/recycling. They found that energy consumption during the maintenance and repair stage was the highest; however, Park et al. [9] did not demonstrate how to compute the environmental impact indicators during the project life cycle. Capiteo et al. [10] developed a model for pavement materials using warm mix asphalt. Barandica et al. [11] developed a model to reduce the impact of greenhouse gas emissions resulting from road construction using life cycle assessment. They found that earthworks are the main activity involved, contributing 60-85% of the total emissions in the construction stage, but they did not take into account the primary energy consumption resulting from road lighting or the fuel consumption of passenger cars and construction equipment.
Furthermore, different researchers have tackled the issue of developing BIM. Marzouk and Abdel Aty [12] developed a model to maintain subway infrastructure using BIM. The model proposed the application of BIM in subways by modeling different components, including structural, mechanical, electrical, and Heating, Ventilation, and Air Conditioning (HVAC) components. Marzouk and Hisham [13] developed a model to control cost in bridge projects using Building Information Modeling; the model integrates BIM with the earned value (EV) concept to determine the project status at a specific reporting date. Marzouk and Abdel Aty [14] developed a model to monitor thermal comfort in subways using BIM; the model presents an application that utilizes a wireless sensor network (WSN) and BIM in order to monitor thermal conditions within a subway. Jullien et al. [15] developed a specific tool dedicated to road life cycle assessment, with the objective of decreasing the consumption of materials, water, and energy by computing their environmental impacts. However, none of the above researchers assessed environmental impact indicators together with primary energy, and none of them integrated BIM, sustainability assessment, and environmental impact indicators in road construction projects. Therefore, there is a need to develop a model that integrates sustainability assessment, BIM, and environmental impact indicators in road construction projects in Egypt.
The main objective of this paper is to quantify the environmental impact indicators associated with road construction projects using BIM. This is achieved through the development of the Environmental Building Information Modeling (EBIM) model. The model solves a major problem for road construction project teams who want to identify and quantify the environmental impact indicators associated with their projects prior to the start of execution.
Research Methodology
The EBIM is composed of seven stages, as depicted in Figure 1: (1) identifying environmental impact indicators; (2) identifying project assemblies and life cycle assessment boundaries; (3) developing the BIM module; (4) defining input for the time, cost, and environmental modules; (5) applying environmental emission algorithms; (6) defining the output of the proposed model; and (7) conducting a comparative case study.
Identifying Environmental Impact Indicators
Environmental impact indicators were identified through a literature review and interviews using the indirect method and a two-step Delphi technique in order to ensure a consensus level among ten experts, each of whom has more than twenty years of experience in road construction projects and environmental impact assessment. Their status ranged from site engineer to general manager of road construction projects. Experts were asked to provide the following information: "based on your experience in road construction projects, and your experience in environmental impact assessment, please identify the environmental impact indicators encountered in your project". Experts agreed that environmental impact indicators can be divided into: impact on greenhouse gas (GHG) footprint, impact on acidification potential (AP), impact on human health (HH) particulates, impact on eutrophication potential (EP), impact on ozone depletion, and impact on smog [1,2]. Experts also agreed that these environmental impact indicators should be assessed in terms of direct, indirect, and operational emissions.
Identifying Project Assemblies and Life Cycle Assessment Boundaries
Road construction project activities can be divided into eight activities: performing earthworks, performing fill embankment, placing sub base, placing curbstone, insulating prime coat, insulating tack coat, placing stabilized base coarse, and placing wearing coarse. Each of these activities is assessed against the time needed to execute the activity, the life cycle cost, the environmental impact indicators, and the total primary energy consumed by the activity, as sketched below. The proposed model accounts for the different project phases, which are the manufacturing phase, transportation on-site phase, construction phase, maintenance phase, recycling phase, and deconstruction and demolition phase [16]. The life cycle assessment system boundary identifies the inputs (materials, energy, and equipment) along with the outputs (emissions) from each step in the process of the life cycle (manufacturing, transportation on-site, construction, maintenance, recycling, and deconstruction and demolition). Therefore, the system controls the inputs and outputs [17].
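As an illustrative sketch of this assessment scheme, each activity can be represented by a record holding its four assessment dimensions across the project phases. Class and field names here are assumptions for illustration, not the paper's implementation:

```python
# Illustrative data structure: each road activity is evaluated for time,
# life cycle cost, environmental impact indicators, and primary energy
# across the project phases listed in the text.
from dataclasses import dataclass, field

PHASES = ["manufacturing", "transportation on-site", "construction",
          "maintenance", "recycling", "deconstruction and demolition"]

ACTIVITIES = ["earthworks", "fill embankment", "sub base", "curbstone",
              "prime coat", "tack coat", "stabilized base coarse",
              "wearing coarse"]

@dataclass
class ActivityAssessment:
    activity: str
    duration_days: float = 0.0
    life_cycle_cost: float = 0.0          # equivalent annual worth, L.E/year
    primary_energy_mj: float = 0.0
    emissions_by_phase: dict = field(default_factory=dict)  # phase -> kg-eq

assessments = [ActivityAssessment(a) for a in ACTIVITIES]
```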
Developing BIM Module
The third step of the proposed model is to develop the BIM module using Autodesk Revit 2015 as an add-on in Revit, and to define systems in Copert 4 and the Athena Impact Estimator. The 3D BIM modules constitute the database that is used to compute the environmental impact indicators in terms of time, life cycle cost, overall environmental impact indicators, and primary energy associated with road construction processes. Different properties of the construction project should be defined in Copert 4, such as country name, country information, fuel information, vehicle information, input fleet data, and input circulation data. The interface of the rule wizard of the Copert 4 software is depicted in Figure 2. Copert 4, as mentioned above, computes the different environmental impacts for the construction project, such as impact on greenhouse gas (GHG) footprint, impact on acidification potential (AP), impact on human health (HH) particulates, impact on eutrophication potential (EP), impact on ozone depletion, and impact on smog, with regard to the different life cycle phases of the construction project. Copert 4 output is in Excel format.
Defining Input for Time Module, Cost, and Environmental Module
The proposed application computes time, life cycle cost, environmental impact, and primary energy for the construction project. The module is divided into three divisions: the time division, the cost division, and the environmental division. The user is asked to enter certain inputs in each division. For the time division, the user is asked to enter the number of crews, the productivity of the crew, and the nature of the crew (single-based crew or range-based crew).
The input for the time division is the quantity of work to be performed, the productivity of the crew in performing the task, the efficiency of the crew in performing the task, and the number of crews available for performing the task. Figure 3 illustrates an example of the time division. For the environmental division, the user is asked to enter the relative weights of the six environmental impact indicators (W1, W2, W3, W4, W5, and W6). Figure 4 illustrates an example of the interface of the environmental division.
As illustrated in Figure 3, the interface asks the user to enter the productivity of the road crew in performing each activity. The system asks the user to specify the number of crews performing each activity, and whether it is a single-based crew or a range-based crew. The "check values" button enables the user to check the entries before submitting them to the system. For the cost division, the user is asked to enter some information regarding the cost in order to be able to compute the life cycle cost of the construction project, such as the minimum attractive rate of return (MARR), the maintenance cost per year (if applicable), the maintenance cost per specific period of time (if applicable), and the life span, for example, 25, 50, or 100 years. Then, the user is asked to enter the maintenance cost at a specific year (if applicable). As illustrated in Figure 4, the interface of the system asks the user to specify the relative weights of the six environmental impact indicators (W1, W2, W3, W4, W5, and W6). These weights were obtained from the database of a road construction company performing project 4, presented in the case study section.
Applying Environmental Emission Algorithms
The fifth step is to compute the environmental impacts. The proposed model computes time, life cycle cost, environmental impacts, and primary energy. Time is computed based on the quantity of work to be performed, the productivity of the crew in performing the task, the efficiency of the crew, and the number of crews available for performing the task.
Computations of Environmental Impact Indicators
The overall environmental impacts of a road construction project can be classified into three major categories: direct, indirect, and operational emissions. The overall environmental impacts equal the summation of the direct, indirect, and operational emissions. Direct emissions can be defined as "the emissions that are directly related to on-site construction processes . . . computed based on the amount of fuel consumed from equipment during the construction process" [1]. The direct emissions are equal to the construction emissions in addition to the transportation, recycling, deconstruction, and repair/maintenance emissions. The total direct emissions are computed using Equation (1), where Ed refers to the total direct emissions, and T1, T2, T3, T4, T5, and T6 refer to the modification indices of the impact on the greenhouse gas (GHG) footprint, impact on acidification potential (AP), impact on human health (HH) particulates, impact on eutrophication potential (EP), impact on ozone depletion, and impact on smog, respectively. Each modification index is equal to the severity index multiplied by the corresponding weighted percentage. Eghg, Eap, Ehh, Eep, Eod, and Es represent the potentials produced from the construction, transportation on-site, maintenance, deconstruction, and demolition phases of the construction project, respectively. Eghg sum, Eap sum, Ehh sum, Eep sum, Eod sum, and Es sum represent the potential sums for the construction project, including the direct and indirect emissions, for the same six indicators, respectively. Six parameter weights (W1, W2, W3, W4, W5, and W6) are assigned to the six environmental impact indicators; the sum of the weighted percentages should equal 1. Table 1 lists the severity index of each environmental parameter on human health. The severity indices in Table 1 are used to compute the modification indices (T) for the different environmental impacts in Equation (1) by multiplying the severity index by the weighted percentage (W).
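As a minimal sketch of this weighting scheme, assuming (as one plausible reading of the definitions above) that each phase potential is normalized by its project-wide sum and scaled by its modification index, the computation could look like this; names and the normalization are illustrative assumptions:

```python
# Sketch of the Equation (1) weighting scheme: T_i = severity x weight,
# and one plausible reading of Ed as a normalized weighted sum.

SEVERITY = {"very low": 2, "low": 4, "medium": 6, "high": 8, "very high": 10}

def modification_indices(weights, severities):
    """T_i = severity index x weighted percentage (weights must sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return {k: SEVERITY[severities[k]] * weights[k] for k in weights}

def total_direct_emissions(T, phase_potentials, potential_sums):
    """One plausible reading of Eq. (1): Ed = sum_i T_i * E_i / E_i,sum."""
    return sum(T[k] * phase_potentials[k] / potential_sums[k] for k in T)

# Example using the weights from the numerical example later in the paper:
weights = {"ghg": 0.4, "ap": 0.1, "hh": 0.1, "ep": 0.2, "od": 0.1, "smog": 0.1}
severities = {"ghg": "high", "ap": "medium", "hh": "low",
              "ep": "low", "od": "very low", "smog": "very high"}
T = modification_indices(weights, severities)   # e.g. T["ghg"] = 3.2
```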
The greenhouse gas (GHG) footprint produced from the construction site (Eghg c), transportation (Eghg t), deconstruction (Eghg d), and maintenance (Eghg m) is computed using Equations (2)-(5), which share the common form

Eghg = Cons AVG(j) × Working hours(j) × Act Work(j) × Ɣ diesel × CEF × T(j)   (2)-(5)

with the task time T(j) replaced by the on-site transportation time T-tra(I) in Equation (3). Here, j is the number of equipment items used in construction for a specific construction element, and I is the number of equipment items used in the on-site transportation process. Cons AVG refers to the average consumption of certain equipment (liters/hour). Working hours is the number of working hours of the equipment (typically 8 h/day). Act Work is the percentage of the equipment that will actually work, which is approximated as 70% of the working hours of the equipment [18,19]. Ɣ diesel is the density of diesel, which is 0.832 kg/L. CEF is the carbon emission factor for diesel, which is 4 kg CO2-Eq/kg [19]. T-tra refers to the transportation time for certain equipment, and T refers to the time for executing the task. Table 2 illustrates the average consumption (Cons AVG) of some equipment [20].
A conversion factor is used to convert from gallons to liters, where 1 gallon = 3.785 L. T is the time needed to execute the construction activity, which is computed using Equation (6) [1].
T = Quantity of work to be performed / (Productivity of a single equipment item × efficiency × number of equipment items)   (6)

where efficiency is assumed to be 80% [21,22].
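A minimal sketch of Equations (6) and (2) follows; the constants track the text, while the function names and the 20 L/h consumption figure in the example are illustrative assumptions:

```python
# Sketch of Equations (6) and (2): activity duration and on-site GHG
# footprint for one piece of equipment.

GAMMA_DIESEL = 0.832   # density of diesel, kg/L
CEF = 4.0              # carbon emission factor for diesel, kg CO2-Eq/kg
EFFICIENCY = 0.80      # assumed crew efficiency [21,22]
ACT_WORK = 0.70        # share of working hours the equipment actually works

def duration_days(quantity, productivity, n_equipment, efficiency=EFFICIENCY):
    """Equation (6): T = quantity / (productivity x efficiency x n)."""
    return quantity / (productivity * efficiency * n_equipment)

def eghg_construction(cons_avg_l_per_h, working_hours_per_day, t_days):
    """Equation (2)-style GHG footprint (kg CO2-Eq) for one equipment item."""
    return (cons_avg_l_per_h * working_hours_per_day * ACT_WORK
            * GAMMA_DIESEL * CEF * t_days)

# Earthworks illustration: 1000 m3 with six excavators of 70 m3/day each.
t = duration_days(1000, 70, 6)       # about 3 days
e = eghg_construction(20, 8, t)      # assuming 20 L/h average consumption
```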
Indirect emissions refer to "emissions that are produced off-site construction processes" [1]. They include manufacturing, transportation off-site, and operation emissions. The indirect emissions are computed using Equation (7), where Eghg i , Eap i , Ehh i , Eep i , Eod i , and Es i represent the potentials produced from the material production and transportation off-site phases for the impact on the greenhouse gas (GHG) footprint (equivalent carbon dioxide), impact on acidification potential (AP), impact on human health (HH) particulates, impact on eutrophication potential (EP), impact on ozone depletion, and impact on smog, respectively.

Operational emissions are the emissions produced from the daily operation of the facility after being constructed until the end of its remaining life cycle [23]. They are produced from four main sources: electricity, natural gas, diesel, and gasoline. The operational emissions result from the consumption of electricity used for lighting the road, and from the consumption of different types of fuels used by passenger cars and construction equipment during the operational stage. The total
Operational emissions are the emissions produced from daily operation of the facility after being constructed till the end of its remaining life cycle [23].They are produced from four main sources: electricity, natural gas, diesel, and gasoline.The operational emissions resulted from the consumption of electricity used for lighting the road, and from the consumption of different types of fuels used by passenger cars, and construction equipment during the operational stage.The total operational emissions are the sum of the operational emissions resulting from the impact on greenhouse gases (GHGs), impact on sulfur dioxide, impact on particulate matter, and impact on smog.The total quantity of carbon dioxide can be computed by multiplying quantity of each greenhouse gas by global warming (g) of the potential.Operational emissions can be calculated using Equation (8).Operational emissions of sulfur dioxide can be calculated using Equation (9).Operational emissions of particulate matter can be calculated using Equation (10).Operational emissions of smog can be calculated using Equation ( 11) [1].Table 3 lists the global warming potential over a 100-year period.
where Cons elec and Cons nags are the total amounts of electricity consumption and natural gas consumption, respectively, over the life span of the construction project; each is equal to the average annual consumption of electricity or natural gas multiplied by the area of the construction project and the lifespan of the facility, which is assumed to be 50 years. EF ELEC (j) ghg , EF ELEC pm , EF ELEC ap , and EF ELEC s represent the potential emission factors for electricity consumption with respect to the impact on the greenhouse gas (GHG) footprint, impact on human health (HH) particulate matter, impact on acidification potential (AP), and impact on smog, respectively. EF NAGS (j) ghg , EF NAGS pm , EF NAGS ap , and EF NAGS s represent the potential emission factors for natural gas consumption with respect to the same four indicators. The total operational emissions are the sum of the emissions produced with respect to the impact on the greenhouse gas (GHG) footprint, impact on human health (HH) particulate matter, impact on acidification potential (AP), and impact on smog.
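The Equations (8)-(11) pattern can be sketched as follows: each pollutant total is consumption times emission factor, summed over the electricity and natural-gas sources. The emission-factor values below are placeholders for illustration; the paper takes them from Table 4 and the national energy balance:

```python
# Sketch of the operational-emission computations (Equations (8)-(11)).

def operational_emissions(cons_elec_kwh, cons_nags_kwh, ef_elec, ef_nags):
    """Per pollutant p: Eop_p = Cons_elec * EF_ELEC_p + Cons_nags * EF_NAGS_p (g)."""
    return {p: cons_elec_kwh * ef_elec[p] + cons_nags_kwh * ef_nags[p]
            for p in ef_elec}

# Hypothetical emission factors (g/kWh), for illustration only:
ef_elec = {"ghg": 500.0, "ap": 1.2, "pm": 0.05, "smog": 0.8}
ef_nags = {"ghg": 200.0, "ap": 0.3, "pm": 0.01, "smog": 0.2}

eop = operational_emissions(1.0e6, 2.0e5, ef_elec, ef_nags)
total_operational = sum(eop.values())
```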
Table 3. Global warming potential over a 100-year period.

Greenhouse Gas     Global Warming Potential
Carbon dioxide     1
Methane            21
Nitrous oxide      310

Table 4. Emission factors of pollutants from electricity consumption (adapted from [25]).

Pollutant          Emission Factor (g/kWh)
In order to cope with the problem that the electricity grid mix varies from one country to another, the International Energy Agency (IEA) [27] stated that the energy balance of a country can be used to obtain a rough estimate of the emission factors generated from energy consumption, in conjunction with the share of the electricity industry in generating air pollutants. For example, the Misr State Environmental Association (MSEA) [28] reported that 60% of CO2 emissions in Egypt (as an example of a developing country) are generated by the electricity industry. Therefore, the energy balance can be used as a method to determine emission factors in cases where there is not sufficient data. The overall environmental impacts for each single phase in construction can be calculated using Equation (12):
E phase = Ed + Ed i   (12)

where Ed and Ed i represent the direct and indirect emissions, respectively. The global environmental impacts can be computed using Equation (13):
E global = Ed + Ed i + Edo p   (13)

where Ed, Ed i , and Edo p represent the direct, indirect, and operational emissions, respectively.
Computations of Life Cycle Cost
The life cycle cost is an equivalent annual worth for the different cost components, computed using the minimum attractive rate of return (MARR). The life cycle cost is calculated using Equation (14):

TLCC = LCC lab + LCC equip + LCC mat + LCC main1 + LCC main2 + LCC sing   (14)

where TLCC refers to the total life cycle cost, and LCC lab, LCC equip, LCC mat, LCC main1, LCC main2, and LCC sing refer to the equivalent annual worth of the labor cost, equipment cost, material cost, maintenance cost per year, maintenance cost per period of time, and single payment, respectively.
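A minimal sketch of Equation (14) follows. The capital-recovery conversion for annualizing a single payment via the MARR is a standard engineering-economics step and is an assumption here, since the text does not spell it out:

```python
# Sketch of Equation (14): total life cycle cost as the sum of equivalent
# annual worths (L.E/year).

def capital_recovery(present_value, marr, n_years):
    """Annualize a single payment: A = P * i(1+i)^n / ((1+i)^n - 1)."""
    i = marr
    return present_value * (i * (1 + i) ** n_years) / ((1 + i) ** n_years - 1)

def tlcc(lcc_lab, lcc_equip, lcc_mat, lcc_main1, lcc_main2, lcc_sing):
    """Equation (14): TLCC = LCClab + LCCequip + LCCmat + LCCmain1 + LCCmain2 + LCCsing."""
    return lcc_lab + lcc_equip + lcc_mat + lcc_main1 + lcc_main2 + lcc_sing

# Values from the numerical example (already expressed per year):
total = tlcc(500_000, 10_000_000, 2_000_000, 500_000, 40_000, 1)
# total == 13_040_001 L.E/year
```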
Computations of Primary Energy
The primary energy is the sum of the primary energy consumed from electricity, natural gas, and oil during the different stages of the project, measured in megajoule (MJ) units. The overall primary energy is computed using Equation (15) [29]:

TPE = PE manu + PE tra-off + PE cons + PE tra-on + PE oper + PE dec + PE rec   (15)

where TPE refers to the total primary energy, and PE manu, PE tra-off, PE cons, PE tra-on, PE oper, PE dec, and PE rec refer to the primary energy consumed in the manufacturing stage, transportation off-site stage, construction stage, transportation on-site stage, operation and maintenance stage, deconstruction stage, and recycling and reuse stage, respectively. There are two main sources of primary energy in the operational stage: electricity consumption and natural gas consumption (the consumption of oil arises during the other stages of the project). The total electricity consumption during the operational stage can be computed using Equation (16) [2], and the total natural gas consumption during the operational stage can be determined using Equation (17) [2]:

TEC = Cons elec × SA × number of years   (16)
TNGC = Cons nags × SA × number of years   (17)

where TEC and TNGC refer to the total electricity and natural gas consumption during the operational stage, respectively, and SA refers to the total surface area of the construction project. The annual electricity consumption is assumed to be 200 kWh/m2 [29]. The annual natural gas consumption is assumed to be 28 m3/m2 (this amount was computed based on data on the amount of natural gas production in Egypt in 2013, the percentage of natural gas consumed in electricity generation in Egypt, and the total surface area of paved and unpaved roads) [30]. TEC and TNGC are measured in megajoules. A conversion factor is used to convert from m3/m2 of natural gas to kWh/m2, where 1 m3/m2 of natural gas equals 10.55 kWh/m2 [31,32].
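A minimal sketch of Equations (15)-(17) follows; function names are illustrative, and the unit handling uses the text's conversion factor of 10.55 kWh per m3 of natural gas:

```python
# Sketch of Equations (15)-(17): total primary energy and operational
# electricity / natural-gas consumption.

NG_KWH_PER_M3 = 10.55   # conversion factor from the text [31,32]

def tec_kwh(cons_elec_kwh_per_m2, surface_area_m2, n_years):
    """Equation (16): total operational electricity consumption."""
    return cons_elec_kwh_per_m2 * surface_area_m2 * n_years

def tngc_kwh(cons_nags_m3_per_m2, surface_area_m2, n_years):
    """Equation (17), converted to kWh via the 10.55 kWh/m3 factor."""
    return cons_nags_m3_per_m2 * NG_KWH_PER_M3 * surface_area_m2 * n_years

def tpe(pe_manu, pe_tra_off, pe_cons, pe_tra_on, pe_oper, pe_dec, pe_rec):
    """Equation (15): total primary energy (MJ) over all project stages."""
    return pe_manu + pe_tra_off + pe_cons + pe_tra_on + pe_oper + pe_dec + pe_rec
```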
Numerical Example
The following data were obtained during the site visit for project 1 (Asyut/Sohag/Red Sea), presented in Table 6 in the case study section. The length of the project is 180 km, with a total cost of 1.156 billion L.E., and the project was estimated to finish on 31 March 2016. In this case, W1 = 0.4, W2 = 0.1, W3 = 0.1, W4 = 0.2, W5 = 0.1, and W6 = 0.1. From Table 1, the severity index of T1 = 8 (high), of T2 = 6 (medium), of T3 = 4 (low), of T4 = 4 (low), of T5 = 2 (very low), and of T6 = 10 (very high). The amount of Eghg produced from project 1 (construction and transportation on-site phase) in performing the earthworks activity is 598. The amount of earthworks to be performed is 1000 m3/day, the productivity of a single excavator is 70 m3/day, and the number of equipment items used is 6; the activity duration follows by applying Equation (6).
Quantity of work to be performed tivity of single Equipment × efficiency × number items of Equipments) (6) to be 80% [21,22].r to "emissions that are produced off-site construction processes" [1]., transportation off site emissions, and operation.The indirect emissions n ( 7 i , Eod i , and Es i represent potentials produced from material production, phases for impact on the greenhouse gas (GHG) footprint (equivalent acidification potential (AP), impact on human health (HH) particulates, potential (EP), impact on ozone depletion, and impact on smog, are the emissions produced from daily operation of the facility after being ts remaining life cycle [23].They are produced from four main sources: iesel, and gasoline.The operational emissions resulted from the sed for lighting the road, and from the consumption of different types of rs, and construction equipment during the operational stage.The total diesel is 0.832 kg/I, and the carbon emission factor (CEF) = 4 kg CO 2 -Eq/Kg, Tis 6 days.By applying Equations ( 2) and ( 3 In order to compute the total in direct emissions, W1 = 0.4, W2 = 0.1, W3 = 0.1, W4 = 0.2, W5 = 0.1, and W6 = 0.1; the severity index from table of T1 = 8 (High), the severity index of T2 = 6 (medium), the severity index of T3 = 4 (low), the severity index of T4 = 4 (low), the severity index of T5 = 2 (very low), and the severity index of T6 = 10 (very high).Then: The amount of Eghg produced from project 1 (manufacturing, and transportation off-site phase) is 17,000 Kg CO 2 -Eq, Eghg sum is 80,140 Kg CO 2 -Eq, Eap is 7850 Kg SO 2 , Eap sum is 11,360 Kg SO 2 , Ehh is 7550 Kg PM 2.5 , Ehh sum is 9100 Kg PM 2.5 , Eep is 7950 kg N, Eep sum is 9520 Kg N, Eod is 8820 Kg CFC-11, Eod sum is 10,530 Kg CFC-11, and Es is 19,250 Kg O 3 , and Es sum is 37,885 Kg O 3 , then by applying Equation (7): Total of direct emissions: 11): The equivalent annual worth for labor cost, material cost, equipment cost, maintenance cost per year, maintenance cost per period of time, and number of single payments are 500,000 L.E/year, 2,000,000 L.E/ year, 10,000,000 L.E/year, 500,000 L.E/year, 40,000 L.E/year, and 1, respectively.Then, by applying Equation ( 14): TLCC = (total life cycle cost) = 500,000 + 2,000,000 + 10,000,000 + 500,000 + 40,000 + 1 = 1304,0001 L.E/year For project 1, the surface area of the project is 3240 km 2 , the life span of the project is 50 years, and the annual electricity consumption is 295.4 kwh/m 2 (from interviews).Then, by applying Equations ( 16) and ( 17 The total primary energy (PE) is computed based on Equation (15), where PE during manufacture, and transportation off-site phase is 77,890 megajoules, PE during the maintenance phase is 59,890 megajoules, PE during operational is 32,400 megajoules, PE during deconstruction, and demolition phase is 1628.2megajoules, PE during the recycling and reuse phase is 84.24 megajoules, PE during the construction, and transportation on-site phase is 1890.8megajoules (interviews, and AbdelKader [1]), then by applying Equation ( 15): TPE = 77,890 + 59,890 + 1890.8 + 32,400 + 1628.2 + 84.24 = 173,783.24Mega Joule
Defining Output of the Proposed Model
The sixth step is to compute the time, life cycle cost, environmental impacts, and primary energy. The interface of the proposed model, which is used to compute the execution time for the asphalt construction, is depicted in Figure 5. The interface of the life cycle cost assessment is depicted in Figure 6.
The interface of the environmental impact calculation is depicted in Figure 7, and the interface of the primary energy computation is depicted in Figure 8.
As illustrated in Figure 5, the interface of the time module asks the user to specify the type of crew engaged in performing each road activity. In each division, the "next" button enables the user to move from one division to another, the "calculate" button enables the user to compute the parameters of the specific division, and the "convert" button enables the user to export the results to Microsoft Excel. As illustrated in Figure 6, the interface of the system computes the life cycle cost of performing each road activity. As illustrated in Figure 7, the interface of the system computes the overall environmental impacts of performing each road activity. As illustrated in Figure 8, the system computes the energy consumed in performing each road activity.
Conducting Comparative Case Study
In this section, a comparative case study is conducted to demonstrate the results obtained from the above-described model, and to validate the results by comparing the results obtained from the software with the results published by previous researchers. Table 6 lists the description of the road construction projects that were used in the case study, based on interviews with ten experts, each of whom has more than twenty years of experience in road construction projects. Table 7 lists the number of crews performing each road construction activity in the six projects.
As listed in Table 8, the amount of environmental impacts and energy consumption during both the construction and transportation phase and the operating phase is higher for the larger projects (for example, Project 1) than for Projects 4 and 6, because of the characteristics of the road construction and the number of crews. For example, project 1 has a length of 180 km and the number of equipment crews working on the project is 42, while in project 4 the length of the project is 33 km and the number of equipment crews working on the project is 15. These characteristics of a road project influence the amount of environmental impacts generated and the energy consumed during the construction and transportation phase and the operating phase. On the other hand, the other phases of the road construction projects remain close to each other across the six projects. Thus, mitigation strategies should be developed to overcome these situations.
As illustrated in Figure 9, the greenhouse gas emissions result from the excessive consumption of natural gas, oil, and coal. Due to the increase in urban development in Egypt, the Egyptian government has launched several new road projects. Thus, the usage of construction materials and equipment has increased, which in turn leads to an increase in greenhouse gas emissions. Greenhouse gas emissions may have a severe indirect effect on the health of civilians [33].

As illustrated in Figure 10, the impact on acidification potential emissions results from the excessive usage of electricity, automobiles, and construction equipment. Because of the excessive usage of construction equipment (trucks, loaders, and excavators) during the process of road construction, an impact on acidification potential has emerged. Acidification potential emissions may have a severe indirect effect on the health of civilians through affecting water biota and terrestrial plants and animals. Acidification can cause respiratory diseases, or can make these diseases worse; respiratory diseases like asthma or chronic bronchitis make it hard for people to breathe. Also, the ecological effects of acid rain are most clearly seen in aquatic environments, such as streams, lakes, and marshes, where it can be harmful to fish and other wildlife. Dead or dying trees are a common sight in areas affected by acid rain. Acid rain leaches aluminum from the soil, and that aluminum may be harmful to plants as well as animals. Acid rain also removes minerals and nutrients from the soil that trees need to grow.

As illustrated in Figure 11, the impact on particulate matter (HH) potential emissions results from the usage of motor vehicles and coal combustion. Because of the increase in the usage of construction equipment (trucks, loaders, and excavators) during road construction, due to the huge number of road projects under construction in Egypt, the amount of particulate matter (HH) potential emissions has increased. Particulate matter (HH) emissions may have a severe indirect effect on the health of civilians through polluting the air, and can cause mortality and respiratory hospitalizations [34].

After that, the difference in percentage was computed between the results obtained from the model and the results published by Park et al. [9]. Table 9 lists the difference in percent between the model and Park et al. [9] regarding the impact on greenhouse gas (GHG) footprint, impact on acidification potential (AP), impact on human health (HH) particulate, impact on eutrophication potential (EP), impact on ozone depletion, and impact on smog. Table 10 lists the difference in percent between the model and Park et al. [9] regarding energy consumption. Interviews were then held with the previously mentioned experts to justify why this average percentage error occurred. Experts replied that the data collected for the case study only approximately measure the input variables, such as the number of construction trucks and passenger cars crossing the roads per day; the ratio of passenger cars to construction trucks was approximated as an average per day. Moreover, they pointed out that there should be official statistics on the number of cars crossing each road per day. Experts were then asked to propose a set of mitigation strategies in order to reduce the amount of environmental impact indicators resulting from operations in road construction projects. They pointed out nine mitigation strategies that should be implemented in order to reduce the amount of environmental impacts. These mitigation strategies will be used in another model, currently under preparation, that is capable of reducing the amount of construction waste generated in road construction projects using system dynamics. The nine mitigation strategies that should be implemented are listed below: 1.
Then, interviews were held with the previously mentioned experts to explain why this average percentage error occurred. The experts replied that the data collected for the case study only approximate the input variables, such as the number of construction vehicles and passenger cars crossing the roads per day. Also, the ratio of passenger cars to construction trucks was approximated as a daily average. Moreover, they pointed out that there should be official statistics on the number of cars crossing each road per day. The experts were then asked to propose a set of mitigation strategies to reduce the environmental impact indicators resulting from operations in road construction projects. They finally identified nine mitigation strategies that should be implemented to reduce the environmental impacts. These strategies will be used in another model, currently under preparation, that is capable of reducing the amount of construction waste generated in road construction projects using system dynamics. The nine mitigation strategies are listed below:
1. Educate road construction project participants about the importance of managing environmental impact indicators and about their drawbacks for the environment and the health of civilians.
2. Infrastructure construction firms should adopt international standards, such as environmental management systems (EMS), which allow a firm to identify opportunities for reducing the environmental impact indicators of its day-to-day operations.
3. Infrastructure construction firms must adopt the latest environmental technologies to reduce the environmental impact indicators, such as those proposed in the environmental technology policies of the Environmental Protection Agency (EPA). The EPA program provides a verification process for the performance of innovative environmental technologies in a particular application. In the construction sector, the EPA program has largely been concerned with technologies for emission reductions, such as after-treatment technologies, the use of cleaner fuel, and emission-reducing fuel additives. The EPA rules for off-road diesel engines are the regulations with the biggest impact on emissions from construction equipment; thus, equipment manufacturers are required to ensure their products comply with these regulations through a standardized certification test.
4. Environmental incentives should be granted to construction firms that adopt best practices. These incentives include grant programs, which provide direct funding to equipment owners to replace old equipment with newer and cleaner equipment, and tax incentives, which offer tax exemptions, deductions, or credits for adopting emission-reducing technologies.
5. Increase the amount of construction equipment operating on natural gas rather than diesel; applying this strategy will lessen the environmental impacts.
6. The government should adopt legislation that encourages the expansion of biofuels and decreases the number of items of equipment operating on natural gas and diesel.
7. Replace non-renewable natural aggregates with recycled aggregates, in particular secondary aggregates obtained from industrial wastes and by-products.
8. Use recycling techniques in road rehabilitation projects, especially in-place recycling.
9. Use cold asphalt mixes instead of hot asphalt mixes.
Conclusions
A three-dimensional BIM module was developed that is capable of computing the time, life cycle cost, overall environmental impacts, and primary energy associated with road construction processes, using Revit 2015, the Athena Impact Estimator, and COPERT software version 4. The results obtained from the model demonstrated that the environmental impact indicators have negative consequences for both the environment and individuals. A set of mitigation strategies was developed to overcome these negative consequences; thus, the government should adopt strong legislation to encourage waste management procedures and a reduction in the overall environmental impacts. The model can be applied to any other type of construction project, and to any developing country, by changing the environmental impact indicator contributors, data, and experts' judgment.
Figure 1. Environmental Building Information Modeling (EBIM) methodology and model development.
Figure 3. User input for the time module interface.
Figure 4. User input for the environmental module interface.
Worked example: for a quantity of 1000 m³, a productivity of 70 m³/day per item of equipment, an efficiency of 0.8, and 6 items of equipment, T (the time needed to execute a construction activity) = 1000/(70 × 0.8 × 6) = 2.98 days ≈ 3 days. If the quantity of waste that will be dumped is 2000 m³/day, the productivity of a single item of equipment is 70 m³/day, the efficiency is 0.8, and the number of items of equipment is 5, then T_tra = 2000/(70 × 0.8 × 5) = 7.143 ≈ 8 days. Assume that the average fuel consumption from Table 2 is 3.5 gallons/hour, which converts to 13.25 L/hour. The number of working hours is 8 h/day, and the actual work hours represent 70% of this.
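To make the arithmetic explicit, a minimal sketch of these duration and fuel calculations follows. The function names and the rounding-up convention are illustrative assumptions, not part of the published model.

```python
import math

def activity_duration(quantity_m3, productivity_m3_per_day, efficiency, n_equipment):
    """Days needed to execute a construction activity, rounded up to whole days."""
    return math.ceil(quantity_m3 / (productivity_m3_per_day * efficiency * n_equipment))

def fuel_consumed_litres(duration_days, gallons_per_hour=3.5,
                         hours_per_day=8, utilisation=0.7):
    """Fuel used over the activity, converting US gallons/hour to litres/hour."""
    litres_per_hour = gallons_per_hour * 3.785
    return duration_days * hours_per_day * utilisation * litres_per_hour

print(activity_duration(1000, 70, 0.8, 6))   # 3 days, as in the example above
print(activity_duration(2000, 70, 0.8, 5))   # 8 days
print(round(fuel_consumed_litres(3), 1))     # fuel for the 3-day activity
```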
Figure 5. The time module calculations interface.
Figure 9. Contribution of different activities to the greenhouse gas (GHG) footprint.
Figure 10. Contribution of different activities to acidification potential (AP).
Figure 11. Contribution of different activities to particulate matter (HH).
Table 2. Average fuel consumption of construction equipment.
Table 5. Emission factors of pollutants from natural gas consumption (adapted from [26]).
Table 6. Road construction project characteristics.
Table 7. Road construction project activities and their equipment.
Table 8. Total life cycles in producing different environmental impact indicators and primary energy.
Table 9. Difference in environmental impacts between the model and Park et al. [9]. All values are expressed in percent.
Table 10. Difference in energy consumption between the model and Park et al. [9]. All values are expressed in percent.
Effect of 3-Dimensional Robotic Therapy Combined with Electromyography-Triggered Neuromuscular Electrical Stimulation on Upper Limb Function and Cerebral Cortex Activation in Stroke Patients: A Randomized Controlled Trial
(1) Background: This study investigated the effect of 3-dimensional robotic therapy (RT) combined with electromyography-triggered neuromuscular electrical stimulation (RT–ENMES) on stroke patients’ upper-limb function and cerebral cortex activation. (2) Methods: Sixty-one stroke patients were assigned randomly to one of three groups. The stroke patients were in the subacute stage between 2 and 6 months after onset. The three groups received 20 min of RT and 20 min of electromyography-triggered neuromuscular electrical stimulation (ENMES) in the RT–ENMES group (n = 21), 40 min of RT in the RT group (n = 20), and 40 min of ENMES in the ENMES group (n = 20). The treatments were for 40 min, 5 days per week, and for 8 weeks. Upper-extremity function was evaluated using the Fugl–Meyer assessment for upper extremity (FMA-UE), Wolf motor function test, and action research arm test (ARAT); cerebral cortex activation and motor-evoked potential (MEP) amplitude were evaluated before and after the study. (3) Results: The analysis showed significant changes in all evaluation items for all three groups in the before-and-after comparisons. Significant changes were observed in the FMA-UE, ARAT, and MEP; in the posttest, the RT–ENMES group showed more significant changes in the FMA-UE, ARAT, and MEP than the other two groups. (4) Conclusions: The study analysis suggests that RT–ENMES effectively improves upper-limb function and cerebral cortex activation in patients with stroke.
Introduction
Patients with stroke generally show hemiplegia on the damaged hemisphere's contralateral side and complex functional impairments, including spasticity, motor dysfunction, cognitive impairment, visual-perceptual impairment, and aphasia [1]. These disorders cause motor-control problems and are accompanied by impairments in upper-extremity muscle strength, stiffness, and sensation [2]. More than 85% of patients with stroke experience hemiplegia, and >70% have upper-limb function impairment [3]. Among patients with damage to upper-extremity function, approximately 5% show normal recovery, and 20% recover some upper-extremity function. Functional recovery of the upper extremities becomes more difficult as patients with stroke enter the chronic stage; therefore, upper-extremity recovery is an important goal in treating patients with stroke [4].
Impaired upper-extremity function in patients with stroke limits the ability to use the arm or hold and manipulate objects, providing a significant barrier to the patient's independent daily life and return to society, ultimately lowering their quality of life [5].
Therapeutic approaches to improve upper-extremity function in patients with stroke are being implemented in various ways; these interventions are based on neuroplasticity [6]. The recovery of upper-limb function in patients with stroke is closely related to intensive upper-limb practice with active neuromuscular activation through one's own efforts [7]. Among the various treatment techniques used to restore upper-limb function in patients with stroke, electromyography-triggered neuromuscular electrical stimulation (ENMES) is the most common. ENMES stimulates muscles through an electric current, activating specific muscles to generate upper-limb movements, restore motor function, trigger sensory feedback to the brain during muscle contraction, and promote motor relearning. It also contributes to improved muscle strength [8,9]. ENMES may also limit the problem of "learned non-use", in which patients with stroke gradually become accustomed to managing daily activities without using specific muscles, considered an important barrier to maximizing motor-function recovery after stroke [10]. It was reported that a single ENMES treatment was effective in improving upper-limb function in patients with subacute stroke hemiplegia [11]. ENMES is an effective treatment that improves activities of daily living by improving the stretching and grasping functions of the paralyzed upper extremities. However, other studies have highlighted disadvantages of ENMES [12]. Difficulties may arise when NMES is used alone to activate multiple muscle groups for functional activity. NMES makes it difficult to control the contraction rate of individual muscles for upper-limb movements with the desired kinematic properties, including speed, trajectory, and movement smoothness, primarily because of muscle contractions evoked during electrical stimulation [13]. In addition, it may not be effective in patients who lack concentration and interest in participating in the treatment. Recent research has addressed these shortcomings using 3-dimensional (3D)-based robotic therapy (RT) as a new treatment for patients with stroke. RT can improve concentration and motivation for treatment, and it is widely used in patients with limited upper-extremity movement [14]. RT can provide external auxiliary support for the upper extremities and help patients experience preprogrammed upper-extremity movements on the paretic side to improve the associated sensorimotor functions through repetitive practice [15]. RT is an innovative movement-based therapy that implements highly repetitive, intensive, adaptive, quantifiable, and task-specific arm training with feedback and motivation to enhance brain neuroplasticity [16-18]. Unlike humans, robotic devices programmed to perform in multiple functional modes ease the burden on rehabilitation providers and resource shortages without causing fatigue [19]. A 2018 Cochrane review found that electromechanical and robot-assisted arm training improved arm strength, arm function, and the performance of activities of daily living without increasing dropout rates or intervention-related adverse events compared with a variety of traditional treatment interventions [20].
Rehabilitation treatment using robots is an ideal tool for evaluating the movement patterns of each joint of the shoulder, elbow, wrist, and hand through dynamic measurements; it is controllable, repeatable, and quantifiable [21].However, robotic systems use motors to provide external assistive torque to the limbs and do not have the same effect as ENMES, which generates movement by directly activating an individual's specific muscles.In addition, activating specific muscle groups involved in the detailed joint movements of the upper extremities is limited.If the patient relies only on the robot's movements, the individual may not make an effort to participate in the movements [22].Currently, ENMES and RT are used separately in most rehabilitation treatments.Their combined effect on post-stroke paralyzed neuromuscular systems and rehabilitation has not been well evaluated.Treatment plans combining ENMES and RT must be justified to achieve optimized training effects because of each technique's advantages and disadvantages [23].This study aimed to quantify the complex effects of 3D-based upper-limb RT combined with ENMES on upper-limb function and cerebral cortex activation.In addition, we present evidence for a new treatment method for improving upper-extremity function in patients with hemiplegia after stroke.
Participants
The study participants were 69 subacute patients in the recovery stage within 6 months of stroke onset hospitalized at H Rehabilitation Hospital in Gyeonggi-do between January 2023 and June 2023. The subjects were patients diagnosed with stroke hemiplegia by a rehabilitation medicine doctor and were in the subacute phase 2 to 6 months after the onset of the disease. The evaluations and interviews in the process of selecting subjects were conducted by two occupational therapists with more than 10 years of experience. This study targeted patients who understood the purpose and content of the study and showed an active willingness to participate; informed consent was obtained from all patients. The sample size was set to 69 participants for the mean comparison (F-test) of the three groups using G-Power 3.1 with a significance level of 0.05, power of 0.9, and effect size of 0.25 [24]. To minimize selection bias, the participants were randomly divided, 23 per group, into the experimental group and control groups 1 and 2 using a computer random number table program. Figure 1 shows the Consolidated Standards of Reporting Trials (CONSORT) diagram for participant recruitment. This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Chosun University (2-1041055-AB-N-01-2023-35).
The inclusion criteria were (1) adults >19 years of age, (2) patients with subacute hemiparesis <6 months after stroke onset, (3) patients capable of following instructions with a Mini-Mental State Test-Korea version score of ≥24, (4) patients with wrist extensor manual muscle test grade ≤3 (F), and (5) patients whose stiffness in the upper extremity on the affected side was grade ≤2 on the modified Ashworth scale. The exclusion criteria were (1) attachment of an artificial pacemaker, (2) patients with aphasia who have difficulty communicating, (3) patients with severe pain in the upper extremity on the paralyzed side (visual analog scale score of ≥5), (4) cases of peripheral nerve damage, skin lesions, or electrical hypersensitivity of the wrist extensor muscles on the affected side, and (5) because this study targeted patients with stroke, other vulnerable patients were excluded, including pregnant women and infants/children.

Study Procedure
This study was a single-blind, randomized, controlled trial using a three-group pretest-posttest design. All experiments and evaluations were conducted by two occupational therapists: the experiment for all three groups was conducted by an occupational therapist with >10 years of clinical experience, and all evaluations were conducted by another occupational therapist with >10 years of clinical experience. This study divided the 69 hospitalized patients randomly into three groups according to the order of visits using a computer-based random number table. The three groups received traditional rehabilitation treatment for 30 min a day, 5 times a week, for 8 weeks. During the same period, the experimental group received ENMES and 3D-based upper-limb RT for 20 min each (40 min total); control group 1 received 3D-based upper-limb RT for 40 min, and control group 2 underwent an additional 40 min of ENMES treatment. The improvement of upper-extremity function was evaluated using the Fugl-Meyer assessment for upper extremity (FMA-UE), Wolf motor function test (WMFT), and action research arm test (ARAT). Cerebral cortex activation was evaluated using the motor-evoked potential (MEP) amplitude, measured with transcranial magnetic stimulation.
Electromyography-Triggered Neuromuscular Electrical Stimulation (ENMES)
This study used an EMG FES 2000 (Walking Man II, Iksan, Republic of Korea) as the ENMES.
Three surface electrodes were placed on the wrist extensor muscles, extensor pollicis brevis, and extensor pollicis longus (Figures 2 and 3).First, voluntary wrist extension was induced, and a reference threshold was set according to the level of action potential due to muscle contraction.When the action potential reached the reference threshold and electrical stimulation was induced, 0.1 s rise-phase, 5 s contraction-phase, and 2 s fall-phase processes were applied using 35 Hz, a pulse width of 200 µs, and a symmetric rectangular biphasic signal.The stimulation intensity was set between 15 and 30 mA.If the action potential generated through muscle contraction did not reach the reference threshold, electrical stimulation was set to appear automatically after 20 s; the reference threshold setting was reset for each treatment session [25].
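The threshold-triggered logic described above can be made concrete with a short sketch. This is an illustrative reconstruction only: the function, the numeric threshold, and the sampling assumptions are ours, not the EMG FES 2000 firmware.

```python
import numpy as np

# Illustrative constants drawn from the protocol above; the threshold value
# and units are assumptions (the device sets a per-patient reference threshold).
THRESHOLD_UV = 50.0                         # assumed reference threshold on the rectified EMG
TIMEOUT_S = 20.0                            # auto-stimulate if the threshold is not reached
RISE_S, CONTRACT_S, FALL_S = 0.1, 5.0, 2.0  # stimulation envelope phases

def stimulation_onsets(emg_envelope, dt):
    """Return onset times (s) at which stimulation would fire.

    Stimulation fires when the rectified EMG envelope reaches THRESHOLD_UV,
    or automatically after TIMEOUT_S without a crossing; each firing is
    followed by a lockout covering the rise/contraction/fall phases.
    """
    onsets, t_last, lockout_until = [], 0.0, -1.0
    for i, sample in enumerate(emg_envelope):
        t = i * dt
        if t >= lockout_until and (sample >= THRESHOLD_UV or t - t_last >= TIMEOUT_S):
            onsets.append(t)
            t_last, lockout_until = t, t + RISE_S + CONTRACT_S + FALL_S
    return onsets

# Toy run: 30 s of baseline noise with a voluntary burst at ~4 s.
rng = np.random.default_rng(0)
env = np.abs(rng.normal(10, 3, 30_000))
env[4_000:4_200] += 60
print(stimulation_onsets(env, dt=0.001))  # one triggered firing, then a timeout firing
```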
3D-Based Robotic Therapy (RT)
The 3D-based upper-limb RT used in this study was the ReoGo-J (ReoGoTM; Motorika Medical Ltd., Caesaria, Israel). This end-effector robotic system activates movements of the paralyzed shoulder, elbow, and forearm. During robotic training, the patients performed several tasks at an assistance level appropriate for their functional level, including forward reaching, abduction, and external rotation. Using a secondary controller, such as an active secondary controller, the ReoGo-J allows patients with stroke to move their damaged upper extremities independently [26]. The accuracy of the performance was aided by visual feedback from the ReoGo-J to the patient through a front-facing monitor. The mobility of the shoulder, elbow joints, and forearm allows for specific treatment of the upper limbs. Robots enable the execution of movements in 3D and spatial planes. Exercises can be performed variously using the forearm, wrist, or handgrips. Thus, the system allows users to perform different exercises to reach their goals through visual and auditory feedback on a connected computer screen [27]. Movement modes can vary from passive to active with different levels of intervention that the patient exerts on the robotic arm. The movement's range of motion can be adjusted according to the unique characteristics of each participant. The range of motion was measured and set according to the patient's personal upper-extremity function level; training was then conducted to improve movement through assistance in areas outside the range. Of 71 available tasks, 10 were selected and applied in this study. The tasks involved forward reaching (2D) and forward reaching (3D); abduction reaching, radial reaching (2D), radial reaching (3D), reaching in eight directions, reaching for the mouth, reaching for the head, and game mode (puzzle, kitchen) were selected and performed according to the patient's functional level [28]. The experimental group applied it for 20 min a day; control group 1 applied it for 40 min a day, 5 days a week, for 8 weeks (Figure 4).
Outcome Measures
Fugl-Meyer Assessment for Upper Extremity (FMA-UE)
The Fugl-Meyer assessment (FMA) evaluates motor function on the paralyzed side of patients with stroke based on Brunnstrom's six-step recovery process. This study evaluated only the upper-extremity items of the FMA (FMA-UE), comprising 33 items: 18 items for the shoulder, elbow, and forearm; 5 items for the wrist; 7 items for the hand and fingers; and 3 items measuring coordination. Each item is scored on a 3-point scale from 0 to 2 depending on whether the performance is completed: 0 indicates impossible to perform, 1 indicates partial performance, and 2 indicates complete performance. The maximum total score for upper-extremity function is 66 points. The inter- and intra-rater reliability of the FMA upper-extremity test was very high (0.97) [29].

Wolf Motor Function Test (WMFT)
The Wolf motor function test (WMFT) was developed in 1989 to evaluate upper-extremity motor function in patients with hemiplegia. The test measures each activity's exercise performance and performance time and consists of 17 movement tasks that range from simple to complex. Each task is scored on a 6-point scale ranging from 0 to 5, with 0 indicating no performance and 5 indicating normal movement; lower scores indicate worse motor performance. The inter-rater reliability of this tool's function score was 0.88; that of the performance time was 0.97 [30].
Action Research Arm Test (ARAT)
The action research arm test (ARAT) assesses the ability to perform gross movements of the upper extremities and to grasp, move, and release objects of various sizes, weights, and shapes. The ARAT evaluates upper-extremity function and release ability, and its development is based on Carroll's upper-extremity function test [31,32]. It consists of four sub-items with a total of 19 items: grasp (6 items), grip (4 items), pinch (6 items), and gross movements (3 items). Each item is scored on a 4-point scale (0-3): impossible to perform is 0 points, partial performance is 1 point, performing the test fully but taking a long time or showing difficulties is 2 points, and performing the test normally and completely is 3 points. The total score ranges from 0 points, for no movement, to 57 points, for performing all movements without difficulty. The intra-tester reliability of the ARAT was 0.99; the test-retest reliability was 0.98 [32].
Motor-Evoked Potential (MEP) Amplitude
The motor-evoked potential (MEP) amplitude was measured using the Nicolet Viasys Viking Select EMG EP system (San Diego, CA, USA). The MEP is an objective electrodiagnostic evaluation tool that induces specific peripheral muscle responses through transcranial magnetic stimulation of the cerebral motor cortex. For magnetic stimulation, the International Electroencephalograph 10-20 recording method was applied; the central part of the coil stimulator was placed at the Cz position. The subjects were placed in the supine position in an isolated space with the center of the coil contacting the cerebral hemisphere on the unaffected side. The MEP evaluation was conducted by a rehabilitation medicine doctor to ensure safety, and the subject's vital signs were monitored during the evaluation. The motor cortex representation of the first dorsal interosseous (FDI) muscle was located with the coil held at a 45° angle from the centerline, and the coil was moved gradually to determine the point of maximum response. The maximum magnetic field strength was 2.0 Tesla; the stimulation time was 0.1 ms [33]. The stimulation intensity was increased gradually from 80% to 100%, and the stimulation was performed multiple times. EMG values were measured by attaching a silver-silver chloride electrode to the FDI muscle on the affected side using the belly-tendon method and a ground electrode to the arm [34]. The resting motor threshold was defined as the minimum stimulation intensity at which MEPs > 50 µV were recorded at least 5 times during 10 stimulations. The MEP amplitude was determined by measuring the amplitude 12 times at 120% of the resting motor threshold [35]. The peak-to-peak amplitudes of the evoked MEPs from the contralateral target muscles were obtained. The inter-stimulus interval in our study was approximately 5 s to minimize carry-over effects of the previous stimuli. EMG values were recorded using the mobile Viking Select software 19.1; signals were amplified at 100 ms/div and filtered from 2 Hz to 10 kHz.
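As a concrete illustration of the resting motor threshold rule above (at least 5 of 10 stimulations evoking MEPs > 50 µV), the following sketch applies that criterion to hypothetical trial data; the function and data layout are illustrative assumptions, not the Viking Select workflow.

```python
def resting_motor_threshold(trials_by_intensity, cutoff_uv=50.0, min_hits=5):
    """Lowest stimulator intensity (% maximum output) at which at least
    `min_hits` of the recorded peak-to-peak MEP amplitudes exceed `cutoff_uv`.

    `trials_by_intensity` maps intensity -> list of amplitudes (µV) from
    ten stimulations, mirroring the criterion described above.
    """
    for intensity in sorted(trials_by_intensity):
        if sum(a > cutoff_uv for a in trials_by_intensity[intensity]) >= min_hits:
            return intensity
    return None  # threshold not reached within the tested range

# Hypothetical amplitudes for three intensities (10 trials each).
trials = {
    38: [20, 35, 55, 18, 42, 30, 61, 25, 33, 40],    # only 2 suprathreshold trials
    42: [52, 66, 48, 71, 55, 39, 80, 44, 58, 49],    # 6 suprathreshold -> threshold
    46: [90, 85, 110, 70, 95, 88, 120, 60, 105, 99],
}
print(resting_motor_threshold(trials))  # -> 42
```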
Statistical Analysis
The data collected in this study were statistically analyzed using SPSS (version 22.0; SPSS Inc., Chicago, IL, USA).Baseline variables were compared between groups using one-way analysis of variance (ANOVA) and the Kruskal-Wallis or Fisher's exact tests, depending on the characteristics of the variables.A paired t-test was used to compare the average changes in upper-extremity function and cerebral cortex activation before and after the intervention in the three groups.One-way ANOVA was used to compare the average changes in upper-limb function and cerebral cortex activation before and after the experiment and the amount of change among the three groups.A post hoc test was performed (assuming equal variance) using the Scheffe method; if an equal variance was not assumed, Dunnett's T3 method was used.All statistical significance levels were set at α = 0.05.
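The analysis pipeline above maps directly onto standard SciPy calls. The sketch below runs the paired t-tests and the between-group one-way ANOVA on hypothetical change scores; the group sizes, means, and spreads are invented for illustration, and the post hoc step is omitted.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post FMA-UE scores for the three groups (20 per group here).
rng = np.random.default_rng(0)
pre = {g: rng.normal(30, 5, 20) for g in ("RT-ENMES", "RT", "ENMES")}
post = {g: pre[g] + rng.normal(d, 2, 20) for g, d in zip(pre, (8, 4, 4))}

# Within-group change: paired t-test, as in the paper.
for g in pre:
    t, p = stats.ttest_rel(post[g], pre[g])
    print(f"{g}: paired t = {t:.2f}, p = {p:.4f}")

# Between-group comparison of change scores: one-way ANOVA.
changes = [post[g] - pre[g] for g in pre]
F, p = stats.f_oneway(*changes)
print(f"one-way ANOVA on change scores: F = {F:.2f}, p = {p:.4f}")
```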
Participant Characteristics
The general characteristics of the participants are presented in Table 1.A homogeneity test was conducted on all items among the three groups; no significant differences were observed (Table 1).
Comparison between the Experimental and Control Groups
In the before-and-after comparison of the three groups, all groups showed significant changes in the FMA-UE, WMFT, and ARAT, which are evaluations of upper-extremity function, and in the MEP, which is an evaluation of cerebral cortical activation. The pre- and post-comparison between the three groups showed significant changes in the FMA-UE, ARAT, and MEP, and the post hoc tests of the three evaluation items showed significant results in the comparisons of the RT-ENMES and RT groups and of the RT-ENMES and ENMES groups (Table 2).
Changes in the Groups before and after Intervention
The comparison between the three groups showed significant changes in the FMA-UE, ARAT, and MEP, and in the post hoc tests of the three evaluation items, significant results were found in the comparisons of the RT-ENMES and RT groups and of the RT-ENMES and ENMES groups (Table 3, Figure 5).
Discussion
Recovery of motor function after stroke is slower in the upper extremities than in the lower extremities; hand function recovery is among the slowest [36]. Therefore, the recovery of upper-extremity function is an important goal in rehabilitation treatment, and many therapeutic methods and approaches are being attempted in clinical practice for this purpose. This study combined ENMES and 3D-based upper-limb RT to investigate the effect on the recovery of upper-limb function and cerebral cortex activation in patients with stroke. A before-and-after comparison of the three groups in this study showed significant changes in upper-extremity function and cerebral cortex activation. This finding is consistent with many studies that showed positive effects from RT and ENMES interventions applied singly or combined on upper-limb function and brain activation in patients with stroke [10-12,14-18]. However, a comparison of the three groups revealed differences. A significant change was found in the pre- and post-average comparisons among the three groups in the FMA-UE and ARAT, which are upper-extremity function evaluations, but no significant change in the WMFT. In the post hoc tests of the FMA-UE and ARAT evaluations, the RT-ENMES group showed significant changes compared to the RT and ENMES groups. In addition, when comparing the change in the upper-extremity function evaluations between the three groups, the same significant change was shown in the FMA-UE and ARAT evaluations. In the post hoc test, the RT-ENMES group showed a significant change compared to the RT and ENMES groups.
RT and ENMES are more effective in improving upper-extremity function in patients with stroke when combined as a single intervention than when administered alone.Combining the two interventions improved upper-extremity function effectively, meaning that patients could make precise movements by controlling the specific muscle groups necessary for functional use.This factor appears important for ensuring that patients have a kinesthetic experience with the movements to be learned [37].RT repeatedly assisted upper-limb movements through external power; ENMES improved the kinesthetic experience of stimulated wrist extensor muscles.Therefore, the combined approach of RT and ENMES may bring additional benefits to upper-limb recovery [22].Important factors in improving upper-extremity function in patients with stroke include the willingness to participate in treatment, motivation, and interest.Parallel RT and ENMES interventions were related directly to these factors.ENMES is more effective than general NMES as an active treatment in which the patient participates through voluntary effort and motivation [38,39].Repetitive activities for voluntary motivation and afferent stimulation are effective for the neurological recovery of the paralyzed upper extremities [40].The 3D-based RT provides real-time feedback on upper-limb movements through a 3D computer screen, effectively improving concentration and movement coordination.The performance tasks were also continuous goal-oriented tasks; the participants could voluntarily participate in the intervention in a more interesting way because they chose the tasks they wanted and performed them in game mode among various tasks [41].The advantages of these two interventions are believed to combine.
Three assessment tools were used to evaluate the changes in upper-limb function.However, no significant difference was found between the groups on the WMFT; the results appeared to differ depending on the difficulty of performing the evaluation tool.Compared with the FMA-UE, the WMFT places more weight on items evaluating detailed hand movements and manipulative abilities.Because the study participants comprised patients with moderate impairment, the WMFT evaluation partially confirmed differences in upper-extremity function [42].In addition, the FMA-UE and ARAT correlate highly in the evaluation of upper-extremity function in patients with moderately impaired stroke [43].
MEP, which evaluates brain activation, changed significantly among the three groups; a significant change was confirmed in the post hoc test when the RT-ENMES group was compared with the RT and ENMES groups. This change is believed to have affected brain neuroplasticity and reorganization of the areas related to upper-limb function in the RT-ENMES group and may have contributed to the positive response to upper-limb functional use. Both RT and ENMES affect brain neuroplasticity. ENMES activates the motor nerve pathway from the peripheral nervous system to the central nervous system through muscle contraction on the paretic side. Fujiwara et al. showed reciprocal inhibitory modulation of short intracortical inhibition and finger extensor muscles resulting from a single NMES intervention, supporting the results of the present study [44]. RT is thought to enhance motor-nerve activation by providing additional systematic and repetitive movements [45]. Therefore, the parallel intervention of the two treatments was effective in recovering upper-limb movement through a positive synergistic effect on brain neuroplasticity in patients with stroke. The 3D-based RT provides visual feedback and immersion, allowing patients to participate more effectively in rehabilitation. ENMES provides direct motor feedback by inducing muscle contraction. A recent NMES study showed that NMES training targeting upper-extremity function in chronic stroke patients induced modulation of somatosensory-evoked potentials accompanying sensory recovery [46]. Combining these two feedbacks results in the interaction of the sensory-motor system, leading to an overall improvement in the MEP [20]. One limitation of this study is that it targeted patients in the subacute stage of stroke; therefore, a natural recovery effect is expected. The treatment effect may vary depending on factors that include the severity of the stroke, age, sex, side of the injury, and disease-onset period; therefore, additional research is necessary.
Although improvements in cerebral cortex activation and upper-extremity function were observed, actual changes in activities of daily living were not evaluated. The research period was short, at eight weeks, and lasting effects beyond this period were not confirmed; this short duration should be considered in future research.
Conclusions
This study showed that the combined intervention of ENMES and 3D-based upper-limb RT effectively improved upper-limb function and cerebral cortex activation in patients with stroke. This study provides a scientific basis for proposing a new concurrent intervention method to improve upper-limb function in patients with stroke.
Figure 3. The attached surface electrodes of ENMES. A and C: active electrode and reference electrode at the origin and insertion sites of the extensor pollicis brevis and extensor pollicis longus. B: EMG electrode.
Table 1. Characteristics of participants.
Table 2. Comparison between the experimental and control groups.
Table 3. Changes in the groups before and after intervention.
Resolution of ranking hierarchies in directed networks
Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit.
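To make the score concrete, the sketch below computes a generalized agony for a given ranking of the nodes of a directed graph. The penalty max(0, r(u) − r(v) + 1)^d follows the common convention of Gupte et al. for d = 1; treating the exponent d this way is our reading of the paper's generalization, so the exact form should be taken as illustrative rather than as the paper's definition.

```python
import networkx as nx

def agony(G, rank, d=1):
    """Agony of `rank` (dict node -> integer level) on a directed graph G.

    Each edge u -> v that does not go strictly upward in rank is penalised by
    (rank[u] - rank[v] + 1) ** d; d = 0 reduces to counting violating edges
    (the FAS-like score), d = 1 to the linear agony of Gupte et al.
    """
    return sum(
        (rank[u] - rank[v] + 1) ** d
        for u, v in G.edges()
        if rank[u] - rank[v] + 1 > 0
    )

# Toy example: a 3-level hierarchy with a single violating back-edge.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
r = {0: 0, 1: 1, 2: 2}
print(agony(G, r, d=1))  # only the edge 2 -> 0 is penalised: 2 - 0 + 1 = 3
print(agony(G, r, d=0))  # one violating edge
```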
Detailed proofs
In this Supporting material we present details and extended formulae for the propositions.
To start, we consider the values of agony for general d, depending on the choice of the alternative ranking:
• No inversion and splitting. When b < 0, each class is divided into 2^(−b) classes. As for the affinity matrix, the only part affected by the change in the ranking is the one above the diagonal, which has no impact on the computation of E[A_d(G, r^(b))].
• No inversion and merging. When b ≥ 0, a corresponding identity holds for any pair (i, j), from which the expression for the expected agony follows.
• Inversion and merging. When b ≥ 0, the expression for the agony of the inverted ranking is modified accordingly.
Then, we present the proofs of the propositions.
Proof of Proposition 1
We explicitly show that in the d = 1 case there exist critical values of s at which the planted ranking ceases to maximize the hierarchy, both for Twitter-like and Military-like hierarchies.
To determine the optimal number of classes, we first treat b as a continuous variable and compute the derivative of h̄_1 with respect to it; the unique critical point is denoted by b*. Note that it must hold that 0 ≤ b ≤ a, and we want to avoid the continuous relaxation at the boundaries, so we consider the extreme values separately. When p ≥ q > s (Twitter-like hierarchy), we first notice that the trivial ranking is never better than the one with two classes.
Moreover, we denote by s_2 the value of s such that the rankings with two and three classes have the same value of hierarchy. Similarly, one can find the critical value s_m such that the ranking with R − 1 classes has the same value of hierarchy as the planted one. Finally, we can combine the results to obtain the optimal number of classes for the direct ranking in the region p ≥ q > s. With a reasoning similar to the one carried out before, one obtains the optimal number of classes for the inverted ranking when p ≥ q > s. One can conclude that the optimal ranking for the Twitter-like hierarchy is the direct one, with a number of classes that depends on s.
When q = 0 (Military-like hierarchy) and whenever it is defined, we have ∂²h̄_1/∂b²|_(b=b*) > 0, so to obtain the optimal direct ranking we only need to check the extreme values of b, i.e., b = 0 and b = a, which determine the optimal number of classes for the direct ranking. Then, one can consider the inverted ranking. It is easy to verify that, also for the inverted ranking, splitting is never optimal on average.
As for merging, the optimal choice for b is well defined when s exceeds a threshold depending on a and p, and it satisfies a/2 ≤ b^(i,*) ≤ a; this determines the optimal number of classes for the inverted ranking, R̃^i. When s ≤ s_1, the planted ranking is optimal and the hierarchy is non-zero and decreasing in s. Denote by s^i the value of s at which the optimal direct and inverted rankings share the same value of hierarchy: when s > s^i, the optimal inverted ranking has a higher value of hierarchy than the planted one, which is the optimal direct one. Finally, one can write the expression for the estimate of the optimal value of h̄ in Proposition 1.
Proof of Proposition 2
We here proceed to show that in the d = 0 case (FAS), both for Twitter-like and Military-like hierarchies, agony is minimized by the ranking in which the nodes are partitioned into singletons. When b > 0, the derivative of h̄ with respect to b is negative, hence the planted ranking is better than any other ranking with a smaller number of classes. Instead, when b < 0, the derivative is again negative, so the optimal ranking is obtained for the limiting value of b, corresponding to the partition of the nodes into singletons. Similar computations show that no inverted ranking (i.e., for any b) ever has a higher value of hierarchy than the ranking just discussed.
One thus obtains the formula in Proposition 2. For the case d = 2, one can follow the same procedure shown for d = 1 and find the critical values for the resolution threshold.
When p ≥ q > s, the optimal number of classes is determined by the unique zero of the first-order derivative of h̄_2 with respect to b, together with the critical value s_(2,m) = 6(2^(1−a)(q − p) + 2p − q) / (−3 · 2^a + 2^(3a+1) + 4^a + 4), with s_(2,1) being the value of s such that h̄_2(b = a − 1) = h̄_2(b = a) = 0.
For the inverted ranking, one can instead compute the optimal choice for the number of classes, R̃^i. For any choice of p and a, it holds that s_(2,1) < s^i_(2,2), so the inverted ranking is optimal for s > s^i_(2,2).
Functional Brain Networks Develop from a “Local to Distributed” Organization
The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward ‘segregation’ (a general decrease in correlation strength) between regions close in anatomical space and ‘integration’ (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more “distributed” architecture in young adults. We argue that this “local to distributed” developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing “small-world”-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.
Introduction
The mature human brain is both structurally and functionally specialized, such that discrete areas of the cerebral cortex perform distinct types of information processing. These areas are organized into functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks [1-6].
A major portion of the work investigating the nature of functional human brain development is based on results from functional magnetic resonance imaging (fMRI) studies. By examining the differences in the fMRI activation profile of a particular brain region between children, adolescents, and adults, the developmental trajectory of that region's involvement in a cognitive task can be outlined [3,5,7-10]. These experiments have been crucial to our current understanding of typical and atypical brain development.
In addition to fMRI activation studies, the relatively new and increasingly utilized method of resting state functional connectivity MRI (rs-fcMRI) allows for a complementary examination of the functional relationships between regions across development.
Resting-state fcMRI identifies separable brain networks in adults
In previous work regarding task-level control in adults, we applied rs-fcMRI to a set of regions derived from an fMRI meta-analysis that included studies of control-demanding tasks. This analysis revealed that brain regions exhibiting different combinations of control signals across many tasks are grouped into distinct "frontoparietal" and "cingulo-opercular" functional networks [21,36] (see Table 1 and Figure 1). Based on functional activation profiles of these regions characterized in the previous fMRI study, the frontoparietal network appears to act on a shorter timescale, initiating and adjusting top-down control. In contrast, the cingulo-opercular network operates on a longer timescale, providing "set-initiation" and stable "set-maintenance" for the duration of task blocks [37]. Along with these two task control networks [21,36], a set of cerebellar regions showing error-related activity across tasks [36] formed a separate cerebellar network (Figure 1). In adults, the cerebellar network is functionally connected with both the frontoparietal and cingulo-opercular networks [21,22]. These functional connections may represent the pathways involved in task-level control that provide feedback information to both control networks [22,36].
Another functional network, and one of the most prominent sets of regions to be examined with rs-fcMRI, is the "default mode network". The default mode network (frequently described as being composed of the bilateral posterior cingulate/precuneus, inferior parietal cortex, and ventromedial prefrontal cortex) was first characterized by a consistent decrease in activity during goal-directed tasks compared to baseline [38,39]. Resting-state fcMRI analyses have repeatedly shown that these regions, along with associated medial temporal regions, are correlated at rest in adults [15,16,32,40]. While the distinct function of the default mode network is often linked to internally directed mental activity [39], this notion continues to be debated [25,32,41-44].
Spontaneous correlated activity within brain networks develops over age

In two prior developmental studies, we used rs-fcMRI to examine the development of the task control and cerebellar functional networks [22] and, separately, the default mode network [32]. The first study, addressing functional connectivity changes within and between the two task control networks and the cerebellar network [22], showed that the structure of these networks differed between children and adults in several ways (see [22]). In general, many of the specific changes showed trends of decreases in short-range functional connections (i.e., correlations between regions close in space) and increases in long-range functional connections (i.e., correlations between regions more distant in space). We suggested that these global developmental processes support the maturation of a dual-control system and its functional connections with the cerebellar network [22]. These results have now been replicated in a developmental resting connectivity study targeting sub-regions of the anterior cingulate [34].
The development of the default mode network was independently examined in a separate analysis [32]. In children, the default mode network was only sparsely functionally connected. Many regions were relatively isolated with few or no functional connections to other default mode regions. Over age, correlations within the default mode network increased, and by adulthood it had matured into a fully integrated system. Interestingly, as opposed to the task-control and cerebellar networks, very few short-range functional connections involving the default mode network regions existed in children. Hence the numerous strong short-range functional connections that decreased with age when investigating the dual control networks were not seen within the default network. In fact, some connections, such as the functional connection between the ventromedial prefrontal cortex (vmPFC; -3, 39, -2) and anterior medial prefrontal cortex (amPFC; 1, 54, 21) regions, which are fairly close in space (i.e., short-range at ~2.7 cm), had a substantial increase in correlation strength over development [32].
The observation that different analyses suggested different developmental features suggests a need for a more nuanced and integrated characterization of the development of functional networks. The goal of this manuscript is to employ several different network analysis tools to provide such a characterization. Visualization techniques such as spring embedding, and quantitative measures, including 'small world' metrics and community detection algorithms, will be applied to these networks in an attempt to identify principles for the changes observed across development.
Because of the overlapping and sometimes inconsistent use of terminology between neuroscience and the computational sciences, we will briefly define two terms for the purposes of this paper. The term "networks" will be used in the typical cognitive neuroscience formulation: a group of functionally related brain regions (as described above). The overall collection of regions (encompassing all four "networks") will be referred to as the "graph."
Author Summary

The first two decades of life represent a period of extraordinary developmental change in sensory, motor, and cognitive abilities. One of the ultimate goals of developmental cognitive neuroscience is to link the complex behavioral milestones that occur throughout this time period with the equally intricate functional and structural changes of the underlying neural substrate. Achieving this goal would not only give us a deeper understanding of normal development but also a richer insight into the nature of developmental disorders. In this report, we use computational analyses, in combination with a recently developed MRI technique that measures spontaneous brain activity, to help us understand the principles that guide the maturation of the human brain. We find that brain regions in children communicate with other regions more locally but that over age communication becomes more distributed. Interestingly, the efficiency of communication in children (measured as a 'small world' network) is comparable to that of the adult. We argue that these findings have important implications for understanding both the maturation and the function of neural systems in typical and atypical development.

Results

Spring-embedded visualization in combination with functional connectivity suggests that regions are linked more locally in childhood and are more distributed in adulthood

Graph theory analyses were applied to 210 subjects, aged 7-31, to investigate the emergence of temporal correlations in spontaneous BOLD activity between regions of the default mode, cerebellar, and two task-control networks. For this initial analysis, average age-group matrices were created using a sliding boxcar grouping of subjects in age-order (i.e., group 1: subjects 1-60, group 2: subjects 2-61, group 3: subjects 3-62, etc.). This generated a series of groups with average ages ranging from 8.48 to 25.48 years. Each group's average correlation matrix was converted into a graph, with correlations between regions greater than or equal to 0.1 considered as functionally connected.
In a first analysis, we used spring embedding, a visualization algorithm commonly used in graph theoretic analyses that aids in the qualitative interpretation of graphs (Figure 2 and Video S1) [45]. In spring embedding, the positions of the nodes (i.e., regions) in a graph are based solely on the strength and pattern of functional connections instead of their anatomical locations. In this procedure, each functional connection between a pair of nodes is treated as a spring with a spring constant related to the strength of the specific correlation. The entire system of pair-wise regional functional connections is then iteratively allowed to relax to the lowest global energetic state, i.e., groups of nodes that are strongly interconnected will be placed close together even if anatomically distant.
By creating spring embedded graphs for each of the sliding boxcar groups in age-order, a movie representation can be made that shows the development of the network relationships (from average age 8.48 to 25.48 years) (Video S1). The panels in Figure 2 provide snapshots from child, adolescent, and adult average ages in this movie. In both Figure 2 and Video S1, each node is color-coded in two ways: the outer border represents the general anatomical location (i.e., cerebral lobe) of the node; the inner core color represents the coding by "function" as defined by a large number of fMRI studies. One of the primary observations from the movie relates to this anatomical-functional distinction. In children, regions appear to be largely arranged by anatomical proximity. This arrangement can be seen in Figure 2 and Video S1 where, in children, regions can be readily grouped by cerebral lobe (outline colors of spheres in Figure 2 and Video S1). Over age, as functional connections mature, the node arrangements change such that anatomically close regions are now largely distributed across the graph layout, in a pattern more aligned with the mature networks' functional properties (core colors of spheres in Figure 2) [21,36-39]. Thus, across development, local clusters of regions "segregate" from one another and "integrate" into more distributed adult functional relationships with more distant regions.
A group of regions in the frontal cortex provides a particularly salient example of segregation. Frontal cortex contains regions that, in adults, are members of each of the task-control networks (e.g., dlPFC, frontal, dACC/msFC) and the default network (e.g., vmPFC, amPFC). As can be seen in Figure 2A (and Video S1), extensive correlations exist between most of these frontal regions in childhood (see blue cloud, Figure 2A). Over the developmental window afforded by the current dataset, some of these strong "frontal-frontal" correlations begin to weaken. With increasing age, regions in the frontal cluster segregate into three separate functional networks.
Accompanying this segregation is strong integration within the functional networks. The default mode network provides the clearest example. As illustrated in Figure 2B (and in Video S1), correlations between regions of the default mode network are weak (or absent) in children (red cloud, Figure 2B). Just as functional connections between the set of frontal regions are related to their anatomical proximity in children, the regions of the default mode network are each functionally connected to anatomical neighbors, and not to other members of the anatomically dispersed default mode network. Over age, however, the functional connections between default mode network regions mature and the network integrates into a highly correlated system in adults (Figure 2B and Video S1) (also see [32]). We note that these results were not specific to the 60-subject boxcar, and persist with smaller subject boxcars as well (see Video S2).
Quantitative modularity analysis confirms the qualitative observations
The qualitative observations noted above can be quantified using community structure detection tools. Using such an approach is particularly important because of the bias inherent in relying on qualitative methods for deciding whether groups of regions that appear to be clustered are indeed clustered, and because of the a priori definitions of each network. As stated by Newman: "A good division of a graph into communities is not merely one in which there are few edges between communities; it is one in which there are fewer than expected edges between communities. If the number of edges between two groups is only what one would expect on the basis of random chance, then few thoughtful observers would claim this constitutes evidence of meaningful community structure. On the other hand, if the number of edges between groups is significantly less than we expect by chance, or equivalently if the number within groups is significantly more, then it is reasonable to conclude that something interesting is going on" [46]. Among the many methods used to detect communities in graphs, the modularity optimization algorithm of Newman is one of the most efficient and accurate to date [46]. This method uses modularity, a quantitative measure of the observed versus expected intra-community connections, as a means to guide assignments of nodes into communities. We applied the modularity optimization algorithm to the group connectivity matrices derived from the sliding boxcars described above.

Figure 1 caption: Regions (listed in Table 1) are colored by network membership (red - default mode network; black - cingulo-opercular network; yellow - fronto-parietal network; blue - cerebellar network) and shown on an inflated cortical surface representation. doi:10.1371/journal.pcbi.1000381.g001
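For reference, the modularity Q referred to throughout is Newman's standard definition [46], restated here rather than newly derived. With adjacency matrix $A_{ij}$, node degrees $k_i$, total edge count $m$, and community assignments $c_i$:

$$Q = \frac{1}{2m}\sum_{ij}\left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta(c_i, c_j)$$

The term $k_i k_j / 2m$ is the expected number of edges between nodes $i$ and $j$ in a random graph with the same degree sequence, so $Q > 0$ indicates more intra-community edges than expected by chance.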
Measures of modularity (Q) were high, and did not show large changes across the age range (Figure 3A, Figure S1, and Figure S2). This result was not dependent on any particular threshold (Figure S1). Although comparable community structure was detected at all ages examined, the components of the communities varied by age. As per our qualitative approach described above, in children, region clusters were largely arranged by cerebral lobe, while in adults, regions were largely clustered by their adult functional properties (Figure 4A). Again, this result was not unique to any particular threshold (Figure 4B and 4C) or size of boxcar (Figure S3). We do note, however, that limited data points (i.e., subjects) are available between the ages of 16 and 19 years (see Materials and Methods) and that our estimate of the specific transitions within this period should be interpreted with care.
Over development, functional connections seem to evolve progressively along a "local to distributed" organizational axis

As previously reported [22,34], the segregation of closely apposed regions and the integration of distributed functional networks is associated with a general decrease in correlation strength between regions close in space and an increase in correlation strength between many regions distant in space. This trend is shown in Figure 5 and also Figure S4. Long-range functional connections tend to be weak, but increase over time (warm colors above the diagonal in Figure 5C and 5D and Figure S4C and S4D), integrating distant regions into functional networks. Short-range functional connections tend to be stronger (i.e., higher correlation strength) in children, yet those regions that do change predominantly become weaker over age (cool colors below the diagonal in Figure 5A and 5B and Figure S4A and S4B). Over age the graph architecture matures from a "local" organization to a "distributed" organization.

However, there are some interesting nuances to this trend that deserve mention. For instance, not all short-range functional connections decrease in strength over age (Figure 5A and 5B and Figure S4A and S4B). While few, some of the short-range functional connections, typically those in the same network, increase in strength over age (Figure 5A and Figure S4A). Similarly, although many long-range functional connections increase in strength, many others do not statistically change across development (Figure 5C and 5D and Figure S4C and S4D, grey connections).

Figure 2 caption: The dynamic development and interaction of positive correlations between the two task control networks, the default network, and the cerebellar network, shown using spring embedding. The figure highlights the segregation of local, anatomically clustered regions and the integration of functional networks over development. (A) and (B) represent individual screen shots (at average ages 8.48, 13.21, and 25.48 years) of dynamic movies (Video S1) of the transition in the network architecture from child to adult ages. Nodes are color coded by their adult network profile (core of the nodes) and also by their anatomical location (node outlines). Black - cingulo-opercular network; yellow - fronto-parietal network; red - default network; blue - cerebellar network; light blue - frontal cortex; grey - parietal cortex; green - temporal cortex; pink - cerebellum; light pink - thalamus. Connections with r ≥ 0.1 were considered connected. (A) In children regions are largely organized by their anatomical location, but over age anatomically clustered regions segregate. The cluster of frontal regions (highlighted in light blue) best demonstrates this segregation. (B) In children the more distributed adult functional networks are in many ways disconnected. Over development the functional networks integrate. The isolated regions of the default mode network in childhood (highlighted in light red) that coalesce into a highly correlated network best illustrate this integration. Over age node organization shifts from the "local" arrangement in children to the "distributed" organization commonly observed in adults. doi:10.1371/journal.pcbi.1000381.g002
'Small world' network properties are present in both children and adults
In a seminal 1998 paper, Watts and Strogatz noted that the topology of many complex systems can be described as "small world", a type of graph architecture that efficiently permits both local and distributed processing. Graphs with a regular, lattice-like structure have abundant short-range connections, but no long-range connections. Local interactions are thus efficient, but distributed processes involving distant nodes require the traversal of many intermediate connections. Conversely, completely randomly connected graphs are fairly efficient at transferring distant or long-range signals across a network, but they are poor at local, short-range information transfer.
Watts and Strogatz, and others, often describe "small world" properties with two metrics: the average clustering coefficient and average path length of a graph. The clustering coefficient measures how well connected the neighbors of a node are to one another. The average path length measures the average minimum number of steps needed to go between any two nodes. Lattices, optimized for local processes, have high average clustering coefficients but long average path lengths. Conversely, random graphs, which have no preference for short-range connections, have low average clustering coefficients and short average path lengths, making them well suited for communication between distant nodes. One of Watts & Strogatz's key insights was that by randomly rewiring a relatively small number of connections in a lattice graph (i.e., introducing a few long-range connections), a graph could retain its high average clustering coefficient, but dramatically reduce its average path length, thereby enabling efficient short- and long-range processes. It is this hybrid graph topology (i.e., high clustering coefficients and short path lengths) that matches the observed "small world" networks in many complex systems [47].
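As a concrete illustration, the following is a minimal sketch, in Python with networkx rather than the authors' MATLAB, of how these two metrics can be computed and compared against degree-matched random graphs; the 0.1 threshold mirrors the analysis in the text, while the input `r_avg` and the helper names are hypothetical.

```python
# A minimal sketch of the small-world comparison described above.
# `r_avg` is assumed to be a group-average correlation matrix.
import numpy as np
import networkx as nx

def to_graph(r_avg, threshold=0.10):
    """Binarize a correlation matrix: r >= threshold counts as an edge."""
    adj = (np.asarray(r_avg) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                      # no self-connections
    return nx.from_numpy_array(adj)

def small_world_metrics(G, n_random=100):
    """Average clustering coefficient and path length, plus the mean of
    the same metrics over degree-matched random graphs."""
    def metrics(H):
        C = nx.average_clustering(H)
        # Path length is computed on the largest connected component in
        # case the thresholded graph is fragmented.
        giant = H.subgraph(max(nx.connected_components(H), key=len))
        return C, nx.average_shortest_path_length(giant)

    C, L = metrics(G)
    rand = []
    for _ in range(n_random):
        # Degree-preserving randomization via repeated double-edge swaps.
        R = nx.double_edge_swap(G.copy(), nswap=10 * G.number_of_edges(),
                                max_tries=100 * G.number_of_edges())
        rand.append(metrics(R))
    C_rand, L_rand = np.mean(rand, axis=0)
    # 'Small world' signature: C well above C_rand, L close to L_rand.
    return C, L, C_rand, L_rand
```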
As previously reported [21,48,49], relative to comparable lattice and completely random graphs, the adult graph architecture showed high clustering coefficients and short path lengths, consistent with the 'small world' architecture (Figure 3B and 3C). Interestingly, in children (i.e., as early as age 8), these metrics were quite similar to those of adults (Figure 3B and 3C), and over age there was very little change in path lengths and clustering coefficients relative to comparable random and lattice graphs. It was originally anticipated that path lengths would decrease over age as long-range anatomical connections were added. Yet even at the youngest ages examined, path length was already quite short, near those of random graphs. Importantly, these results were not dependent on any particular threshold (Figure S5). We note that while the results shown here are largely descriptive, the error bars provided in Figure 3B and 3C, constructed from random graphs, underscore the difference between random configurations and the observed trends.

Figure 3 caption (fragment): (Note: All age graphs to the right of the asterisk show 100% graph connectedness, meaning there is a path between every node in the network. Graphs to the left of the asterisk are 78% graph connected, on average. For details see Materials and Methods and Figure S1.) (B) Relative to equivalent lattice and random networks, average clustering coefficients remain high across age and do not appear to be different between children and adults. (C) Relative to equivalent lattice and random networks, average path lengths remain low across age and do not appear to be different between children and adults. High clustering coefficients and short path lengths suggest a 'small world' organization that does not change across the age range studied here. 95% confidence intervals are also plotted for clustering coefficients and path lengths for the generated random graphs. doi:10.1371/journal.pcbi.1000381.g003

Figure 4 caption: Despite high modularity in both children and adults, community assignments change over age. As in Figure 3, a modularity algorithm was applied to each matrix of the sliding boxcar across age (A) and with varying thresholds (B, C). Region legends are color coded by anatomy on the left and by adult functional network on the right (colors match Figure 2). (A) The left side of the box represents the community assignments for the youngest subjects (i.e., subjects 1-60), and the right side of the box represents the community assignments for the oldest subjects (i.e., subjects 151-210); an age scale is presented at the top. As can be seen in the left of panel A, the modularity algorithm divided regions into communities arranged by anatomical proximity. Over age this organization transitions into modules arranged by adult functional properties. For this central panel a threshold of r ≥ 0.1 was used to denote connected versus non-connected region pairs. (B) Community assignments of the youngest boxcar (subjects 1-60), at thresholds ranging from 0 to 0.20. Regardless of threshold, regions are largely organized by anatomical proximity in this youngest age group. (C) Community assignments of the oldest boxcar (subjects 151-210), at thresholds ranging from 0 to 0.20. Regardless of threshold, regions are largely organized by adult function in this oldest group. doi:10.1371/journal.pcbi.1000381.g004

Figure 5 caption: The "local to distributed" maturation is supported by a general decrease in functional connections between regions close in space, an increase in functional connections between regions distant in space, and the maintenance of several short- and long-range connections that do not change with age. In this figure, functional connections are divided based on distance. Short-range functional connections are in (A,B), long-range functional connections in (C,D) (y-axis, adult r-values; x-axis, child r-values). Warm colors represent functional connections that are significantly greater in adults than children. Cool colors represent functional connections that are significantly greater in children than adults. Functional connections that do not significantly change with age are plotted in grey. As can be seen in (A,B), the majority of short-range functional connections that significantly change with age tend to decrease. The majority of long-range functional connections (C,D) that significantly change with age increase over time. However, many long- and short-range functional connections do not significantly change over age (grey). In addition, while few, some long- and short-range functional connections go against the general trend of short-range connections "growing down" and long-range functional connections "growing up." See Figure S4 for an extended version of this figure, which includes a visualization of these functional connections on a semi-transparent brain. doi:10.1371/journal.pcbi.1000381.g005
Discussion
The combination of graph theoretic analyses and rs-fcMRI allowed for the examination of the dynamic relationships between multiple networks over development. In the current manuscript, we examined four networks: the cingulo-opercular, fronto-parietal, cerebellar, and default mode networks. As illustrated by qualitative observations in Figure 2 (and Video S1) and modularity analysis in Figure 4, locally organized groups of regions "segregate" over development into multiple distributed adult functional networks, while the functional networks themselves "integrate." These results support the hypothesis that functional brain development proceeds from a "local" to "distributed" organization. However, despite the "local to distributed" developmental trend, 'small world' organizational properties are present in both the 7-9 year old child and the adult graph architectures.
In the following section, these results are discussed considering two postulates: (1) the temporal pattern of spontaneous activity measured by rs-fcMRI represents a history of repeated coactivation between regions, and (2) the brain attempts to use the most efficient processing pathways available when faced with specific processing demands.
rs-fcMRI may reflect an interaction between the maturing neural substrate and the use of efficient pathways for general task completion

Spontaneous synchronized neural activity has been used to study various aspects of adult brain organization since as early as 1875 [50-53]. However, despite the passing of over 130 years since its initial use, there remains uncertainty as to the role of intrinsic spontaneous brain activity in brain function. In adults, spontaneous correlated activity has been suggested to be important for gating information flow [54], building internal representations [43,44,54], and maintaining mature network relationships [43,44,54]. Much less work has been done in regards to development, but there are suggestions that spontaneous activity is important for the establishment of early cortical patterns (e.g., ocular dominance columns) [55-58] and may over time represent (in a Hebbian sense) a history of repeated co-activation between regions [21,22,27,32,34,59,60]. Within this framework, the changes in the correlation structure of spontaneous activity over development seen in this report may provide insight regarding the arrangement by which brain regions are communicating in children compared to adults.
If we consider the previously mentioned postulates, our results suggest that, typically, the most efficient way for children to respond to processing demands is to utilize more "local" interactions than in adulthood. That is, in childhood there is relatively greater co-activation of anatomically proximal regions than for adults with similar processing demands. A clear example of this is seen in Brown et al. [3], where identical task performance on lexical processing tests strongly activates a large set of visual regions in children, but strong visual activation is much more restricted in adults. These relationships may be reflected in correlated spontaneous activity measured via rs-fcMRI. The correlations in our youngest children would then represent the anatomical and spontaneous activity-defined initial regional relationships plus 7 years of experience-dependent Hebbian processes tuning these developing connections.
Changes in the neural substrate occur concurrently with changes in resting state functional connectivity. Under the current proposal, it is not clear why resting state functional connectivity would change so dramatically over the reported age range: if the correlations we find in children already represent 7 years of experience-driven tuning, why should additional experience lead to a distributed solution? One could argue that the general experiential environment and processing demands systematically change to encourage increasing use of long-range, distributed processing relationships. We believe, however, that at least part of the explanation lies in the interaction of these "environmental demands" with maturational changes of the neural substrate.
By approximately 9 months of age the elaboration of most, if not all, short- and long-range axonal connections between brain regions is thought to be complete [61]. However, synapse formation, the tuning of synaptic weights, synaptic pruning, and myelination all have unique developmental timecourses that extend further into development. For instance, from approximately 30 weeks gestation through the first two postnatal years there is substantial growth in the number of synaptic contacts throughout the cortex [62]. This growth is followed by a protracted period of synaptic pruning that reaches adult levels in the late second decade of life [63-65]. Importantly, pruning is selective, not random. Pruning is also largely activity dependent, and is considered critical in the differentiation of distinct functional areas [56,66,67].

Another commonly referenced postnatal event is myelination. As with synaptic pruning, myelination continues to occur through young adulthood. Increased myelination is thought to proceed from primary sensory and motor regions to association areas [68-71], roughly following the hierarchical organization introduced by Felleman and Van Essen [72]. (Note that while the most frequently referenced neuroanatomical changes that occur throughout development have been highlighted here, there are several others that deserve consideration [62,73-75].)
Changes in the neural substrate over development may lead to more efficient neural pathways for general task completion. Considering the continually changing nature of the neural substrate over development, a context for changes in rs-fcMRI can be created. For instance, as previously mentioned, increased signal propagation through the addition of a myelin sheath likely allows for more efficient communication between distant regions [22,32,34,76]. Such facilitated communication may promote interactions between brain regions that previously had substantially less efficient communication, allowing for a more effective "solution" to any particular set of processing demands. In addition, as new, more efficient pathways become prominent, older inefficient connections likely decrease in use, leading to experience/activity-dependent decreases of specific area-area connection strengths.
In other words, as myelination continues through development and allows for more effective long-distance neural pathways, repeated co-activation becomes more prevalent between many distant regions, and less so between many locally aligned regions, thus changing synaptic efficiencies. The statistical histories of such interactions, stored as relative synaptic weights, are then revealed via rs-fcMRI, and would lead to the "local to distributed" organization principle seen here.
It is important to note, however, that improved communication between distant regions (via myelination) would not necessarily cause a wholesale decrease in connections that were originally organized locally. Many of these local connections likely continue to contribute to the most efficient "solution" for any particular task and remain in use. In fact, the change in dynamics may actually contribute to distinct local connections increasing with time. This possibility may underlie the increases in strength of specific short-range connections seen in Figure 5 and Figure S4.
Along the same lines, as Fuster [77] has pointed out, we note that myelination is not an indispensable property of utilized axons. Unmyelinated axonal connections are still quite capable of transmitting information. For this reason, the first 7 years of experience-dependent statistical learning may indeed result in increases in long-distance functional connections well before mature myelination is in place, an idea consistent with the short average path lengths found in even the youngest networks we examined (Figure 3). Thus, it is not surprising that some long-distance functional connections are present in children and do not statistically change with age (Figure 5 and Figure S4).
We note that recent results in the aging literature suggest that many of the trajectories observed in the current manuscript continue inversely with advancing age [24,78]. That is, with aging, the functional organization, revealed via rs-fcMRI, becomes less distributed and more local. Thus the dynamic interactions we describe here likely continue as part of normal senescence [78].
The results presented here are consistent with other views of functional brain development

The "local to distributed" organizing principle resonates with recent suggestions that perceptual and cognitive development involve the simultaneous segregation and integration of information processing streams [1,22,76,79,80]. For instance, the "interactive specialization" hypothesis advanced by Johnson and colleagues is consistent with these findings [1,81-83]. Johnson points out that cortical regions and pathways have biased information processing properties at birth due to anatomic connectivity, yet they are much less selective than in adults (i.e., they are "broadly tuned").
Interactive specialization predicts that shortly after birth, large sets of regions and pathways will be partially active during specific task conditions. However, as these pathways interact and compete with each other throughout development, selected regions will come online, be maintained, or become selectively activated or "tuned" as particular pathways dominate for specific task demands. Thus, regional specialization relies on the evolving and continuous interactions with other brain regions over development. If one extends this framework to the network level, the increases, decreases, and maintenance of correlation strengths seen between regions may reflect "specialization" of specific neural pathways to form the functional networks seen in adults.
Graph analysis suggests that small world properties are present in late childhood
The "local to distributed" developmental trajectory, discussed above, seems to be driven by an abundance of local, short-range connections that generally decrease in strength over age as well as distant, long-range connections that generally increase in strength over age. Given the more prevalent short-range connections in children, we expected a more lattice-like structure, with high clustering coefficients and relatively high path lengths. The results, however, clearly indicated that path lengths were near those of equivalent random graphs, and that the child functional networks are already organized as small world networks.
This result can be explained in the context of the re-wiring procedure discussed by Watts and Strogatz [47]. Randomly rewiring a small percentage of local connections in a lattice has a mild linear effect on clustering coefficients, but a highly non-linear effect on path lengths. This is to say that by rewiring a small fraction of a lattice's connections, substantial drops in path lengths can be seen, with almost no change in the clustering coefficient. In late childhood, as shown in Figure 5 and Figure S4, there are already a significant number of long-range short cuts present. These long-range functional connections are likely responsible for the relatively short path lengths in the child group. We anticipate that if the developmental trajectory of short- and long-range functional connections were extended to younger ages, fewer long-range 'short-cut' functional connections would be present, and more short-range functional connections would exist. Hence, the path lengths at these younger ages (<7 years old) would likely be longer. Nevertheless, by 8 years old, the networks already display 'small world' properties similar to those of adult networks, indicating that efficient graph structures are already in place for both local and distant processing, though they are organized differently than in later development.
While we identified small world properties in both child and adult graphs, the size of the graph is relatively small with only 34 nodes. Therefore, it is possible that with an increased number of nodes the specific results identified here will change, a possibility that will be addressed in further studies.
Need for generalization to other regions and modalities
The regions used in the present analyses were all derived from adult imaging studies. It seems likely that additional regions may be included in one or more of these networks in childhood. In addition, individual differences with regards to the regions and networks chosen likely exist. Future work that includes regions derived from studies using a child population and obtaining the functional connections within subjects from individually defined functional areas may refine the networks and developmental timecourses presented here [84].
Of note, resting-state functional connectivity has been reported to be constrained by anatomical distance (i.e., correlations between regions decrease as a function of distance following an inverse square law) [85]. Thus, if a shift in this general bias occurred with development, then it is feasible that some of the changes seen here could be related to such a shift. With this said, the specificity of the connection changes observed over age, the number of connections that run opposite to the general trends, and the similarity of the distance relationship in connectivity between children and adults when plotting all possible connections (see Figure S6) all suggest that the majority of changes observed here are not related to changes in this bias. In addition, while there are now reports suggesting that changes observed over development with blood oxygen level dependent (BOLD) fMRI are not the product of changes in hemodynamic response mechanisms over age [86,87], differences in the hemodynamic response function between children and adults could conceivably affect our results [88].
A limitation of rs-fcMRI in general is the restricted frequency distribution that can be examined. rs-fcMRI is used to measure correlations in a very low frequency range, typically below 0.1 Hz. Dynamic changes in correlations in other frequency distributions could exist (for example see [89]). It is also possible that there are undetected developmental changes in power across frequency bands orthogonal to the changes visualized here. The combination of other imaging and psychometric techniques with rs-fcMRI will likely help address these considerations. Characterizing additional networks and how these changes map onto behavior will also help further characterize functional brain development. Specifically, future work that demonstrates a direct relationship between behavior and the developmental trajectory seen here with rs-fcMRI, is presently needed to confirm (or reject) many of the theories presented here and elsewhere. Importantly, consideration of these issues need not be limited to developmental studies, but should be considered whenever investigators compare groups with rs-fcMRI.
Nonetheless, the general results presented here represent a strong set of hypotheses to be tested in broader domains and larger-scale brain graphs. First, that by age 8 years, regional relationships, as defined by rs-fcMRI, are organized as small-world-like networks, which, relative to adults, emphasize local connections. Second, that for the same regions, adult networks show similar network metrics but with regional relationships that have a longer-range, more distributed structure reflecting adult functional histories. In other words, the modular structure of large-scale brain networks will change with age, but even school age children will show a relatively efficient processing architecture.
Materials and Methods

Subjects
Subjects were recruited from Washington University and the local community. Participants were screened with a questionnaire to ensure that they had no history of neurological/psychiatric diagnoses or drug abuse. Informed consent was obtained from all subjects in accordance with the guidelines and approval of the Washington University Human Studies Committee.
Data acquisition and pre-processing

fMRI data were acquired on a Siemens 1.5 Tesla MAGNETOM Vision system (Erlangen, Germany). Structural images were obtained using a sagittal magnetization-prepared rapid gradient echo (MP-RAGE) three-dimensional T1-weighted sequence (TE = 4 ms, TR = 9.7 ms, TI = 300 ms, flip angle = 12°, 128 slices with 1.25 × 1 × 1 mm voxels). Functional images were obtained using an asymmetric spin echo echo-planar sequence sensitive to blood oxygen level-dependent (BOLD) contrast (volume TR = 2.5 s, T2* evolution time = 50 ms, α = 90°, in-plane resolution 3.75 × 3.75 mm). Whole brain coverage was obtained with 16 contiguous interleaved 8 mm axial slices acquired parallel to the plane transecting the anterior and posterior commissures (AC-PC plane). Steady state magnetization was assumed after 4 frames (~10 s).
Functional images were first processed to reduce artifacts [23,90]. These steps included: (i) removal of a central spike caused by MR signal offset, (ii) correction of odd vs. even slice intensity differences attributable to interleaved acquisition without gaps, (iii) correction for head movement within and across runs and (iv) within run intensity normalization to a whole brain mode value of 1000. Atlas transformation of the functional data was computed for each individual via the MP-RAGE scan. Each run then was resampled in atlas space (Talairach and Tournoux, 1988) on an isotropic 3 mm grid combining movement correction and atlas transformation in one interpolation [91,92]. All subsequent operations were performed on the atlas-transformed volumetric timeseries.
rs-fcMRI pre-processing
For rs-fcMRI analyses as previously described [16,23], several additional preprocessing steps were used to reduce spurious variance unlikely to reflect neuronal activity (e.g., heart rate and respiration). These steps included: (1) a temporal band-pass filter (0.009 Hz < f < 0.08 Hz) and spatial smoothing (6 mm full width at half maximum), (2) regression of six parameters obtained by rigid body head motion correction, (3) regression of the whole-brain signal averaged over the whole brain, (4) regression of ventricular signal averaged from ventricular regions of interest (ROIs), and (5) regression of white matter signal averaged from white matter ROIs. Regression of first-order derivative terms for the whole-brain, ventricular, and white matter signals was also included in the correlation preprocessing. These preprocessing steps likely decrease or remove developmental changes in correlations driven by changes in respiration and heart rate over age.
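To make the pipeline concrete, here is a minimal sketch of the filtering and nuisance-regression steps in Python with numpy/scipy (an assumed re-implementation for illustration, not the authors' actual code); the array shapes and helper names are hypothetical.

```python
# A minimal sketch of steps (1)-(5): band-pass filtering followed by
# nuisance regression with first-derivative terms.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ts, tr=2.5, low=0.009, high=0.08, order=2):
    """Zero-phase Butterworth band-pass; `ts` has shape (time, voxels)."""
    nyquist = 0.5 / tr
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, ts, axis=0)

def regress_nuisance(ts, motion, whole_brain, ventricle, white_matter):
    """Regress out the six motion parameters and the whole-brain,
    ventricular, and white-matter signals plus their first derivatives."""
    signals = np.column_stack([whole_brain, ventricle, white_matter])
    derivs = np.vstack([np.zeros((1, signals.shape[1])),
                        np.diff(signals, axis=0)])
    X = np.column_stack([np.ones(len(ts)), motion, signals, derivs])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta                     # residuals carry forward
```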
Extraction of resting state timeseries
Resting state (fixation) data from 210 subjects (66 aged 7-9; 53 aged 10-15; 91 aged 19-31) were included in the analyses. For each subject at least 555 seconds (9.25 minutes) of resting state BOLD data were collected. 34 previously published regions comprising 4 functional networks (i.e., cingulo-opercular, fronto-parietal, cerebellar, and default networks; see Table 1 and Figure 1) were used in this analysis [16,21,22,37]. For each region, a resting state timeseries was extracted separately for each individual. For 10 adult subjects, resting data were continuous. For the remaining 200 subjects, resting periods were extracted from between task periods in blocked or mixed blocked/event-related design studies [22]. These concatenated-extracted rest periods were shown to be equivalent to continuous resting data in a recent study describing this method [23]. In addition, several previous findings using this technique [21,22,32] have now been replicated using continuous resting blocks [27,33,34] and other continuous resting data [89].
Generation of average group correlation matrices across development
To examine the functional connections within and between the large set of regions used in this manuscript we chose to use graph theory. Graph theory is particularly well suited to study large-scale systems organization across development, but requires the data be organized into specific correlation matrices. To do this, for each of the 210 subjects, the resting state BOLD timeseries from each region was correlated with the timeseries from every other region, creating 210 square correlation matrices (34 × 34). Average group matrices were then created using a sliding boxcar grouping of subjects in age-order (i.e., group 1: subjects 1-60, group 2: subjects 2-61, group 3: subjects 3-62, ..., group 151: subjects 151-210), thus generating a series of groups with average ages ranging from 8.48 years old to 25.48 years old, with each group composed of 60 subjects. Average correlation coefficients (r) for each group were generated from the subjects' individual matrices using the Schmidt-Hunter method for meta-analyses of r-values [21,85,93]. In cases when the terms "child" or "adult" are used, the matrices or results referred to are the first and last of the sliding boxcar groups respectively, i.e., the child group is the youngest 60 subjects, with an average age of 8.48 years old, and the adult group is the oldest 60 subjects, with an average age of 25.48 years old.
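The boxcar averaging itself is simple enough to sketch. Below is a minimal Python/numpy illustration (hypothetical helpers, not the authors' MATLAB code); `subject_matrices` is assumed to be an age-ordered list of the 210 subjects' 34 × 34 correlation matrices.

```python
# A minimal sketch of the sliding-boxcar group averaging.
import numpy as np

def schmidt_hunter_average(mats, n_obs=None):
    """Schmidt-Hunter meta-analytic average: a sample-size-weighted mean
    of r-values. With equal timeseries lengths across subjects this
    reduces to the plain mean of the correlation matrices."""
    mats = np.asarray(mats, dtype=float)
    if n_obs is None:
        return mats.mean(axis=0)
    w = np.asarray(n_obs, dtype=float)
    return (w[:, None, None] * mats).sum(axis=0) / w.sum()

def sliding_boxcar(subject_matrices, width=60):
    """Yield one average matrix per boxcar of `width` subjects
    (group 1: subjects 1-60, group 2: subjects 2-61, ...)."""
    for start in range(len(subject_matrices) - width + 1):
        yield schmidt_hunter_average(subject_matrices[start:start + width])
```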
Spring-embedded graph theoretic layout and visualization
To generate a dynamic representation of the functional connections between regions across development, each of the groups' correlation matrices was converted into a thresholded graph, such that correlations of r ≥ 0.1 were considered connections, while correlations lower than the threshold were not connections.
For our initial analyses [21,22,32], graphs in child and adult groups were presented in either a pseudo-anatomical fashion or in their actual 3D positions (in Talairach space). Here we add another representation often used in graph theory: spring embedding. In this procedure, a spring constant is added to all of the connections in the network, allowing the pairwise regional connections to relax to their lowest energetic state. The algorithm applied in the present analysis is known as Kamada-Kawai [45], one of the most commonly used strategies for displaying graph network data. In brief, each functional connection between a pair of nodes is treated as a spring with a spring constant related to the strength of the specific correlation. The nodes are then randomly placed in a plane, which places high strain on the "spring-loaded" connections. The algorithm then iteratively adjusts the positions of each node to reduce the total energy of the system to a minimum. As the pair-wise connections relax to their lowest energetic states the "natural" configuration of the network is revealed. By observing multiple "spring embedded" graphs across the subjects in age-order, approximately representing a 6 month temporal sliding boxcar (i.e., group 1: subjects 1-60, group 2: subjects 2-61, etc.), a movie representation can be made that shows the development of the full system (see Video S1). The interpolations, algorithm application, and movie production were performed using MATLAB (The Mathworks, Natick, MA) and SoNIA (Social Network Image Animator) [94].
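For readers who want to reproduce a single frame, here is a minimal sketch using networkx's Kamada-Kawai implementation as a stand-in for the MATLAB/SoNIA pipeline (the function name and edge attribute are hypothetical). Stronger correlations are mapped to shorter target distances so that strongly connected regions come to rest close together.

```python
# A minimal sketch of one spring-embedded layout frame.
import numpy as np
import networkx as nx

def spring_embed(r_avg, threshold=0.10):
    r = np.asarray(r_avg)
    G = nx.Graph()
    G.add_nodes_from(range(len(r)))
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            if r[i, j] >= threshold:
                # Target distance inversely related to correlation.
                G.add_edge(i, j, dist=1.0 / r[i, j])
    # kamada_kawai_layout minimizes the mismatch between the 2D layout
    # distances and shortest-path distances along the 'dist' weights.
    return nx.kamada_kawai_layout(G, weight="dist")
```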
Modularity analysis
Communities for our graph were detected with the modularity optimization method of Newman [46]. The modularity, or Q, of a graph is a quantitative measure of the number of edges found within communities versus the number predicted in a random graph with equivalent degree distribution. A positive Q indicates that the number of intra-community edges exceeds those predicted statistically. A wide range of Q may be found for a graph, depending on how nodes are assigned to communities. The set of node assignments that returns the highest Q is the optimal community structure sought by the modularity optimization algorithm, which follows a recursive two-step process. First, a modularity matrix similar to a Laplacian is constructed from the nodes in question, comparing observed versus expected edges. If this matrix has a positive eigenvalue, the eigenvector of the largest eigenvalue is used to split the nodes into two subgraphs, and Q is calculated. Second, nodes are swapped individually between the two subgraphs to see if an increase in Q can be found. Once a maximal Q is found from these swaps, the process is repeated on the subgraphs. At any point in this process, if the matrix has no positive eigenvalues, or if a proposed split does not increase Q, the subgraph is set aside, and defines a community.

To detect communities in our networks over a range of ages, we used the sliding boxcar group average correlation matrices described above in "Generation of average group correlation matrices across development." With weights retained, the modularity optimization algorithm was applied to each matrix along the sliding boxcar. A range of thresholds was explored to define connections for these calculations (see Figure 4 and Figure S1). No particular threshold changed the conclusions presented in the main manuscript. A threshold of 0.10 was chosen for display in the main manuscript because it balances two principles: (1) eliminating a multitude of weak correlations, which may obscure more physiologically relevant correlations, and (2) retaining high graph connectedness, so that communities arise from partitioning and not thresholding.

Graph connectedness captures the extent to which nodes are fragmented from the main graph due to increasing thresholds. It is defined for a graph of N nodes as the mean of an N × N matrix, where cell i,j is 1 if a path exists between node i and node j (self-connections are allowed), and is 0 otherwise. A graph in which all nodes can reach each other has 100% graph connectedness, whereas a fragmented network in which some nodes cannot reach the rest has a lower connectedness.

The modularity optimization analysis returned a set of community assignments for the nodes, as well as the Q of the graph with those assignments. The group assignments for the nodes were converted to colors and are displayed in Figure 4. The robustness of the community assignments was also tested using a different information theoretic procedure implemented by Meila [95], which utilizes the measure 'variation of information' (VOI) (see Figure S7 and also [96]). All calculations were performed in MATLAB (The Mathworks, Inc., Natick, MA).
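The two core quantities just described, the modularity matrix and the graph-connectedness measure, are compact enough to sketch directly. Below is a minimal Python/numpy illustration (an assumed re-implementation, not the authors' MATLAB code) of one spectral bisection step; a full implementation would recurse on the subgraphs and add the node-swapping refinement.

```python
# A minimal sketch of one spectral bisection step of Newman's
# modularity optimization [46], plus graph connectedness.
import numpy as np

def modularity_matrix(A):
    """B_ij = A_ij - k_i * k_j / (2m): observed minus expected edges."""
    k = A.sum(axis=1)
    return A - np.outer(k, k) / k.sum()

def spectral_bisection(A):
    """Split nodes by the sign of the leading eigenvector of B; returns
    +/-1 labels and the modularity Q of the split (Q = s^T B s / 4m).
    A non-positive leading eigenvalue marks an indivisible subgraph."""
    B = modularity_matrix(A)
    eigvals, eigvecs = np.linalg.eigh(B)       # B is symmetric
    if eigvals[-1] <= 0:
        return None, 0.0                       # indivisible: one community
    s = np.where(eigvecs[:, -1] >= 0, 1, -1)
    Q = (s @ B @ s) / (2 * A.sum())            # A.sum() equals 2m
    return s, Q

def graph_connectedness(A):
    """Mean of the N x N reachability matrix (self-paths allowed):
    100% when every node can reach every other node."""
    n = len(A)
    reach = ((np.eye(n) + A) > 0).astype(float)
    for _ in range(int(np.ceil(np.log2(max(n, 2))))):
        reach = ((reach @ reach) > 0).astype(float)   # repeated squaring
    return reach.mean()
```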
Characterization of connection length versus the change in correlation strength over development
To characterize the relationship between connection length and the change in correlation strength over development, we split all 561 possible connections into 4 groups based on vector distance. Since using vector distance as an approximation of connectional distance is much more inconsistent when comparing ROIs across the midline, only intrahemispheric connections or connections to midline structures (i.e., within 5 mm of the midline) were examined. These connections were then sorted by connection length and plotted on a graph where the x-axis corresponds to the child correlation strengths and the y-axis corresponds to the adult correlation strengths (Figure 5 and Figure S4). On both the graphs (Figure 5) and the cortical surfaces (Figure S4), the color of the lines denotes the strength of correlation. Significant differences seen in Figure 5 and Figure S4 were obtained via direct comparison between children (the youngest 60 children out of 210 total subjects; age 7.01-9.67; average age 8.48) and adults (the oldest 60 adults out of 210 total subjects; age 22.47-31.39; average age 25.48). Two-sample two-tailed t-tests (assuming unequal variance; p ≤ 0.05) were performed on all potential connections that passed the above criteria. Fisher z transformation was applied to the correlation coefficients to improve normality for the random effects analysis. To account for multiple comparisons, the Benjamini and Hochberg False Discovery Rate [97] was applied. Connections that were significantly different between groups, but with r < 0.1 in both groups, were not displayed.
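A minimal sketch of this statistical comparison in Python with scipy (a hypothetical helper, not the authors' code) follows; it applies the Fisher z transformation, runs unequal-variance two-sample t-tests, and implements the Benjamini-Hochberg step-up procedure directly.

```python
# Connection-wise group comparison with FDR correction.
import numpy as np
from scipy import stats

def compare_connections(child_r, adult_r, alpha=0.05):
    """child_r, adult_r: arrays of shape (subjects, connections)."""
    z_child = np.arctanh(child_r)              # Fisher z transformation
    z_adult = np.arctanh(adult_r)
    _, p = stats.ttest_ind(z_adult, z_child, axis=0, equal_var=False)
    # Benjamini-Hochberg: reject the k smallest p-values, where k is the
    # largest rank with p_(k) <= (k / n) * alpha.
    order = np.argsort(p)
    n = len(p)
    passed = p[order] <= alpha * (np.arange(1, n + 1) / n)
    significant = np.zeros(n, dtype=bool)
    if passed.any():
        k = int(np.nonzero(passed)[0].max())
        significant[order[:k + 1]] = True
    return p, significant
```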
'Small world' characterization
The small-world metrics were calculated according to descriptions by Watts and Strogatz [47]. In the main manuscript, calculations were performed on the group average correlation matrices thresholded at 0.10 and converted to binary matrices (for analysis across varying thresholds see Figure S3). For each matrix across age, the average clustering coefficient and average path lengths were compared to those values in lattices with equivalent N (number of nodes) and K (number of connections). To ensure that our matrices also differed from random graphs, 100 random graphs with equivalent degree distributions were also created. From these graphs mean average path lengths and clustering coefficients were calculated. These metrics are presented in Figure 3 and Figure S3. All calculations were performed in MATLAB (The Mathworks, Natick, MA). Figure S1 Modularity remains relatively high across age and does not differ between children and adults across differing thresholds. Blue dots represent modularity and red dots represent graph connectedness. A graph in which there is a path between all nodes represents 100% graph connectedness, whereas a fragmented network in which some nodes cannot reach the rest has a lower graph connectedness (see Materials and Methods for details). (A) Modularity across age as presented in Figure 3 Figure S2 Scatterplot of modularity as a function of age. Each point in the graph represents the modularity calculated for each individual subject. A threshold of r$0.1 was applied to each subject's matrices before calculations were performed and denotes connected versus non-connected region pairs (see Materials and Methods). Found at: doi:10.1371/journal.pcbi.1000381.s002 (0.35 MB TIF) Figure S3 Reducing the boxcar size does not substantially alter community assignments over age. The same procedure as presented in Figure 4 with the boxcar reduced to (A) 40 subjects and (B) 20 subjects. Found at: doi:10.1371/journal.pcbi.1000381.s003 (4.65 MB TIF) Figure S4 An extended version of Figure 5, which includes a visualization of these connections represented on a semitransparent brain. Found at: doi:10.1371/journal.pcbi.1000381.s004 (4.69 MB TIF) Figure S5 Clustering coefficients and path lengths do not differ between children and adults across differing thresholds with respect to comparable lattice and random graphs. For children all parameters across thresholds were calculated from the first 60 subjects in age order (i.e., subjects 1-60, average age 8.48). For adults, all parameters across thresholds were calculated from the last 60 subjects in age order (i.e., subjects 151-210, average age 25.48. (A) Clustering Coefficients across thresholds for children compared to equivalent lattice and random networks. (B) Path lengths across thresholds for children compared to equivalent lattice and random graphs. (C) Clustering Coefficients across thresholds for adults compared to equivalent lattice and random graphs. (D) Path lengths across thresholds for adults compared to equivalent lattice and random graphs. At all thresholds examined, both children and adults show relatively high clustering coefficients and low path lengths, consistent with 'small world' topology. Found at: doi:10.1371/journal.pcbi.1000381.s005 (0.59 MB TIF) Figure S6 Connection strength as a function of distance for all possible connections is similar between children and adults. 
The relationship of correlation as a function of distance is described by the inverse square law, r,1/D 2 , as reported in [85] for all possible connections in children (blue) and adults (red). Found at: doi:10.1371/journal.pcbi.1000381.s006 (0.71 MB TIF) Figure S7 Variation of information (VOI) in observed and equivalent random networks subjected to perturbation alpha. VOI is a measure of how much information is not shared between two sets of community assignments and allows for the quantification of network robustness (see [95] and [96]). Values of 0 indicate identical community assignments, and values of 1 indicate maximally different community assignments. To assess the stability of community assignments, the edges of a network are randomized with probability alpha to perturb the network, and the VOI between the original and perturbed networks are calculated over a range of alpha. An equivalent random network was generated for comparison. The entire perturbation process was repeated 50 times to obtain mean VOI values and standard errors of the means, which are plotted as error bars. (A) VOI over a range of alpha in the youngest boxcar and equivalent random graphs. (B) VOI over a range of alpha in the oldest boxcar and equivalent random graphs. Compared to random graphs the community assignments in both children and adults are significantly robust. Found at: doi:10.1371/journal.pcbi.1000381.s007 (0.43 MB TIF) Video S1 Over age, the graph architecture matures from a ''local'' organization to a ''distributed'' organization. This movie shows the dynamic development and interaction of positive correlations between the two task control networks, the default network, and cerebellar network using spring embedding. The figure highlights the segregation of local, anatomically clustered regions and the integration of functional networks over development. This is the full movie that Figure 3 is based on in the main text. Nodes are color coded by there adult network profile (core of the nodes) and also by there anatomical location (node outlines). Black -cingulo-opercular network; Yellow -fronto-parietal network; Red -default network; Blue -cerebellar; Light bluefrontal cortex; Grey -parietal cortex; Green -temporal cortex, Pink -cerebellum, Light pink -thalamus. At the beginning of the movie (i.e. in children) regions are largely organized by their anatomical location, but over age anatomically clustered regions segregate. The cluster of frontal regions (light blue outlines) best demonstrates this segregation. In addition, at the beginning of the movie (i.e., in children) the more distributed adult functional networks (core colors of nodes) are in many ways disconnected; however, over development the functional networks integrate. The isolated regions of the default network in childhood (Red) that coalesce into a highly correlated network best illustrate this integration. Over age node organization shifts from the ''local'' arrangement in children to the ''distributed'' organization commonly observed in adults. Found at: doi:10.1371/journal.pcbi.1000381.s008 (9.68 MB MP4)
Video S2 Reducing the boxcar size to 40 subjects does not change the qualitative patterns observed with the 60-subject boxcar. The same procedure as presented in Figure 3 and Video S1 is presented here with the boxcar reduced to 40 subjects. Found at: doi:10.1371/journal.pcbi.1000381.s009 (9.62 MB MPG)
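The random-graph comparison described in the methods paragraph above is straightforward to reproduce with open-source tools. The sketch below is a Python/networkx illustration written for this summary (the original analyses were performed in MATLAB, and the function and variable names here are our own): it binarizes a correlation matrix at r ≥ 0.10 and compares the average clustering coefficient and characteristic path length of the resulting graph against degree-matched random graphs produced by edge swapping. The lattice comparison is omitted for brevity.

```python
import networkx as nx
import numpy as np

def small_world_metrics(corr, threshold=0.10, n_random=100, seed=0):
    """Binarize a correlation matrix and compare clustering/path length
    against degree-matched random graphs (cf. Watts & Strogatz)."""
    adj = np.asarray(corr) >= threshold
    np.fill_diagonal(adj, False)
    G = nx.from_numpy_array(adj.astype(int))
    # restrict the path-length calculation to the largest connected component
    giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(giant)
    C_rand, L_rand = [], []
    for i in range(n_random):
        R = G.copy()  # degree-preserving randomization via double edge swaps
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=1000 * R.number_of_edges(), seed=seed + i)
        giant_r = R.subgraph(max(nx.connected_components(R), key=len)).copy()
        C_rand.append(nx.average_clustering(R))
        L_rand.append(nx.average_shortest_path_length(giant_r))
    return C, L, float(np.mean(C_rand)), float(np.mean(L_rand))
```

A network is then conventionally deemed "small-world" when its clustering greatly exceeds the random-graph average while its path length remains comparable to it.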
Recent Synthetic Applications of the Hypervalent Iodine(III) Reagents in Visible-Light-Induced Photoredox Catalysis
The synergistic combination of visible-light-induced photoredox catalysis with hypervalent iodine(III) reagents (HIRs) represents a particularly important achievement in the field of hypervalent iodine chemistry, and numerous notable organic transformations have been achieved in a mild and environmentally benign fashion. This account summarizes recent synthetic applications of HIRs in visible-light-induced photoredox catalysis, organized according to the photochemical roles that the HIRs play in the reactions.
INTRODUCTION
During the past several decades, the chemistry of hypervalent iodine reagents (HIRs) has gained more and more attention due to their unique electrophilic properties (Brand et al., 2011; Charpentier et al., 2015), valuable oxidizing abilities (Yoshimura and Zhdankin, 2016; Wang and Studer, 2017), and environmentally friendly features (Zhdankin, 2013; Yoshimura and Zhdankin, 2016). The special structural features and unparalleled reactivities of HIRs lie in their unique 3-center-4-electron (3c-4e) bonds (L–I(III)–X), which are highly polarized and are longer and weaker than classical covalent bonds (Zhdankin, 2013; Yoshimura and Zhdankin, 2016). Generally, HIRs offer multiple advantages for synthetic organic chemistry: (i) mild and highly chemoselective oxidizing properties; (ii) benign environmental character; (iii) commercial availability; and (iv) convenient structural modification (Brand et al., 2011; Zhdankin, 2013; Yoshimura and Zhdankin, 2016; Hari et al., 2018). These advantages give synthetic chemists the opportunity to design and access novel and more challenging reactions. As a result, a wide array of organic transformations, ranging from oxidative coupling processes, ligand transfer reactions (Zhdankin, 2013; Yoshimura and Zhdankin, 2016), and rearrangements (Zhdankin, 2009; Brand et al., 2011) to C-C, C-O, or C-N bond formations (Hyatt et al., 2019) and numerous other reactions, have recently been developed based on HIRs.
Since 2008, visible-light-induced photoredox catalysis has emerged as one of the most rapidly expanding fields in organic chemistry (Xuan and Xiao, 2012; Koike and Akita, 2014; Romero and Nicewicz, 2016; Shaw et al., 2016; Staveness et al., 2016; Twilton et al., 2017). In photoredox-catalyzed procedures, metal photocatalysts (iridium-, ruthenium-, and copper-based) or organic dyes (rose bengal, eosin Y, BODIPY, 4CzIPN, coumarins, and rhodamine derivatives) can efficiently convert visible light into chemical energy, thereby allowing the activation of organic substrates via single-electron transfer (SET) events and eventually giving access to a large number of synthetically important reactions under very mild conditions. Very recently, HIRs have quickly established themselves as efficient and versatile reaction partners for visible-light-induced photoredox catalysis. Many studies related to the elegant merging of photoredox catalysis with HIRs have resulted in significant advancements (Wang and Studer, 2017). By the appropriate choice of HIRs, photocatalysts, light sources, and solvents, a wide array of bond-forming reactions have been developed in a mild and environmentally benign fashion (Figure 1).
Mechanistically, a typical photoredox catalytic cycle consists of a sequence of three key steps: a photoexcitation process followed by two SET processes. For the SET processes to occur smoothly, the redox (oxidation/reduction) potentials of both the photocatalysts and the HIRs must be taken into consideration in order to find the best-matched partners in a photoredox catalysis/HIR reaction. The oxidative/reductive abilities of commonly used transition-metal and organic photocatalysts are relatively well investigated (Table 1) (Reckenthaler and Griesbeck, 2013; Koike and Akita, 2014; Romero and Nicewicz, 2016; Roth et al., 2016; Lemos et al., 2019). However, despite the practical significance of HIRs, their redox potentials have not been evaluated systematically; only a limited number of redox potential values of HIRs have been reported in the literature (Figure 2) (Charpentier et al., 2015; Roth et al., 2016; Vaillant and Waser, 2017). In 2020, Radzhabov and coworkers reported newly calculated values of the relative redox potentials of [bis(acetoxy)iodo]arenes (Radzhabov et al., 2020). The influence of various substituents and the effects of various solvents on the reduction potentials of HIRs were both evaluated in detail. These theoretical assessments may provide a useful reference for the design of new photoredox reactions based on ArI(OAc)₂.
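Since the initial SET step is governed by these potentials, candidate photocatalyst/HIR pairings can be pre-screened numerically. The sketch below is a hypothetical Python illustration of such a screen for oxidative quenching: it approximates the driving force as the difference between the excited-state oxidation potential of the photocatalyst and the reduction potential of the HIR (a simplified Rehm-Weller estimate that neglects work terms), and all names and potential values are placeholders rather than literature data.

```python
# Hypothetical screening of photocatalyst/HIR pairs for oxidative quenching.
# E values are in volts vs. a common reference electrode; the entries below
# are illustrative placeholders only, not literature data.
PHOTOCATALYSTS = {"PC-A": -0.81, "PC-B": -1.73}  # E(PC·+ / *PC)
HIRS = {"HIR-X": -0.60, "HIR-Y": -1.90}          # E(HIR / HIR·-)

def delta_g_et(e_ox_star: float, e_red: float) -> float:
    """Free energy of photoinduced electron transfer in eV (work terms
    neglected): dG = E_ox(donor) - E_red(acceptor); negative = exergonic."""
    return e_ox_star - e_red

for pc, e_pc in PHOTOCATALYSTS.items():
    for hir, e_hir in HIRS.items():
        dg = delta_g_et(e_pc, e_hir)
        verdict = "feasible" if dg < 0 else "unfavorable"
        print(f"{pc} + {hir}: dG_ET = {dg:+.2f} eV -> {verdict}")
```

SET from the excited photocatalyst to the HIR is thermodynamically feasible when the HIR's reduction potential is more positive than the excited-state oxidation potential of the photocatalyst, which is exactly the pairing logic the tabulated potentials support.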
In photoredox catalysis, HIRs play two different kinds of photochemical roles: reagents for functional-group transfer and mild oxidants for substrate activation (Wang and Studer, 2017). HIRs bearing trifluoromethyl, azido, alkynyl, and cyano groups can readily participate in photocatalytic reactions for perfluoroalkylation (Koike and Akita, 2016), azidation (Fumagalli et al., 2015), alkynylation (Kaschel and Werz, 2015), and cyanation (Le Vaillant et al., 2017), respectively. In contrast, hydroxyl-, alkoxyl-, and acetoxy-benziodoxoles (BI-OH, BI-OR, and BI-OAc) usually act as oxidants for the activation of carboxylic acids, alcohols (Liu et al., 2018), or alkyl C-H bonds (Li et al., 2017), generating oxygen- or carbon-centered radicals under photoredox catalysis. In certain cases, two HIRs were employed in the same photoredox procedure: one acts as a reagent and the other serves as a mild oxidant.
This review summarizes recent synthetic applications of HIRs in visible-light-induced photoredox catalysis. The material is organized according to the photochemical roles that HIRs play in the reactions, with particular emphasis placed on the literature from 2016 through the end of March 2020. In every section, we arrange the synthetic methods according to their reaction types.
HIRS ACT AS FUNCTIONAL GROUP TRANSFER REAGENTS

Fluoroalkylation
Visible-light photoredox catalytic methods have proven to be among the most efficient pathways for the incorporation of a variety of fluoroalkyl groups into organic skeletons (Koike and Akita, 2016). Both cyclic and acyclic HIRs possessing various fluorinated groups can serve as effective fluoroalkyl-transfer reagents in photoredox-catalyzed fluoroalkylation (Wang and Liu, 2016). In these processes, the HIRs usually follow the oxidative quenching pathway to furnish the key fluoroalkyl radicals, thus enabling the synthesis of a wide variety of fluoroalkylated compounds.
In 2018, Qing and coworkers reported the decarboxylative trifluoromethylation of (hetero)arenes using ArI(OCOCF₃)₂ as the CF₃ source under ruthenium photoredox catalysis (Yang et al., 2018) (Figure 3A). A series of fluorinated ArI(OCOCF₃)₂ reagents were examined, and C₆F₅I(OCOCF₃)₂ (FPIFA) proved to be the best option. Notably, FPIFA is easily accessible from C₆F₅I and TFA in the presence of oxone (Harayama et al., 2006; Zagulyaeva et al., 2010), and C₆F₅I could be recovered from the decarboxylation reaction in high yield.
The authors proposed the reaction mechanism depicted in Figure 3E. Initially, Ru(bpy)₃²⁺ is excited by visible light to generate the excited species *Ru(bpy)₃²⁺, which undergoes an SET process with FPIFA to afford the iodanyl radical, accompanied by the formation of Ru(bpy)₃³⁺. The resulting iodanyl radical then extrudes C₆F₅I to release the trifluoroacetoxy radical, which can undergo further scission, leading to the formation of the CF₃ radical. The CF₃ radical attacks the aromatic ring of the arene to give an aromatic radical, which may be oxidized either by Ru(bpy)₃³⁺ (path a) or by FPIFA (path b) to yield the corresponding aromatic cation. Finally, the aromatic cation is converted into the target product through deprotonation or nucleophilic attack.
Later, Xia and coworkers reported a mechanistically similar reaction for the synthesis of perfluoroalkylated aminoquinolines via R_f radical intermediates (Han et al., 2019) (Figure 3B). The perfluoroalkylation reagents FPIFA, C₆F₅I(OCOCF₂CF₃)₂, and C₆F₅I(OCOCF₂CF₂CF₃)₂ were all effective in the reaction. Moreover, as in the report by Qing et al. (Yang et al., 2018), these HIRs can easily be regenerated by reaction of the by-product pentafluoroiodobenzene with perfluorocarboxylic acids in the presence of oxone.
Xu and coworkers developed a method for the hydrotrifluoromethylation of benzyl-protected homoallylic alcohol and amine derivatives employing Togni's reagent as the CF₃ radical source under organic photoredox catalysis (Figure 3C). Togni's reagent was found to be a more effective trifluoromethylation reagent than CF₃SO₂Cl in this reaction. The dye 4CzIPN (2,4,5,6-tetra(9H-carbazol-9-yl)isophthalonitrile) was demonstrated to be a competent organic photoredox catalyst for the generation of trifluoromethyl radicals from Togni's reagent. It is noteworthy that the reaction proceeds through an oxidative quenching process to deliver a CF₃• radical, followed by a crucial 1,5-hydrogen transfer relay with in situ removal of the benzyl group. An efficient photoredox-catalyzed protocol for the introduction of fluorinated groups into the coumarin framework was established by Xiang's group in 2019 (Song et al., 2019) (Figure 3D). The reaction takes place efficiently using fac-Ir(ppy)₃ as the photocatalyst under irradiation with blue LEDs. When Togni's reagent was used as the perfluoroalkyl radical source in this protocol, ortho-hydroxycinnamic esters were converted into 3-trifluoromethylated coumarins via a photoredox-catalyzed cascade in moderate to good yields.
Azidation
Since their first report in 1994 by Zhdankin and co-workers, azidobenziodoxol(on)es (ABXs, Zhdankin reagents) have established themselves as valuable alternatives to other azide sources due to their easy handling (crystalline solids) and enhanced stability (stable up to 130 °C) (Fumagalli et al., 2015). These cyclic HIRs have recently been widely utilized as azide-transfer reagents for the azidation of a broad range of substrates (Huang and Groves, 2016). Under visible-light irradiation and in the presence of a photocatalyst, the weak I–N₃ bond of the azido-I(III) reagent frequently undergoes homolytic cleavage to form an azidyl radical and an iodanyl radical, thus triggering the radical chain process that provides the azidated product.
Chen and coworkers disclosed an impressive protocol for the azidation of 3° C(sp³)–H bonds of complex substrates using the Zhdankin reagent under Ru photoredox catalysis (Figure 4A). The azidation reactions demonstrated excellent 3° C–H selectivity and functional group compatibility. Interestingly, when a chlorine or bromine donor was added to the reaction system, the protocol could be further modulated to accomplish aliphatic C–H chlorination and bromination, respectively.
Greaney and coworkers achieved a direct benzylic C–H azidation using the Zhdankin reagent under photoredox catalysis (Rabet et al., 2016) (Figure 4B). Reaction optimization showed that common photoredox catalysts such as Ru(bpy)₃Cl₂ and Ir(ppy)₃ were totally ineffective, while the Sauvage catalyst Cu(dap)₂Cl was uniquely suited for this azidation. Moreover, the C–N bond formation is widely applicable to primary, secondary, or tertiary benzylic positions. The authors proposed the reaction mechanism depicted in Figure 4F. It is believed that the photoexcited state *Cu(dap)₂⁺ first reductively cleaves BI-N₃ to generate a source of azide radicals; the azide radical then serves as the H abstractor to convert the benzylic C–H substrate into a benzyl radical. Subsequently, the benzyl radical attacks BI-N₃ to form the azidated product and give the chain-carrying iodane radical. The iodane radical regenerates the benzyl radical by abstracting a hydrogen atom from the benzylic substrate and thus propagates the radical chain reaction.

In 2017, Waser's group reported a method for the synthesis of azidolactones starting from alkene-containing carboxylic acids (Alazet et al., 2017) (Figure 4C). Using the Zhdankin reagent as the azide-transfer reagent and only 0.5 mol% Cu(dap)₂Cl as the photoredox catalyst, (1,2)-azidolactones were obtained under visible-light irradiation. The Zhdankin reagent and azidodimethylbenziodoxole (ADBX), two typical azide-transfer reagents, exhibited divergent reactivity in the azidolactonization: the Zhdankin reagent was ideally suited for 1,2-azidation under photoredox conditions, while Lewis acid activation of ADBX led to 1,1-azidolactonization via a 1,2-aryl shift. When ADBX was used instead of the Zhdankin reagent under the same photoredox conditions, only traces of the (1,2)-azidolactones were observed.
Shortly after its discovery, this visible-light-promoted photoredox-catalyzed azidation methodology was elegantly extended to alkene-substituted cyclobutanol derivatives by the same group (Alazet et al., 2018) (Figure 4D). In 2018, they introduced two new cyclic iodine(III) reagents (CIRs) with higher molecular weight for azidation: tBu-ABX and ABZ (azidobenziodazolone). The two reagents showed a better safety profile than the most commonly used Zhdankin reagent, which is both shock- and friction-sensitive. Furthermore, either tBu-ABX or ABZ can be used as an alternative to the Zhdankin reagent in a broad range of transformations, including photoredox catalysis. They developed an azidative ring-expansion of alkene-substituted cyclobutanol derivatives using ABZ as the safer azido-radical source and Cu(dap)₂Cl as the photoredox catalyst.
In 2019, the group of Yu investigated the visible-light-driven azidation of vinyl arenes with the Zhdankin reagent as the azidating agent in acetonitrile, using [Cu(dap)₂]PF₆ as the photocatalyst (Figure 4E). It was found that the electronic nature of the aryl group attached to the olefin moiety has a profound effect on the reaction outcome: when the aryl group was less electronically biased, amido-azidation products were obtained as the major products through a three-component reaction involving the solvent acetonitrile as well as the Zhdankin reagent. Mechanistic investigations suggested that these amido-azidation products were probably formed via the photoredox catalysis pathway.
Decarboxylative Alkynylation of Carboxylic Acids
Based on previous successes in the visible-light photoredox-catalytic decarboxylative alkynylation of carboxylic acids, Li, Cheng, and co-workers developed a metal-free procedure in which 9,10-dicyanoanthracene (DCA) (Neumeier et al., 2018) serves as the photoredox catalyst in place of the classic iridium catalysts (Yang C. et al., 2016) (Figure 5A). The results showed that carboxylic acids could be efficiently photo-oxidized by only 5 mol% of the cheap organic photocatalyst DCA at room temperature. Moreover, natural sunlight can also be used as the light source. A gram-scale reaction further demonstrates the synthetic utility of this methodology.
Because of the mild conditions under which it generates radicals, photoredox catalysis provides a rational basis for developing novel strategies for biomolecule functionalization (Hu and Chen, 2015). In particular, photoredox-catalyzed decarboxylation strategies have been successfully applied to selectively functionalize the C-terminal position of native peptides. Following their success with the photoredox-catalyzed decarboxylative alkynylation of α-amino acids using EBXs, Waser and coworkers recently extended the methodology to decarboxylative alkynylation at the C-terminus of peptides (Garreau et al., 2019) (Figure 5B). Using EBXs as alkynylation reagents and 4CzIPN as the photoredox catalyst, alkynylated peptides can be obtained efficiently in 30 min at room temperature under blue LED irradiation. Moreover, this reaction exhibited superior selectivity for the C-terminus in the presence of carboxylic acid side chains. The results showed that EBX reagents possess high potential for biomolecule functionalization under mild photoredox-catalyzed conditions.
In 2018, the same group showed that EBX reagents allow the alkynylation of cyclic alkyl ketone oxime ethers through oxidative photoredox cycles, and versatile alkynyl nitriles were synthesized via a fragmentation-alkynylation sequence (Franck et al., 2018) (Figure 5C). It is worth noting that modified 4XCzIPN dyes were demonstrated to be efficient photoredox organocatalysts in this methodology, and their redox properties were determined by both cyclic voltammetry and computation. Among them, the 4ClCzIPN dye proved highly efficient in the fragmentation-alkynylation process. Various aryl-substituted EBX reagents worked well under the reaction conditions. Preliminary investigations showed that other HIRs, such as the silyl EBX reagent (TIPS-EBX), cyanobenziodoxolone (CBX), and phenyl vinyl benziodoxolone (PhVBX), can also react with oxime ethers under the same reaction conditions to give the corresponding alkynylation, cyanation, and alkenylation products. However, when Togni's reagent was employed, no desired trifluoromethylation product was obtained.
Based on the investigations conducted in this study, the mechanistic pathway (Figure 5D) is believed to begin with reductive quenching of the photoexcited state PS* of the 4ClCzIPN dye by the potassium carboxylate to give a carboxyl radical and the reduced-state photocatalyst. The resulting carboxyl radical undergoes decarboxylation to furnish the α-oxy radical, which subsequently eliminates acetone to generate an iminyl radical. ¹H NMR evidence showed that the carboxyl radical can also be trapped by the EBX reagent and then hydrated to give a ketone by-product. Ring-opening of the iminyl radical then gives an alkyl nitrile radical, which reacts with EBX and proceeds through a transition state to give the final product and a cyclic hypervalent iodine radical. Reduction of the hypervalent iodine radical provides the carboxylate and regenerates the ground state PS to complete the organocatalytic cycle.
Alkynylation of Alcohols
Similar to carboxylic acids, alcohols can also be efficiently alkynylated employing EBXs as alkynylating reagents under photoredox-catalyzed conditions. It should be noted that an HIR catalytic cycle, in which the HIR catalyzes the generation of alkoxyl radicals, is often combined with the photoredox catalytic cycle in these methodologies.
Chen and co-workers have conducted a series of studies aimed at the photoredox-catalyzed alkynylation of different types of alcohols. In 2016, this group exploited the combination of photoredox catalysis and CIR catalysis for the alkynylation of alcohols using alkyl-EBX reagents (Figure 6A). Under the dual CIR/photoredox catalytic system, both strained cycloalkanols and linear alcohols can react with alkyl-EBXs to deliver the corresponding alkynylation adducts. Moreover, structurally complex steroidal cycloalkanols can also be converted into χ-alkynyl ketones smoothly. Various aryl substituents appended to the EBXs are suitable for this process. The key to success in this transformation was the visible-light-induced alcohol oxidation for the generation of alkoxyl radicals and the subsequent β-fragmentation of the alkoxyl radicals into alkyl radicals. Compared with approaches that employ transition-metal activation under strongly oxidative conditions, visible-light-induced alkoxyl radical generation by CIR catalysis proceeds smoothly at room temperature.
A plausible mechanism for this process is depicted in Figure 6D. The α-phosphorus alcohol first reacts with the CIR to generate the benziodoxole/α-phosphorus alcohol complex in situ, which releases the alkoxyl radical and regenerates the CIR for a new catalytic cycle upon oxidation by Ru(bpy)₃³⁺. The Ru(bpy)₃³⁺ originates from the oxidative quenching of the photoexcited *Ru(bpy)₃²⁺ by the CIR. The resulting alkoxyl radical subsequently undergoes P–C(sp³) bond cleavage to generate the phosphorus radical, which then performs radical α-addition with the BI-alkyne to yield the desired phosphonoalkyne product.
Cyanation
In 2017, Waser's group extensively investigated the photoredox-mediated decarboxylative cyanation of aliphatic acids using HIRs as cyano-transfer reagents (Le Vaillant et al., 2017) (Figure 7). In their model reaction, the cyanation reactivities of six hypervalent-iodine-based cyanation reagents were evaluated (Figure 7A). Under photoredox catalysis, CDBX and an acyclic iodine reagent were almost ineffective, while cyanobenziodoxolone (CBX) gave the product in excellent yield; these results showed the superiority of CBX as a cyanide source. The subsequent substrate-scope investigation indicated that this methodology allowed the efficient cyanation of α-amino and α-oxy acids into the corresponding nitriles (Figures 7B,C). Furthermore, the direct cyanation of dipeptides and drug precursors was also achieved.
Computational and experimental evidence suggested that the favored decarboxylative cyanation mechanism probably differs from the usually assumed decarboxylative alkynylation (Le Vaillant et al., 2015; Zhou et al., 2015). In the proposed reaction mechanism (Figure 7D), irradiation of IrL₂⁺ with blue LEDs gives the excited state *IrL₂⁺, which subsequently undergoes an SET process with the in situ generated cesium carboxylate to regenerate the IrL₂ complex and deliver the key nucleophilic radical intermediate. The reaction of the radical intermediate with CBX provides the desired nitrile and an iodine-centered radical. Finally, this iodine-centered radical undergoes another SET process with the IrL₂ complex to close the catalytic cycle.
The proposed mechanism of the acetoxylation reaction is shown in Figure 8C. Upon irradiation with blue LEDs, rose bengal (RB) is excited to the excited state RB*, which undergoes SET reduction with PIDA to generate the acetoxy radical (CH₃COO•), accompanied by the formation of the radical cation (RB⁺•), PhI, and CH₃COO⁻. Abstraction of a hydrogen atom from the aryl-2H-azirine by the acetoxy radical provides the 2H-azirine radical. The 2H-azirine radical then undergoes a second SET oxidation with RB⁺•, leading to the formation of the intermediate carbocation while completing the photocatalytic cycle. Finally, the intermediate carbocation couples with the acetate anion to give the corresponding acetoxylated azirine.
Diazomethylation
In 2018, Suero and co-workers developed an aromatic C–H bond diazomethylation reaction using the pseudocyclic hypervalent iodine reagent (I) under ruthenium photoredox catalysis (Wang Z. et al., 2018) (Figure 9). The pseudocyclic hypervalent iodine reagent (I), carrying a diazoacetate moiety, served as a diazomethyl radical precursor through an SET process in the photoredox-catalyzed protocol, and a wide range of aromatic hydrocarbons substituted with alkyl groups, halogens, amides, and carbonyls underwent C–H diazomethylation to generate valuable diazo compounds.
The authors proposed the reaction mechanism depicted in Figure 9C. The photocatalytic system is initiated by the photoexcitation of [Ru(bpy)₃]²⁺ to generate *[Ru(bpy)₃]²⁺. The photoexcited *[Ru(bpy)₃]²⁺ undergoes single-electron transfer with the pseudocyclic hypervalent iodine reagent (I) to yield the diazomethyl radical as a direct carbyne equivalent, which is then intercepted by an aromatic ring to form the cyclohexadienyl radical. Finally, the resulting radical intermediate is oxidized by [Ru(bpy)₃]³⁺ and loses a proton to give the expected diazo compound.
HIRS ACT AS OXIDANTS FOR SUBSTRATE ACTIVATION
Owing to the excellent coordinating ability of the iodine atom, HIRs can easily undergo ligand exchange reactions with organic acids to form hypervalent-iodine-coordinated carboxylates. When combined with photoredox catalysis, these hypervalent-iodine-coordinated carboxylates frequently undergo homolytic cleavage to give highly reactive hypervalent iodine radicals as well as oxygen radicals, thus triggering decarboxylative functionalization reactions or other transformations. Based on this concept, Chen and co-workers have conducted a series of studies on a novel dual CIR/photoredox catalytic system (Huang et al., 2015; Jia et al., 2016, 2017), and the results proved that CIRs play a crucial role in activating organic acid and alcohol substrates toward photoredox catalysis.
HIR-Mediated Activation of Organic Acids
An example of CIR-enabled decarboxylative functionalization of α,α-difluoroarylacetic acids, mediated by dual CIR/photoredox catalysis, was developed by Qing and coworkers (Yang B. et al., 2016) (Figure 10A). A series of novel difluoroalkylated arenes were smoothly obtained through an HIR-promoted decarboxylation and radical hydroaryldifluoromethylation sequence. All of the tested HIRs, including PhI(OAc)₂, PhI(OCOCF₃)₂, BI-OAc, and BI-OMe, gave the desired transformation. Among them, BI-OMe was the best choice. Further investigation revealed that BI-OMe acts not only as an activating reagent but also as an oxidant in the process.
Feng, Xu, and coworkers disclosed a visible-light-enabled reaction in which α,β-unsaturated carboxylic acids are activated by BI-OH, leading to decarboxylative mono- and difluoromethylation transformations (Figure 10B) (Tang et al., 2017). Four candidate HIRs, IBDA, IB, BI-OH, and BI-OAc, were screened in the reaction; among them, BI-OH turned out to be optimal. As explained in the mechanistic pathway (Figure 10D), BI-OH can generate a benziodoxole vinyl carboxylic acid complex (BI-OOCCH=CHR) in situ, thus activating the vinyl carboxylic acid group.
Zhang, Luo, and coworkers achieved the enantioselective decarboxylative coupling of propiolic acids and β-ketocarbonyls by combining chiral primary amine catalysis with visible-light photoredox catalysis (Figure 10C). Various alkynylation adducts were synthesized with excellent enantioselectivities under mild conditions. Of the HIRs tested in this process, PIFA, PIDA, BI-OAc, and BI-OMe showed almost no catalytic effect, and BI-OH was identified as giving the optimal results in terms of both yield and enantioselectivity. Mechanistic studies revealed that BI-OH could react in situ with the propiolic acid to generate the propiolate under the reaction conditions. This propiolate acted as a key intermediate in both the photoredox catalytic cycle and the aminocatalytic cycle. Itami and co-workers developed a mild method for the photoredox-catalyzed decarboxylation of arylacetic acids by an HIR in air, leading to various aryl aldehydes and ketones (Sakakibara et al., 2018a) (Figure 11A). The photoredox catalyst, the HIR, blue-light irradiation, and O₂ are all critically important for this transformation. The CIR 1-butoxy-1λ³-benzo[d][1,2]iodaoxol-3(1H)-one (IBB) proved more efficient in the procedure than the non-cyclic iodine reagent PIDA; in contrast, Ph₂ICl was completely ineffective. In this process, IBB reacts with the arylacetic acid to form an intermediate in situ, thus activating the arylacetic acid for decarboxylation.
The same group's subsequent study revealed that the same methodology can also be extended to the construction of carbon-nitrogen and carbon-oxygen bonds (Figure 11B) (Sakakibara et al., 2018b). Under the activation of IBB, arylacetic acids were directly converted into nitrogen, oxygen, or chlorine functionalities. The reaction of IBB with the arylacetic acid was confirmed by ¹H NMR, and the resulting complex is the key activated intermediate in the photoredox catalytic cycle of the mechanistic pathway.
In 2018, Chen's group further expanded the protocol for the photoredox-mediated Minisci alkylation of N-heteroarenes that they had reported in 2016. In the improved protocol (Figure 11C), the alkylating agents were replaced by aliphatic carboxylic acids, which are more abundant, inexpensive, stable, and structurally diverse than alkyl boronic acids. Although the same HIR was employed in both protocols, it plays different roles under the photoredox catalysis conditions, and the two reactions proceed through different mechanisms. BI-OAc serves as a radical precursor in the former, while in the improved protocol it is used for substrate activation to facilitate the decarboxylative functionalization of carboxylic acids.
Genovino, Frenette, and coworkers developed a C–H alkylation of heteroaromatics using an acridinium photocatalyst and HIRs (Figure 11D) (Genovino et al., 2018). [Bis(trifluoroacetoxy)iodo]benzene (PIFA), a more soluble and under-utilized HIR, proved to be an attractive option. It is noteworthy that the more challenging linear carboxylic acids that form primary radicals are also suitable substrates. The authors proposed a mechanistic pathway that differs from that of other transition-metal-catalyzed photoredox Minisci reactions.
In 2019, Cheng reported a decarboxylative coupling of alkynyl carboxylic acids and aromatic diazonium salts using an HIR under eosin Y photoredox catalysis (Figure 12A) (Yang et al.). The results showed that BI-OAc was superior to BI-OH and BI-OMe as the decarboxylation-facilitating reagent for the reaction. BI-OAc and the arylpropiolic acid generated a benziodoxole 3-phenylpropiolate complex in situ, which facilitated conversion of the C-C triple bond in the mechanistic pathway proposed by the authors. Duan and coworkers reported decarboxylative acylation/ring-expansion reactions between vinylcyclobutanols and α-keto acids to construct 1,4-dicarbonyl compounds (Figure 12B). This methodology takes advantage of organic photoredox catalysis and merges it with an HIR. Both transition-metal and organic photoredox catalysts were examined in the reaction; among them, rhodamine B, an organic dye known for its low cost, low toxicity, and ease of handling, gave the best results. BI-OH was shown to play an important role in facilitating the decarboxylation of the α-keto acids. Radical-trapping experiments confirmed that a nucleophilic acyl radical, originating from the α-keto acid, is involved in this tandem radical process.
In 2016, Chen and co-workers developed a new photoredox-mediated protocol for the Minisci C–H alkylation of N-heteroarenes using alkyl boronic acids as alkylation reagents, BI-OAc as the oxidant, and Ru(bpy)₃Cl₂ as the photocatalyst (Figure 12D). This protocol is applicable to a range of easily accessible primary and secondary alkyl boronic acids for the preparation of various N-heteroarenes, and various functional groups, including alkyl bromide, aryl iodide, ester, amide, carbamate, terminal alkyne, and benzyl chloride, are well tolerated. Mechanistic experiments suggested that BI-OAc serves as a facile precursor of an ortho-iodobenzoyloxy radical intermediate, which plays a key role in efficiently converting the usually less reactive alkyl boronic acids into alkyl radicals (Figure 12E).
HIR-Mediated Activation of Alcohols
Chen and coworkers reported in 2018 that allylic alcohols can be activated by CIRs under photoredox catalysis conditions, and a series of cyclopentanones, cyclohexanones, and dihydrofuranones bearing α-quaternary centers were synthesized via alkyl boronate addition/semi-pinacol rearrangement (Figure 13A) (Liu et al., 2018). The interaction between the tertiary allylic alcohol and BI-OAc was extensively investigated by crystallography, NMR spectroscopy, and cyclic voltammetry, and the results revealed that both the hydroxyl and olefin groups of the allylic alcohols are greatly activated via coordination to BI-OAc. The mechanistic investigations suggest that the CIRs employed in this reaction play at least three roles in the overall pathway: (1) facilitating the formation of the alkyl radical and the cation intermediate, (2) activating the allylic alcohol, and (3) protecting the alcohols in situ to avoid formation of the epoxide.
Mao, Zhu, and coworkers reported the synthesis of distal bromo-substituted alkyl ketones by visible-light-promoted ring-opening functionalization of unstrained cycloalkanols (Wang D. et al., 2018) (Figure 13B). A set of medium- and large-sized rings, such as cyclopentanols, cyclohexanols, cycloheptanols, cyclododecanols, and cyclopentadecanols, are readily brominated through inert C–C bond scission with the assistance of an HIR under visible-light irradiation. HIRs such as PIDA, BI-OH, IBX, and DMP were all effective for the reaction, and PIDA gave the best results. Two pathways were proposed by the authors for the formation of the key alkoxy radical; in one of them, PIDA undergoes in situ transesterification with the cycloalkanol, thus facilitating the generation of the challenging alkoxyl radical.
In 2019, Chen and coworkers discovered a method for the δ-C(sp³)–H heteroarylation of free aliphatic alcohols with various N-heteroarenes using HIRs as oxidants under Ru photoredox catalysis (Li G. X. et al., 2019) (Figure 13C). Both cyclic I(III) reagents (BI-OAc, BI-OH, PFBI-OH, and PFBI-OAc) and acyclic I(III) reagents (PIDA and PIFA) were examined, and PFBI-OH achieved the highest efficiency. The highly electrophilic iodine center of PFBI-OH makes it more reactive toward alcoholysis and more easily reduced in the SET process. Notably, this method also avoids the use of a large excess of the alcohols.
The heteroarylation process (Figure 13D) starts with in situ alcoholysis of PFBI-OH with the alcohol, and an alkoxy radical intermediate is then generated through SET reduction. Subsequently, the alkoxyl radical intermediate undergoes 1,5-hydrogen atom transfer (1,5-HAT) to generate a carbon radical, which then engages in Minisci-type C–C bond formation to give the heteroaryl cation intermediate. Finally, the intermediate is converted into the target heteroarene through an SET oxidation process.
HIR-Mediated Activation of Alkyl C-H Bonds
Chen, Gong, and coworkers have conducted a series of studies using HIRs as oxidants for the selective functionalization of alkyl C(sp³)–H bonds under photoredox catalysis. In these HIR-mediated methods, unactivated alkyl C(sp³)–H bonds, such as tertiary, benzylic methylene, methylene, and methyl C–H bonds, can be selectively cleaved by benziodoxole radicals (BI•), thus offering straightforward methodologies for the synthesis of complex alkyl-substituted compounds from a wide range of acyclic alkanes.
In 2017, this group demonstrated the use of HIRs in both the hydroxylation and the amidation of tertiary and benzylic C–H bonds, enabled by the corresponding benziodoxole radicals (Li et al., 2017) (Figure 14). The H-abstraction reactivities of eight HIRs were investigated for C–H hydroxylation or amidation, and PFBI-OH and BI-OH proved to be the most effective oxidants for tertiary and benzylic C–H bonds, respectively. Distinct from the typical radical chain mechanism, the authors proposed a new ionic pathway (Figure 14C) involving nucleophilic trapping of a carbocation intermediate by H₂O or the nitrile cosolvent.
In an effort to extend this methodology, the same authors applied their PFBI-OH/photoredox system to functionalize the challenging methylene C–H bonds, and a range of alkyl-substituted N-heteroarenes were efficiently and chemoselectively constructed through Minisci-type alkylation of N-heteroarenes with alkanes (Figures 15A,B). The use of PFBI-OH was crucial to elicit both high reactivity and unique steric sensitivity in the C–H abstraction of alkanes. The PFBI• radical, which is generated by homolytic cleavage of the I–OH bond under compact fluorescent lamp (CFL) irradiation, can smoothly cleave stronger 2° C–H bonds even in the presence of weaker 3° C–H bonds.
Cai and coworkers developed a visible-light-promoted C–H functionalization strategy to prepare α-aryl-γ-methylsulfinyl ketones (Figures 15C,D) (Lu et al., 2018). In this process, an alkyl C(sp³)–H bond of dimethyl sulfoxide (DMSO) is cleaved by a new HIR to yield an α-sulfinyl radical, which subsequently undergoes radical addition to the allylic alcohol, followed by 1,2-aryl migration, to give the desired sulfoxide derivatives. The new HIR was generated in situ from the reaction of PIFA and 1,3,5-trimethoxybenzene.
SUMMARY AND OUTLOOK
As shown herein, the synergistic combination of photoredox catalysis with HIRs has achieved numerous notable organic transformations. These reactions illustrate that hypervalent iodine chemistry can benefit significantly from its merger with photoredox catalysis. The ability to access highly reactive radical intermediates under very mild and environmentally benign conditions makes these methodologies quite attractive.
Despite the significant progress made, there remain many opportunities for further exploration in the field of photoredox catalysis/HIR systems. First, a wide variety of HIRs have yet to be engaged in photoredox-catalytic reactions. Moreover, from the perspective of green and sustainable chemistry, the further development of low-cost, non-toxic, and environmentally benign organic dyes as replacements for metal photoredox catalysts is highly desirable. Additionally, the discovery of stereoselective asymmetric reactions using chiral HIRs under photoredox-catalyzed conditions may be a promising direction for future research. Finally, more in-depth mechanistic studies are warranted for a full understanding of photoredox catalysis/HIR processes. It is highly anticipated that more and more HIRs, as reagents or oxidants, will continue to be applied in the area of visible-light-induced photoredox catalysis.
AUTHOR CONTRIBUTIONS
TY designed this proposal, determined the contents, and revised the manuscript. CC collected the literature data related to this review and wrote the manuscript. XW drew the chemical structures and prepared the figures. All authors contributed to the final version of the manuscript.
FUNDING
This work was partially supported by the National Natural Science Foundation of China (grant number 51872140).
OSPACS: Ultrasound image management system
Background: Ultrasound scanning uses the medical imaging format, DICOM, for electronically storing the images and data associated with a particular scan. Large health care facilities typically use a picture archiving and communication system (PACS) for storing and retrieving such images. However, these systems are usually not suitable for managing large collections of anonymized ultrasound images gathered during a clinical screening trial.

Results: We have developed a system enabling the accurate archiving and management of ultrasound images gathered during a clinical screening trial. It is based upon a Windows application utilizing an open-source DICOM image viewer and a relational database. The system automates the bulk import of DICOM files from removable media by cross-validating the patient information against an external database, anonymizing the data as well as the image, and then storing the contents of the file as a field in a database record. These image records may then be retrieved from the database and presented in a tree-view control so that the user can select particular images for display in a DICOM viewer or export them to external media.

Conclusion: This system provides error-free automation of ultrasound image archiving and management, suitable for use in a clinical trial. An open-source project has been established to promote continued development of the system.
Background
Medical sonography (ultrasonography) uses ultrasound to provide real-time images of soft tissues, internal organs and the fetus in utero. Because medical sonography is noninvasive and generally considered to have no harmful side effects, it has seen increasing use for a variety of diagnostic purposes in recent years. One of the most common applications of ultrasound imaging is in routine obstetric care, assessing the stage and status of pregnancy and the health and development of the fetus. Other applications include the imaging of most of the internal organs, muscles, ligaments and tendons. DICOM 3.0 (Digital Imaging and Communications in Medicine) is a standard describing the handling, transfer and storage of medical imaging data, including ultrasound scans [1]. A DICOM data object (or data set) combines a medical image in one of several standards (either still, or video) with patient information and other scan data. The linked storage of these data is an important feature of the standard, ensuring that the descriptive patient data are always associated with the correct medical image. Most modern health care facilities store these DICOM objects in a picture archiving and communication system (PACS), allowing ultrasound scan records to be managed in the same way as other types of medical images.
A considerable amount of work is currently being undertaken to evaluate the use of ultrasound in various new diagnostic procedures. The clinical trial of a new ultrasound procedure generates significant quantities of scan data that typically require cross-comparison and peer assessment. However, the nature of clinical trials often precludes the storage of data alongside patient records in an existing PACS, and the purchase of a PACS for the limited use of a trial is usually not cost effective. As a result the ultrasound scans generated by clinical trials are not always stored in a way that facilitates their management and future retrieval, and our own experience of this issue was the incentive for this project; we had attempted to manage large numbers of ultrasound images using software supplied with the ultrasound machine, only to discover that it became unmanageable when the number of images exceeded a certain threshold.
In order to satisfy the need for the management of ultrasound images without incurring the effort and expense of setting up a commercial PACS, we have developed OSPACS. This system is based on a simple Windows application utilizing an open-source DICOM image viewer and a relational database. OSPACS is currently being used to manage the UKCTOCS (United Kingdom Collaborative Trial for Ovarian Cancer Screening) [2] ultrasound archive, providing error-free automation of ultrasound image archiving and management. An open-source project has been established in order to promote the continued development of OSPACS for UKCTOCS as well as other clinical trials.
Implementation
OSPACS is implemented using a traditional client-server architecture: the client comprises a Windows Forms application (osImageManagerApp.exe), which accesses a server hosting the image database.
Software Development and Design
Development of OSPACS followed an Agile approach [3] inspired by Extreme Programming (XP) [4], utilizing practices such as Real Customer Involvement, Incremental Deployment, Incremental Design and Test-First Programming. This allowed the system to evolve in an incremental way through a series of iterations that were driven by the need to frequently deliver valuable software that satisfied the end-user (customer).
The Incremental Design practice requires design to be performed everyday instead of being confined to a particular phase during the project (as is the case in a Waterfall process) or during the iteration (as is the case in Rational Unified Process). In practical terms this means following the Test-First Programming practice so that the design evolves in a bottom-up fashion. However, Agile Modeling techniques were employed during the creation of the initial architecture and before important design decisions were taken, so the design of OSPACS was actually produced by a combination of bottom-up and top-down approaches.
Image Database
We have implemented the image database using the Microsoft SQLServer database management system (DBMS) [5]. Because this is a commercial product, the OSPACS setup program includes the option of installing the Express Edition of SQLServer, which is available free of charge from the Microsoft website [5]. SQLServer Express is functionally identical to the full commercial product in respect of OSPACS requirements, but the database size is limited to 4GB. While this constraint will significantly limit the number of ultrasound records that can be stored, the use of SQLServer Express will allow the system to be evaluated adequately.
During its design we attempted to follow the maxim of developing "the simplest thing that could possibly work" [6]. Consequently there is just one table with eleven fields, the most important of which is the DicomFile field (see Table 1). This field is an Image data type (a large binary object, or BLOB) and contains the entire contents of the DICOM object as binary data. However, as the application software only accesses the database through a set of stored procedures, it should be possible to extend this schema relatively easily.
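To make the shape of this design concrete, the sketch below stores an entire DICOM file as a BLOB in a single-table database. It is written in Python with sqlite3 purely for portability; the table and column names are hypothetical stand-ins, since the real OSPACS schema lives in SQLServer and is accessed only through stored procedures.

```python
import sqlite3
from pathlib import Path

# Hypothetical single-table schema echoing the OSPACS design: the entire
# DICOM object is kept in one BLOB column alongside a few indexed fields.
conn = sqlite3.connect("images.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS UltrasoundImage (
        ImageId     INTEGER PRIMARY KEY,
        PatientRef  TEXT NOT NULL,
        ScanDate    TEXT NOT NULL,
        DicomFile   BLOB NOT NULL      -- full DICOM object as binary data
    )""")

def import_dicom(path: str, patient_ref: str, scan_date: str) -> None:
    """Read a DICOM file from removable media and archive it as a BLOB."""
    blob = Path(path).read_bytes()
    conn.execute(
        "INSERT INTO UltrasoundImage (PatientRef, ScanDate, DicomFile) "
        "VALUES (?, ?, ?)", (patient_ref, scan_date, sqlite3.Binary(blob)))
    conn.commit()
```

Keeping the whole DICOM object in one field preserves the linked storage of image and metadata that the standard requires, at the cost of database size, which is why the 4GB SQLServer Express limit mentioned above matters.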
ezDICOM Component
The ezDICOM component is free, open-source software that can be used to view a wide range of medical images, including the DICOM standard and proprietary formats [7]. It has a good reputation for being mature, stable software and has been successfully used in a number of other medical imaging applications. OSPACS uses the ezDICOM ActiveX control under the BSD open-source license.
Software Development Tools and Libraries
The OSPACS software was developed in C# using Visual Studio 2005 tools (Microsoft), and it includes a collection of automated unit tests which cover more than 95% of the code base. The user interface layer is implemented as a Windows Forms application and uses the .NET 2.0 Framework Class Library (FCL) [5] to provide the main window, dialog boxes, associated controls, etc. Non-functional requirements, such as logging and error handling, were implemented using the third-party library MxToolbox [8]. Automated functional tests were developed using the Framework for Integrated Test (FIT) library [9]. The standard installation process will install an application called osImageManager and create both an image database and a test patient database on the local computer. The FIT automated functional tests can then be run from the application's 'Database Admin' dialog box in order to validate the system within the context of the client PC. For the system to be used in a production context it is necessary only to change the configuration of the database sources to the required servers, and then re-run the scripts from the application's Database Admin dialog box in order to create any production image database that might be required. In this way the installation of OSPACS is made simple, repeatable and reliable.
Image import
Ultrasound images in the DICOM image format can be imported as files from removable media, and information from the DICOM image header cross-checked against an external database. The current implementation of OSPACS is specific to UKCTOCS, and patient details from the DICOM header are compared to specific data fields in the patient information tables of the main UKCTOCS database. Where information in the DICOM header does not agree with a UKCTOCS database entry, the image is flagged and options to correct data in the DICOM header are provided. Following a successful validation of data in the DICOM header, the relevant information is extracted and used to populate fields in a new image database record.
Patient anonymity is protected by removing all patient-identifying information from the DICOM header data, and also by masking parts of the image data itself before populating the database record. The region of the image containing the patient identifier does not vary in scans from similar hardware, so a defined region of the image is simply over-written with white pixels.
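A minimal sketch of this anonymization step is shown below using Python and pydicom (the production system implements it in C#; the tag list and mask coordinates are illustrative assumptions, since the identifier region depends on the scanner, and the sketch assumes uncompressed grayscale frames).

```python
import pydicom

# Header tags assumed to carry patient-identifying data; the real list
# would be driven by the trial's anonymization policy.
IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate"]

def anonymize(in_path: str, out_path: str, mask=(0, 40, 0, 300)) -> None:
    """Blank identifying header fields and over-write the on-image
    patient identifier region with white pixels."""
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            setattr(ds, tag, "")
    arr = ds.pixel_array              # decode frames to a numpy array
    top, bottom, left, right = mask   # region is fixed for a given scanner
    # assumes grayscale data, so the last two axes are (rows, columns)
    arr[..., top:bottom, left:right] = arr.max()  # paint the region white
    ds.PixelData = arr.tobytes()      # compressed scans would also need a
    ds.save_as(out_path)              # transfer-syntax update (not shown)
```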
Viewing images
Once DICOM objects have been imported into the image database they can be readily retrieved by running a standard query using values entered into a dialog box opened from osImageManager's View menu. When the dialog box is closed, the records in the database matching the query value are displayed in a standard Windows treeview control, arranged in a hierarchy of scanning centre, patient ID, scan date and images (see Figure 1). Selection of an image item in the treeview displays the corresponding image in a viewing window (see Figure 1). Images can be retrieved either by the reference number of the trial volunteer, or by the reference number of the removable media entered during the import process.
Image export
Selected images in the treeview can easily be saved to removable media in their original file format. While this process does not create a DICOMDIR file (defined by the DICOM standard, a directory that indexes and describes all of the DICOM files stored on the media) and hence is not fully compliant with the DICOM standard, these files can be opened and viewed by any DICOM viewing application.
Conclusion
OSPACS has been successfully implemented within the Gynaecological Cancer Research Centre, where it is being used to manage ultrasound images collected as part of the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). During the course of the trial it is expected that UKCTOCS will accumulate approximately 1,500,000 ultrasound images, from 300,000 examinations, collected by 13 regional centres throughout the UK. OSPACS is currently being used to automate the import of scans from the regional centres, helping to resolve patient identifiers with the central UKCTOCS database, and to export groups of images to removable media for external review.

Figure 1. The osImageManager application. Screenshot of the osImageManager application, showing the treeview control and DICOM viewing window.
The differential properties of certain permutation polynomials over finite fields
Finding functions, particularly permutations, with good differential properties has received a lot of attention due to their possible applications. For instance, in combinatorial design theory, a correspondence between perfect $c$-nonlinear functions and difference sets in some quasigroups was recently shown [1]. Additionally, in a recent manuscript by Pal and Stănică [20], a very interesting connection between the $c$-differential uniformity and the boomerang uniformity when $c=-1$ was pointed out, showing that they are the same for odd APN permutations. This makes the construction of functions with low $c$-differential uniformity an intriguing problem. We investigate the $c$-differential uniformity of some classes of permutation polynomials. As a result, we add four more classes of permutation polynomials to the family of functions that contains only a few (non-trivial) perfect $c$-nonlinear functions over finite fields of even characteristic. Moreover, we include a class of permutation polynomials with low $c$-differential uniformity over the field of characteristic 3. As a byproduct, our proofs show the permutation property of these classes. To solve the involved equations over finite fields, we use various techniques; in particular, we find explicitly many Walsh transform coefficients and Weil sums that may be of independent interest.
Introduction
Let $\mathbb{F}_q$ be a finite field with $q = p^n$ elements, where $p$ is a prime number and $n$ is a positive integer. We use $\mathbb{F}_q[X]$ to denote the ring of polynomials in one variable $X$ with coefficients in $\mathbb{F}_q$, and $\mathbb{F}_q^*$ to denote the multiplicative group of nonzero elements of $\mathbb{F}_q$. If $F$ is a function from $\mathbb{F}_q$ to itself, then by Lagrange's interpolation formula one can express it uniquely as a polynomial in $\mathbb{F}_q[X]$ of degree at most $q-1$. A polynomial $F \in \mathbb{F}_q[X]$ is a permutation polynomial of $\mathbb{F}_q$ if the mapping $X \mapsto F(X)$ is a bijection on $\mathbb{F}_q$. Permutation polynomials over finite fields are of great interest due to their numerous applications in coding theory [5,14], combinatorial design theory [6], cryptography [17,21], and other areas of mathematics and engineering. Such polynomials with desired properties, such as low differential uniformity, high algebraic degree, and high nonlinearity, are important candidates for designing cryptographically strong S-boxes and hence for providing secure communication.
Block ciphers are susceptible to a wide variety of attacks. One of the most effective cryptanalytic tools for attacking block ciphers is the differential attack introduced by Biham and Shamir [2]. To measure the resistance of a given function over a finite field (i.e., of a given S-box) against the differential attack, Nyberg [19] introduced the notion of differential uniformity as follows. Let $F : \mathbb{F}_q \to \mathbb{F}_q$ be a function. For any $a \in \mathbb{F}_q$, the derivative of $F$ in the direction $a$ is defined as $D_F(X, a) := F(X + a) - F(X)$ for all $X \in \mathbb{F}_q$. For any $a, b \in \mathbb{F}_q$, the Difference Distribution Table (DDT) entry $\Delta_F(a, b)$ at the point $(a, b)$ is the number of solutions $X \in \mathbb{F}_q$ of the equation $D_F(X, a) = b$. Further, the differential uniformity of $F$, denoted by $\Delta_F$, is given by $\Delta_F := \max\{\Delta_F(a, b) : a \in \mathbb{F}_q^*, b \in \mathbb{F}_q\}$. We call $F$ a perfect nonlinear (PN) function when $\Delta_F = 1$ and an almost perfect nonlinear (APN) function when $\Delta_F = 2$.
Borisov et al. [3] introduced the concept of multiplicative differentials of the form $(F(cX), F(X))$ and exploited this new class of differentials to attack certain existing ciphers. In 2020, Ellingsen et al. [7] generalized the notion of differential uniformity by introducing (output) multiplicative differentials and the corresponding $c$-differential uniformity. For any function $F : \mathbb{F}_q \to \mathbb{F}_q$ and any $a, c \in \mathbb{F}_q$, the (multiplicative) $c$-derivative of $F$ with respect to $a$ is defined as $_c\Delta_F(X, a) := F(X + a) - cF(X)$ for all $X \in \mathbb{F}_q$. For any $a, b \in \mathbb{F}_q$, the $c$-Difference Distribution Table ($c$-DDT) entry $_c\Delta_F(a, b)$ at the point $(a, b)$ is the number of solutions $X \in \mathbb{F}_q$ of the equation $_c\Delta_F(X, a) = b$. The $c$-differential uniformity of $F$, denoted by $_c\Delta_F$, is given by $_c\Delta_F := \max\{_c\Delta_F(a, b) : a, b \in \mathbb{F}_q \text{ and } a \neq 0 \text{ if } c = 1\}$. For $c = 1$, we recover the case of differential uniformity. When $_c\Delta_F = 1$, we call $F$ a perfect $c$-nonlinear (PcN) function, and when $_c\Delta_F = 2$, we call $F$ an almost perfect $c$-nonlinear (APcN) function. Note that for monomial functions $x \mapsto x^d$, the output differential $(c_1 F(X), F(X))$ is the same as the input differential $(F(c_2 X), F(X))$, where $c_1 = c_2^d$, which was the differential that Borisov et al. [3] exploited. Recently, the authors in [1] pointed out a connection between the $c$-differential uniformity (cDU) and combinatorial designs by showing that the graph of a PcN function is a difference set in a quasigroup. Difference sets give rise to symmetric designs, which are known to construct optimal self-complementary codes. Some types of designs also have applications in secret sharing and visual cryptography. Moreover, Pal and Stănică [20] in a very recent manuscript show that the $c$-differential uniformity, when $c = -1$, of an odd APN permutation $F$ (odd characteristic) equals its boomerang uniformity, and when $F$ is a non-permutation, the boomerang uniformity of $F$ is the maximum of the $(-1)$-DDT entries irrespective of the first row/column. The construction of functions, particularly permutations, with low $c$-differential uniformity is an interesting problem, and recent work has focused heavily in this direction. One can refer to [9,13,15,18,26,28,30] for the numerous functions with low $c$-differential uniformity investigated to date. There are very few known (non-trivial, that is, nonlinear) classes of PcN and APcN functions over a finite field of even characteristic; see, for example, [8,10,12,24].
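For concreteness, the $c$-DDT and the $c$-differential uniformity can be computed by brute force. The sketch below is a Python illustration restricted to prime fields $\mathbb{F}_p$ so that the arithmetic stays elementary (extension fields would require implementing field multiplication); with $c = 1$ it recovers the classical differential uniformity.

```python
def c_differential_uniformity(F, p, c):
    """Largest c-DDT entry of F over the prime field F_p:
    max over (a, b) of #{x : F(x + a) - c*F(x) = b},
    where a != 0 is required only in the classical case c == 1."""
    best = 0
    for a in range(p):
        if c == 1 and a == 0:
            continue
        counts = [0] * p
        for x in range(p):
            b = (F((x + a) % p) - c * F(x)) % p
            counts[b] += 1
        best = max(best, max(counts))
    return best

# Example: x -> x^2 is perfect nonlinear (PN) over F_p for odd p, since its
# derivative 2aX + a^2 is linear in X with a nonzero coefficient for a != 0.
p = 7
square = lambda x: x * x % p
print(c_differential_uniformity(square, p, c=1))  # 1, i.e. PN
print(c_differential_uniformity(square, p, c=3))  # 2 for this choice of c
```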
In this paper, we compute the $c$-differential uniformity of some classes of permutation polynomials introduced in [16,25,29]. The methods we employ may be of independent interest; they use Walsh transform computations, Weil sums, and a detailed investigation of the involved equations via number-theoretic tools. The structure of the paper is as follows. In Section 2 we recall some results that are required in the subsequent sections. In Section 3 we compute the $c$-differential uniformity of four classes of permutation polynomials over finite fields of even characteristic. Further, Section 4 deals with the $c$-differential uniformity of one class of permutation polynomials over finite fields of characteristic three. Finally, in Section 5 we conclude the paper.
Preliminaries
In this section, we first review a definition and provide some lemmas to be used in subsequent sections. Throughout the paper, we use $\mathrm{Tr}^n_m$ to denote the (relative) trace function from $\mathbb{F}_{p^n}$ to $\mathbb{F}_{p^m}$, i.e., $\mathrm{Tr}^n_m(X) = \sum_{i=0}^{\frac{n}{m}-1} X^{p^{mi}}$, where $m$ and $n$ are positive integers with $m \mid n$. For $m = 1$, we use $\mathrm{Tr}$ to denote the absolute trace. Also, $v_p(n)$ denotes the largest nonnegative exponent $v$ such that $p^v$ divides $n$ (that is, the $p$-adic valuation).
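A sketch of the relative trace $\mathrm{Tr}^n_m$ on the toy field $\mathbb{F}_{2^4}$, reusing `gf_mul` and `Q` from the snippets above (ours, not the paper's code); for $p = 2$, raising to the $p^m$-th power is repeated squaring.

```python
def tr(x, n=4, m=1):
    """Tr^n_m(x) = sum of x^(2^(m*i)) for i = 0 .. n/m - 1 (requires m | n)."""
    assert n % m == 0
    acc, t = 0, x
    for _ in range(n // m):
        acc ^= t              # add the current conjugate
        for _ in range(m):
            t = gf_mul(t, t)  # Frobenius x -> x^2, applied m times
    return acc

# The absolute trace (m = 1) maps F_16 onto F_2 = {0, 1} and is balanced.
assert all(tr(x) in (0, 1) for x in range(Q))
assert sum(tr(x) for x in range(Q)) == Q // 2
print("trace checks passed")
```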
We first recall the definition of the Walsh transform of a function.
Definition 2.1 ([11]). For a function $f : \mathbb{F}_{p^n} \to \mathbb{F}_p$, the Walsh transform of $f$ at $v \in \mathbb{F}_{p^n}$ is defined as $W_f(v) = \sum_{X \in \mathbb{F}_{p^n}} \omega^{f(X) - \mathrm{Tr}(vX)}$, where $\omega = e^{\frac{2\pi i}{p}}$ is a complex primitive $p$th root of unity.
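For $p = 2$ we have $\omega = -1$, so the transform can be evaluated by brute force; a sketch for $f(X) = \mathrm{Tr}(X^3)$ on $\mathbb{F}_{2^4}$, reusing the helpers above (ours, for illustration only).

```python
def walsh(f, v):
    """Walsh transform of a Boolean-valued f at v over GF(2^4)."""
    return sum((-1) ** (f(x) ^ tr(gf_mul(v, x))) for x in range(Q))

f = lambda x: tr(gf_pow(x, 3))          # f(X) = Tr(X^3)
spectrum = [walsh(f, v) for v in range(Q)]
print(sorted(set(spectrum)))             # a plateaued spectrum for this Gold-type example
```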
We now present a lemma dealing with the solutions of a cubic equation over a finite field of even characteristic. We take the form from [29, Lemma 2.2], which is derived from the original paper of Williams [27].
Lemma 2.2 ([29, Lemma 2.2]). For a positive integer $n$ and $a \in \mathbb{F}_{2^n}^*$, the cubic equation (2) has three distinct solutions in $\mathbb{F}_{2^n}$ if and only if $p_n(a) = 0$, where the polynomial $p_n(X)$ is recursively defined as in [29, Lemma 2.2].

Next, we recall five classes of permutation polynomials for which we investigate the $c$-differential uniformity. For an element $\delta$ in a finite field $\mathbb{F}_{p^{2m}}$, we write $\bar{\delta} = \delta^{p^m}$.

Lemma 2.3 ([29, Theorem 3.1]). Let $m$ be a positive integer and let $p_m(X)$ be as defined in Lemma 2.2. Then the polynomial of [29, Theorem 3.1] is a permutation of $\mathbb{F}_{2^{2m}}$.

Lemma 2.4 ([29, Theorem 2.1]). For a positive integer $m \not\equiv 0 \pmod 3$ and any element $\delta \in \mathbb{F}_{2^{2m}}$, the polynomial of [29, Theorem 2.1] is a permutation of $\mathbb{F}_{2^{2m}}$.

Lemma 2.5 ([25, Theorem 3.1]). For a positive integer $m \not\equiv 0 \pmod 3$ and any element $\delta \in \mathbb{F}_{2^{2m}}$, the polynomial of [25, Theorem 3.1] is a permutation of $\mathbb{F}_{2^{2m}}$.

Lemma 2.6 ([16, Proposition 8]). For a positive integer $m$ and a fixed $\delta \in \mathbb{F}_{p^{3m}}$ with $\mathrm{Tr}^n_m(\delta) = 0$, where $n = 3m$, the polynomial of [16, Proposition 8] is a permutation of $\mathbb{F}_{p^{3m}}$.

Lemma 2.7 ([16, Proposition 10]). For a positive integer $m$ and a fixed $\delta$ as in [16, Proposition 10], the corresponding polynomial is a permutation polynomial.

We also recall some results providing the Walsh transform coefficients of certain functions, which will be required later in our results.
In these statements, $v_2$ denotes the 2-valuation, that is, the largest power of $2$ dividing the argument. Further, we also need a lemma given in [4], referred to below as Lemma 2.9, to evaluate $W_{f_u}(\alpha)$ for any $\alpha \in K$, where $f_u$ is a more general monomial trace function $f_u(X) = \mathrm{Tr}(uX^{e})$ with $u \in \mathbb{F}_{2^{d_1}}$; we refer to [4] for the precise case distinctions.

The following lemma can be gleaned from the proof of [11, Proposition 2].

Lemma 2.10. Let $m$ be a positive integer and $n = 2m$. Also, let $a_i \in \mathbb{F}_{p^n}$ ($i = 0, \ldots, m$) for an odd prime $p$. Then the absolute square of the Walsh transform coefficient of the function $f(X) = \mathrm{Tr}\big(\sum_{i=0}^{m} a_i X^{p^i+1}\big)$ equals either $0$ or $p^{n+\ell}$, where $\ell$ is the dimension of the kernel of the linearized polynomial $L(X) = \sum_{i=0}^{m} \big(a_i X^{p^i} + (a_i X)^{p^{n-i}}\big)$.

For our first theorem, we will use the following lemma, which we now prove. The result for finite fields of odd characteristic is already contained in Lemma 2.10 above; we show that it also holds in the case $p = 2$, and we include its proof here for completeness.
Lemma 2.11. Let $m$ be a positive integer and $n = 2m$. Also, let $a_i \in \mathbb{F}_{2^n}$ ($i = 0, \ldots, m$). Then the square of the Walsh transform coefficient of the function $f(X) = \mathrm{Tr}\big(\sum_{i=0}^{m} a_i X^{2^i+1}\big)$ equals either $0$ or $2^{n+\ell}$, where $\ell$ is the dimension of the kernel of the linearized polynomial $L(X) = \sum_{i=0}^{m} \big(a_i X^{2^i} + (a_i X)^{2^{n-i}}\big)$.

Proof. We can write the square of the Walsh transform coefficient of the function $f$ as
$$W_f(w)^2 = \sum_{Y, Z \in \mathbb{F}_{2^n}} (-1)^{f(Y) + f(Z) + \mathrm{Tr}(w(Y+Z))} = \sum_{Y, Z \in \mathbb{F}_{2^n}} (-1)^{f(Y) + f(Y+Z) + \mathrm{Tr}(wZ)}.$$
We first simplify $f(Y) + f(Z) + f(Y+Z)$ as follows:
$$f(Y) + f(Z) + f(Y+Z) = \mathrm{Tr}(Y L(Z)),$$
where $L(Z) = \sum_{i=0}^{m} \big(a_i Z^{2^i} + (a_i Z)^{2^{n-i}}\big)$ is the linearized polynomial over $\mathbb{F}_{2^n}$ with kernel $\mathrm{Ker}(L)$ of dimension $\ell$. This will give us
$$W_f(w)^2 = 2^n \sum_{Z \in \mathrm{Ker}(L)} (-1)^{f(Z) + \mathrm{Tr}(wZ)}.$$
The above equality holds because for those $Z \notin \mathrm{Ker}(L)$, the map $Y \mapsto \mathrm{Tr}(Y L(Z))$ is balanced over $\mathbb{F}_{2^n}$, making the inner sum in the square of the Walsh transform zero. To proceed further, we consider $\mathbb{F}_{2^n}$ as an $n$-dimensional vector space over $\mathbb{F}_2$ and hence $L(Z)$ as a linear transformation of it. Since $f(Z_1 + Z_2) + f(Z_1) + f(Z_2) = \mathrm{Tr}(Z_1 L(Z_2)) = 0$ for $Z_1, Z_2 \in \mathrm{Ker}(L)$, we get that $f(Z) + \mathrm{Tr}(wZ)$ is linear on the kernel of $L$. This implies that either $f(Z) + \mathrm{Tr}(wZ)$ is identically zero on $\mathrm{Ker}(L)$, in which case $W_f(w)^2 = 2^{n+\ell}$, or $f(Z) + \mathrm{Tr}(wZ)$ is an onto (hence balanced) map from $\mathrm{Ker}(L)$ to $\mathbb{F}_2$, in which case $W_f(w)^2 = 0$. Hence, the claim is shown. $\square$
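A toy numerical check of Lemma 2.11 (our own verification sketch, reusing `gf_mul`, `walsh`, and `f` from the snippets above): for $f(X) = \mathrm{Tr}(X^3)$ we have $a_1 = 1$ and all other $a_i = 0$, so $L(X) = X^2 + X^{2^3}$, and every squared Walsh coefficient should equal $0$ or $2^{n+\ell}$.

```python
def L_lin(x):
    x2 = gf_mul(x, x)
    x4 = gf_mul(x2, x2)
    x8 = gf_mul(x4, x4)
    return x2 ^ x8                        # L(X) = a_1 X^2 + (a_1 X)^(2^(n-1)), a_1 = 1

kernel = [x for x in range(Q) if L_lin(x) == 0]
ell = len(kernel).bit_length() - 1        # |Ker(L)| = 2^ell for a linearized map
squares = {walsh(f, w) ** 2 for w in range(Q)}
assert squares <= {0, 2 ** (4 + ell)}     # Lemma 2.11 with n = 4
print(ell, sorted(squares))               # expect ell = 2 and squares within {0, 64}
```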
Next, we recall the general technique given in [23], which expresses the number of solutions of a given equation over finite fields in terms of Weil sums; the authors of [23] used this technique to compute the $c$-DDT entries. Let $\chi_1 : \mathbb{F}_q \to \mathbb{C}$ be the canonical additive character of the additive group of $\mathbb{F}_q$, defined by $\chi_1(X) = \omega^{\mathrm{Tr}(X)}$, where $\omega = e^{\frac{2\pi i}{p}}$. One can easily observe (see, for instance, [22]) that the number of solutions $(X_1, X_2, \ldots, X_n) \in \mathbb{F}_q^n$ of the equation $F(X_1, X_2, \ldots, X_n) = b$, denoted by $N(b)$, is given by
$$N(b) = \frac{1}{q} \sum_{X_1, \ldots, X_n \in \mathbb{F}_q} \sum_{u \in \mathbb{F}_q} \chi_1\big(u\,(F(X_1, \ldots, X_n) - b)\big). \tag{2.1}$$
We will use this expression to calculate the $c$-differential uniformity of a few permutations over finite fields in the forthcoming sections.
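The counting identity can be sanity-checked numerically; a sketch over $\mathbb{F}_{2^4}$ reusing the helpers above (ours, for illustration), where $\chi_1(X) = (-1)^{\mathrm{Tr}(X)}$ and the inner sum over $u$ collapses to $q$ exactly when $F(X) = b$.

```python
def N_charsum(F, b):
    """Number of solutions of F(x) = b via the character-sum formula (2.1)."""
    total = sum((-1) ** tr(gf_mul(u, F(x) ^ b)) for u in range(Q) for x in range(Q))
    return total // Q

F3 = lambda x: gf_pow(x, 3)
assert all(N_charsum(F3, b) == sum(F3(x) == b for x in range(Q)) for b in range(Q))
print("Weil-sum count matches brute force")
```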
Permutations over $\mathbb{F}_{2^n}$ with low $c$-differential uniformity
We first consider the $c$-differential uniformity of the permutation polynomial $F$ of Lemma 2.3, where $n = 2m$ and $\delta \in \mathbb{F}_{2^n}$. From Lemma 2.3, we know that $F$ is a permutation polynomial over $\mathbb{F}_{2^n}$. In the following theorem (Theorem 3.1), we give conditions on $\delta$ and $c$ for which $F$ turns out to be either a PcN or an APcN function.
Proof. Clearly, by expanding the given trinomial, one can easily simplify $F(X)$ and rewrite the resulting expression in an equivalent form; we then introduce the notations $T_0$ and $T_1$ for the two parts of the character sum arising from Equation (2.1).

Case 1. Let $c \in \mathbb{F}_{2^n}$ and $\delta \in \mathbb{F}_{2^m}$. To compute $T_0$ and $T_1$, we first expand them; splitting the resulting sum depending on whether $\mathrm{Tr}^{2m}_m((1+c)\beta)$ is $0$ or not, we obtain two inner sums $S_0$ and $S_1$. We first consider the sum $S_0$: to compute it, we need to find the number of solutions of an associated equation, whose relevant coefficient is clearly nonzero for some choice of the parameters. Next, we consider $S_1$, where $W_{f_u}(w)$ is the Walsh transform of the corresponding quadratic function $f_u$ at $w$. We split our analysis into three subcases depending on the value of $m$.

Subcase 1(a). Here we are led to $\mathrm{Tr}^{2m}_m((1+c)\beta) = 0$, which is obviously not true, and hence we are done.

Subcase 1(b). Let $m \equiv 0 \pmod 4$.
Thus, by using Lemma 2.9, it suffices to show that $L_1 \circ L_2\big(\frac{w}{u^2}\big) \neq 0$, for then $W_G(w) = 0$, therefore $S_1 = 0$, and the claim follows. Indeed, if $L_1 \circ L_2\big(\frac{w}{u^2}\big) = 0$, a direct computation gives $\mathrm{Tr}^{2m}_m((1+c)\beta) = 0$, a contradiction. This analysis shows the claim in Case 1.

Case 2. Let $\delta \notin \mathbb{F}_{2^m}$ and $p_m\big((\delta + \bar{\delta})^{-1}\big) \neq 0$. Expanding $T_0$ and $T_1$ accordingly, we can rewrite Equation (3.2) by splitting the analysis into two sums $S_0$ and $S_1$, defined depending on whether $\mathrm{Tr}^{2m}_m((1+c)\beta) = 0$ or not, respectively. We first compute $S_0$, which reduces to finding the number of solutions $\beta \in \mathbb{F}_{2^n}$ of a certain equation or, equivalently, the number of solutions $\beta \in \mathbb{F}^*_{2^m}$ of a reduced equation. Multiplying this equation by $\mathrm{Tr}^{2m}_m(\delta)$ and substituting $Z = \mathrm{Tr}^{2m}_m(\delta^2)(1+c)\beta$, and then replacing $Z$ by $\mathrm{Tr}^{2m}_m(\delta^{2^{m-1}})Z$, we find that three distinct solutions would force the equation $Z^3 + Z + \mathrm{Tr}^{2m}_m(\delta)^{-1} = 0$ to have three distinct solutions in $\mathbb{F}_{2^n}$, implying that $p_m\big((\delta + \bar{\delta})^{-1}\big) = 0$, which contradicts the given assumption. Hence $S_0 = 2^n$.

Let now $c \in \mathbb{F}_{2^n} \setminus \mathbb{F}_{2^m}$. Using a similar technique as above, we are led to finding the number of solutions $\beta \in \mathbb{F}_{2^n}$ of an equation that we reduce by a substitution; combining the coefficients of $\beta^i$ for $i = 1, 2, 4$, we rewrite it in the simplified form (3.5). Notice that for $a \in \mathbb{F}_{2^m}$, Equation (3.5) reduces further: if the reduced equation had three solutions in $\mathbb{F}_{2^m}$, then $Z^3 + Z + \frac{1}{\mathrm{Tr}^{2m}_m(\delta)} = 0$ would also have three distinct solutions, which is not true since $p_m\big((\delta + \bar{\delta})^{-1}\big) \neq 0$. Also, $\mathrm{Tr}^m_1\big(\mathrm{Tr}^{2m}_m(\delta) + 1\big) = 1$; thus for $a \in \mathbb{F}_{2^m}$ we have no solution of Equation (3.5) in $\mathbb{F}_{2^m}$. If $a \notin \mathbb{F}_{2^m}$, then we may have at most three solutions of Equation (3.5). Thus we have $S_0 \leq 2^{n+2}$.

We now consider $S_1$, where $W_G(w)$ is the Walsh transform of the trace of the relevant function $G$ at $w$. By Lemma 2.11, the square of the Walsh transform coefficient of $G$ is either $0$ or $2^{n+\ell}$, where $\ell$ is the dimension of the kernel of the associated linearized polynomial $L$. It is easy to see that $\mathbb{F}_{2^m} \subseteq \mathrm{Ker}(L)$. Thus, if we can show that $G(X) + \mathrm{Tr}(wX)$ is not identically zero on $\mathbb{F}_{2^m}$, then $S_1 = 0$. Observe that for $X \in \mathbb{F}_{2^m}$, $G(X) + \mathrm{Tr}(wX)$ reduces to a polynomial over $\mathbb{F}_{2^m}$. If $m = 1$, then $G(X) + \mathrm{Tr}(wX) = 0$ would imply that $\mathrm{Tr}^{2m}_m(\beta(1+c)) = 0$, which is not possible. For $m \geq 2$, the degree of $G(X) + \mathrm{Tr}(wX)$ is strictly less than $2^m - 1$, so it cannot vanish identically on $\mathbb{F}_{2^m}$. Hence the claim. $\square$
Next, we discuss the $c$-differential uniformity of the permutation polynomial of Lemma 2.4 for some fixed values of $c$ and $\delta$.
Theorem 3.2. Let $F$ be the permutation polynomial of Lemma 2.4, where $n = 2m$ and $m \not\equiv 0 \pmod 3$. Then $F$ is PcN for all $c \in \mathbb{F}_{2^m} \setminus \{1\}$ and $\delta \in \mathbb{F}_{2^n}$.

Proof. We know that $F(X)$ is a permutation polynomial from Lemma 2.4, and one can easily simplify $F(X)$. Now, by using Equation (2.1), the number of solutions $X \in \mathbb{F}_{2^n}$ of the relevant equation can be written in terms of two sums $T_0$ and $T_1$, which we simplify further. Splitting the resulting sum depending on whether $\mathrm{Tr}^{2m}_m(\beta)$ is $0$ or not, we obtain two inner sums $S_0$ and $S_1$. We first consider the sum $S_0$. The last identity in its evaluation follows by analyzing the number of solutions $\beta$ of Equation (3.7): from the recursive definition, the polynomial $p_m(X)$ has an odd number of terms if $m \not\equiv 0 \pmod 3$, each term being a monomial in $X$, so the relevant value of $p_m$ is nonzero; hence, by Lemma 2.2, the corresponding cubic cannot have three distinct solutions in $\mathbb{F}_{2^m}$. Also, since $\mathrm{Tr}^m_1(1 + 1) = 0$, it cannot have a unique solution. Thus, Equation (3.7) has only the solution $\beta = 0$ in $\mathbb{F}_{2^m}$, which gives $S_0 = 2^n$. Next, we consider $S_1$, where $W_{f_u}(w)$ is the Walsh transform of the function $f_u(X) = \mathrm{Tr}\big(u\big(X^{2^1+1} + X^{2^{m+1}+1}\big)\big)$ at $w$. We split our analysis into two cases depending on the value of $m$. As $u = \eta^3$, we have $\eta \neq 0$, which would give $\mathrm{Tr}^{2m}_m(\beta) = 0$; this is obviously not true, and hence we are done.
Thus, by using Lemma 2.9, it suffices to show that $L_1 \circ L_2\big(\frac{w}{u^2}\big) \neq 0$, for then $W_G(w) = 0$, which gives $S_1 = 0$, and we are done with the claim. Indeed, if $L_1 \circ L_2\big(\frac{w}{u^2}\big) = 0$, a direct computation yields $\mathrm{Tr}^{2m}_m(\beta) = 0$, a contradiction to the assumption that $\mathrm{Tr}^{2m}_m(\beta) \neq 0$.

Subcase 1(c). Let $m \equiv 2 \pmod 4$; then $d_1 = m$ and $d_2 = 4$. Again from Lemma 2.9, if we show that $\frac{w}{u^2} \notin S_{d_1} \cap S_{d_2}$, then we are done. In this subcase, if $\frac{w}{u^2} \in S_{d_1} \cap S_{d_2}$, then $\mathrm{Tr}^{2m}_m\big(\frac{w}{u^2}\big) = 0$, which implies that $\mathrm{Tr}^{2m}_m(\beta) = 0$, which is not possible. Hence $W_G(w) = 0$, giving us $S_1 = 0$. This shows the claim that $F$ is PcN for $c \in \mathbb{F}_{2^m} \setminus \{1\}$, and the proof is done. $\square$
Theorem 3.3. Let $F$ be the permutation polynomial of Lemma 2.5, where $n = 2m$ and $m \not\equiv 0 \pmod 3$. Then $F$ is PcN for all $c \in \mathbb{F}_{2^m} \setminus \{1\}$ and $\delta \in \mathbb{F}_{2^n}$.

Proof. Notice that $F(X)$ is a permutation polynomial over $\mathbb{F}_{2^n}$ by Lemma 2.5. The proof follows along similar lines as that of Theorem 3.2. $\square$
We next deal with the permutation polynomial $F$ of Lemma 2.6, with $n = 3m$.

Proof. First, we know that $F$ is a permutation polynomial via Lemma 2.6. Next, by expanding the trinomial, one can easily simplify $F(X)$.
Since $\mathrm{Tr}^{3m}_m(\delta) = 0$, we can rewrite the resulting expression accordingly. Recall that, for any $(a, b) \in \mathbb{F}_{2^n} \times \mathbb{F}_{2^n}$, the $c$-DDT entry ${}_c\Delta_F(a, b)$ is given by the number of solutions $X \in \mathbb{F}_{2^n}$ of the equation $F(X + a) + cF(X) = b$. From Equation (2.1), the number of solutions $X \in \mathbb{F}_{2^n}$ of this equation can be written as a character sum, which we split into $T_0$ and $T_1$. Let $c \in \mathbb{F}_{2^m} \setminus \{1\}$. To compute $T_0$ and $T_1$, we simplify them further. As $c \in \mathbb{F}_{2^m} \setminus \{1\}$, we can split the sum depending on whether $\mathrm{Tr}^{3m}_m((1 + c)\beta)$ is $0$ or not or, equivalently, whether $\mathrm{Tr}^{3m}_m(\beta)$ is $0$ or not; the evaluation of the resulting inner sums proceeds as in the previous proofs.

Permutations over $\mathbb{F}_{3^n}$ with low $c$-differential uniformity

We finally consider the class of permutation polynomials of Lemma 2.7 over finite fields of characteristic three, with $n = 2m$.

Proof. Clearly, after simplifying $F(X)$, where $\omega = e^{2\pi i/3}$, we can rewrite the solution count from Equation (2.1) equivalently in terms of $T_0$ and $T_1$.

Case 1. Let $c \in \mathbb{F}_{3^m} \setminus \{1\}$ and $\delta \in \mathbb{F}_{3^n}$. To compute $T_0$ and $T_1$, we first expand them; splitting the resulting sum depending on whether $\mathrm{Tr}^{2m}_m(\beta)$ is $0$ or not, we obtain two inner sums $S_0$ and $S_1$. To compute $S_0$, we need the number of solutions $\beta \in \mathbb{F}_{3^n}$ of an associated equation. It is clear that when $\mathrm{Tr}^{2m}_m(\delta) = 0$, this equation has only the solution $\beta = 0$. Let us then assume that $\mathrm{Tr}^{2m}_m(\delta) \neq 0$; raising the equation to the cubic power, with $\nu \in \mathbb{F}^*_{3^m}$, we reduce it further. For $S_1$, note that $W_G(-v)$ is the Walsh transform of the trace of the function $G : X \mapsto u_1 X^4 - u_2 X^{3^{m-1}+1}$ at $-v$. It is easy to see that $\mathbb{F}_{3^m} \subseteq \mathrm{Ker}(L)$. Thus, if we can show that $G(X) + \mathrm{Tr}(vX)$ is not identically zero on $\mathbb{F}_{3^m}$, then $S_1 = 0$. For $X \in \mathbb{F}_{3^m}$, the polynomial $G(X) + \mathrm{Tr}(vX)$ reduces to a polynomial over $\mathbb{F}_{3^m}$ that is not identically zero. By following similar arguments as in Case 1 above, one can show that $S_1 = 0$ in the remaining case as well. This completes the proof. $\square$
Conclusions
In this paper we show that some permutation polynomials are PcN over finite fields of even characteristic and even dimension $n = 2m$, for $c \neq 1$ lying in the subfield of half dimension. This adds to the small list of known (non-trivial) PcN functions. We also find a class of permutation polynomials over finite fields of characteristic $3$, of even dimension $n = 2m$, which is PcN for $c \in \mathbb{F}_{3^m} \setminus \{1\}$ and has $c$-differential uniformity $3$ for all $c \notin \mathbb{F}_{3^m}$.
Transcript-Targeted Therapy Based on RNA Interference and Antisense Oligonucleotides: Current Applications and Novel Molecular Targets
The development of novel targeted therapies based on the use of RNA interference (RNAi) and antisense oligonucleotides (ASOs) is growing exponentially, offering the chance to treat genetic diseases and cancer by selectively hitting targeted RNAs in a sequence-dependent manner. Multiple opportunities are taking shape: removing a defective protein by silencing its RNA (e.g., Inclisiran targets the mRNA of the protein PCSK9, permitting a longer half-life of LDL receptors in heterozygous familial hypercholesteremia), arresting mRNA translation (e.g., Fomivirsen, which binds UL123 RNA and blocks its translation into the IE2 protein in CMV retinitis), or restoring a modified functional protein (e.g., Eteplirsen, able to restore a functional, shorter dystrophin by skipping exon 51 in Duchenne muscular dystrophy) or a partially functional protein. In this last case, the use of ASOs makes it possible to modify the expression of specific proteins by modulating the splicing of specific pre-mRNAs (e.g., Nusinersen acts on the splicing of exon 7, normally skipped, in SMN2 mRNA; it is used for spinal muscular atrophy) or by downregulating transcript levels (e.g., Inotersen acts on transthyretin mRNA to reduce its expression; it is prescribed for the treatment of hereditary transthyretin amyloidosis) in order to restore the biochemical/physiological condition and ameliorate the quality of life. In the era of precision medicine, an experimental splice-modulating antisense oligonucleotide, Milasen, was recently designed and used to treat an 8-year-old girl affected by a rare, fatal, progressive form of neurodegenerative disease leading to death during adolescence. In this review, we summarize the main transcript-targeted therapeutic drugs approved to date by the principal government regulatory agencies for the treatment of genetic diseases, as well as recent clinical trials aimed at the treatment of cancer. Their mechanisms of action, chemical structures, administration, and biomedical performance are predominantly discussed.
Introduction
"Transcript-targeted therapy" can be defined as any molecular treatment able to modify transcriptionally or post-transcriptionally the levels of coding and noncoding RNAs in order to obtain a therapeutic advantage.
The more common tools for transcriptional regulatory therapy are based on RNA interference (RNAi) or antisense oligonucleotides (ASOs), and their basic mechanisms are summarized in Figures 1-5. Exogenous messenger RNAs (mRNAs) as therapeutics and the genome editing tools, primarily based on the use of clustered regularly interspaced short palindromic repeats and CRISPR-associated protein 9 (CRISPR-Cas9), deserve to be analyzed and discussed in another review, given the large amount of data collected about them in the last decade. In this review, we describe the molecular targets and the pharmaceutical formulations of ASO- and siRNA-based therapeutics that have been approved for human clinical use or have been investigated in the last 6 years in human clinical trials.
Figure 5. Inclisiran recognizes PCSK9 mRNA to promote its degradation and to reduce protein translation; in this way, Inclisiran prolongs the half-life of LDL receptors that can continue to capture LDL-cholesterol, determining its blood reduction. A detailed description is reported in the text.
siRNAs and ASOs
"RNA interference or RNA silencing (RNAi)" is a process in which double-stranded RNA (dsRNA) is processed into short interfering RNAs (siRNAs) that downregulate gene expression by degradation of targeted mRNA molecules ( Figure 1). The history of RNAi research began in the early 1990s when a number of scientists, through their studies on plants and fungi, independently observed that RNA molecules in some particular conditions could inhibit gene expression at the post-transcriptional level [1,2]. These phenomena, initially known as "Post-Transcriptional Gene Silencing" (PTGS), quelling, and cosuppression, were not fully understood in their mechanism until 1998 when Andrew Fire and Craig Mello demonstrated that double-stranded RNAs (dsRNA) were the causative agents [3]. This discovery made possible the unification of the previous observations in a single-cellular mechanism that they called RNA interference (RNAi, Nobel Prize in Physiology or Medicine, 2006). In 2001, Elbashir and Caplen independently [4,5] found out that 21-22 nucleotides could induce RNAi in mammalian cells without triggering nonspecific interferon responses, normally induced by >30 nt dsRNA, and soon these small interfering RNAs (siRNAs) were identified as a therapeutic tool for the treatment of numerous diseases such as cancer, viral infections, and neurodegenerative diseases. In the cytoplasm, siRNAs are loaded into the "RNA-induced silencing complex" (RISC); generally, only one strand (guide strand) is incorporated into the RISC, the other strand (passenger strand) is normally discarded and degraded. After loading into the RISC, the guide strand targets the transcript with a total complementarity and triggers an endonucleolytic cleavage (mediated by a RISC-associated protein called "Argonaute-2" or "Ago2"), which induces its degradation and inhibits the protein translation [6]. A simplified schematic draw is reported in Figure 1. Two types of small RNA molecules are inductors of the RNAi pathway: endogenous "microRNA" (miRNA), produced by the own cellular genome, and exogenous "small interfering RNA" (siRNA), derived from extracellular genomes such as virus or artificially introduced for experimental or therapeutically purposes. Unlike siRNAs, miRNAs recognize mRNA targets with an imperfect complementarity and, also in this case, typically silence genes by repression of translation [7].
"Antisense Oligonucleotides" (ASOs) are short-length single sequences of deoxynucleotides, 12-28 bases, that are synthesized to be complementary to a sequence of mRNA or pre-mRNA to generate a DNA/RNA heteroduplex; in this way, they can regulate the target expression. For the first time, Zamecnik and Stephenson used ASO to inhibit Rous sarcoma virus (RSV) cycle [8]. Generally, the action of ASO can be mediated through different mechanisms: -A "RNAse-dependent mechanism" that takes place in the nucleus and is mediated by the enzyme "RNAse H", able to degrade the RNA strand of RNA/DNA duplex ( Figure 2). -"RNAse-independent mechanisms" that take place in the cytoplasm; in this case, the ASO can act through different paths: a. One is dependent from the interaction between ASO and transcript, thus preventing RNA loading on the ribosome and arresting mRNA translation Another mechanism is operating through the splicing process [10]-that is, modifying the open reading frame. This mechanism could have wide applications such as in ataxia telangiectasia or Duchenne muscular dystrophy ( Figure 4) [11].
It is evident that ASOs need high affinity for and stability with their target transcripts, and they must resist nuclease activity; for this reason, different chemical modifications of ASOs have been introduced to preserve sequence integrity and to allow them to achieve their effect. On the basis of their chemical modifications, they are classified into first, second, and third generations (Table 1). "First-generation ASOs" are characterized by a modification of the phosphate backbone, in which a nonbridging oxygen atom is substituted by a sulfur atom, and are called "phosphorothioate ASOs" (PS-ASOs). The first developed PS-ASO is "Fomivirsen", used in the treatment of Cytomegalovirus (CMV) retinitis [12-14]. The phosphorothioate sequence of twenty-one nucleotides (5′-GCGTTTGCTCTTCTTCTTGCG-3′) makes the ASO more stable and allows better uptake without altering its RNAse-dependent activity or its binding to the mRNA [15]. The PS-ASO Fomivirsen binds to UL123 transcripts and inhibits their translation into the IE2 protein (Figure 3). "Second-generation ASOs" carry a modification at the 2′ position of the ribose, such as 2′-O-methyl (2′-OMe) and 2′-O-methoxyethyl (2′-MOE). These modified nucleotides cannot induce RNAse H activity, so it is possible to create a structure with central PS-oligodeoxynucleotides flanked by these chemical groups; this structure is also called a "gapmer". The resulting molecule is an ASO with considerable affinity and resistance to nucleases. Another modification, 2′-O-methylcarbamoylethyl (2′-O-MCE), yields ASOs with similar RNAse-inducing activity but less hepatotoxicity [16]. An example of a second-generation ASO is "Inotersen", designed and approved for the treatment of Hereditary Transthyretin Amyloidosis (hATTR; Figure 2, Table 1; see also Section 2).
"Third-generation ASOs" are characterized by chemical modifications to the monosaccharide. One group of third-generation ASOs are "phosphorodiamidate morpholino oligonucleotides" (PMOs), containing a morpholino ring and nonionic linkages [17]. The other ASOs of this generation are "peptide nucleic acids" (PNAs). Both PNAs and PMOs are characterized by a better stability and a RNAse-H-independent mechanism. PNAs are oligodeoxynucleotide analogs in which the deoxyribose phosphodiester backbone is replaced by a pseudo-peptide polyamide backbone [18]. They can affect gene expression inhibiting transcription by binding to the DNA or translation by binding to the mRNA. To date, no PNAs have been approved for the treatment of genetic diseases and cancer except for the detection of SARS-CoV-2 nucleic acid in biological samples (FDA approval May 2021, https://www.fda.gov/, accessed on 30 April 2022).
Another important issue is the delivery of ASOs to their targets [19,20] and the mechanisms of cellular recognition and internalization [21]. Intraocular, intravenous, intrathecal, and subcutaneous administrations of ASOs are adopted to reach the target cellular districts, in association with various delivery systems such as conjugation with cell-penetrating peptides (CPPs) [18] or the use of liposomal structures [20]. Indeed, many receptors expressed on the cell membrane can mediate the uptake of ASOs [21]. Moreover, studies of ASO toxicity are important in order to estimate the balance between therapeutic and toxic effects. According to the pharmacokinetic profile, the liver and kidney are the main tissues in which high levels of ASOs are observed; in particular, the kidney carries out a major part of ASO elimination. Thus, the main adverse drug reactions (ADRs) involve nephrotoxicity and hepatotoxicity. Other ADRs are represented by mild hyperglycemia, autoimmune reactions, activation of the complement system, and hypotension [22,23].
In 1998, the FDA approved the first ASO drug, called "fomivirsen" (trade name Vitravene), followed by others such as "mipomersen", "eteplirsen", and "nusinersen", which were authorized for clinical use (Table 1).
Transcript-Targeted Therapies for Genetic Diseases
The first transcript-targeted therapies authorized for use in clinical practice are directed against genetic diseases. These therapies, based on RNAi or ASOs, can be divided into those acting through regulation of transcript levels (1) and those modifying the splicing of specific pre-RNAs (2).
(A) Hereditary Transthyretin Amyloidosis (hATTR) is a rare autosomal dominant, multisystemic, progressive disease caused by more than 120 mutations in the gene encoding transthyretin (TTR), located on chromosome 18q12.1. The protein TTR binds and transports thyroxine and retinol. The most common mutation is a single-nucleotide substitution that causes the replacement of Valine at position 30 with Methionine (ATTR Val30Met) (Figure 2). The aggregation of altered TTR forms amyloid fibrils and leads to the formation of amyloid plaques in numerous organs such as the peripheral nerves, heart, kidneys, and gastrointestinal tract. The main clinical manifestations are polyneuropathy, characterized by sensorimotor and autonomic disturbances, and cardiomyopathy, which mainly presents with arrhythmias and heart failure [25,26]. hATTR is a life-threatening disease with a median survival of between 5 and 15 years from diagnosis. The poor prognosis and short life expectancy associated with this disease are in large part due to the limited effectiveness of the current treatment options (liver transplantation and transthyretin stabilization with tafamidis or diflunisal) in keeping disease progression under control [27].
An alternative therapeutic strategy is the reduction of the circulating level of TTR, thus decreasing the amount of protein that can form amyloid fibrils. Two drugs have been designed on the basis of this mechanism of action: "inotersen", an ASO-based drug, and "patisiran", an RNAi-based drug. These drugs are useful alternatives to classical treatments such as liver transplantation and TTR stabilizers.
"Inotersen" (trade name Tegsedi produced by Akcea Therapeutics), a second-generation ASO containing a 2 -O-methoxyethyl modification, is designed to target TTR mRNA to inhibit hepatic TTR production and, consequently, amyloid fibrils and plaques (Table 1, Figure 2). In a clinical trial (NEURO-TTR ClinicalTrials.gov number: NCT01737398, https://clinicaltrials.gov/ct2/show/NCT01737398, accessed on 30 April 2022), inotersen was administered by subcutaneous injections, three times a week for the first week followed by once a week administration for 64 weeks. Patients that received inotersen obtained an improvement of their life's quality. However, they showed different ADRs such as glomerulonephritis, thrombocytopenia, and death [28].
APOLLO, a phase III, double-blind, placebo-controlled clinical trial of patisiran, began in December 2013. This trial recruited 225 patients with hATTR aggravated by polyneuropathy: 77 and 148 patients were assigned to the placebo and patisiran arms, respectively. All of them, after receiving premedication to reduce the risk of infusion-related reactions, were treated with patisiran (0.3 mg per kg) or placebo intravenously once every 3 weeks for 18 months. The trial demonstrated that the patisiran group achieved a >70% reduction in transthyretin from baseline; 56% of patients improved their mNIS+7 (modified Neuropathy Impairment Score +7) versus 4% in the placebo group, and 51% improved their quality of life (assessed with the Norfolk QOL-DN questionnaire) versus 10% in the placebo arm. Finally, the trial showed no risk of death associated with patisiran treatment and similar incidences of both severe and serious adverse events in the two study arms [30] (ClinicalTrials.gov Identifier: NCT01960348, https://clinicaltrials.gov/ct2/show/NCT01960348, accessed on 30 April 2022).
The efficacies of patisiran and its direct competitor, the ASO-based drug inotersen, were tested in two randomized, double-blind, controlled trials by Adams et al. (2018) [30] and Benson et al. (2018) [28]. These trials showed that patisiran achieves better results than inotersen both in terms of efficacy (serum levels of transthyretin reduced by 81% with patisiran versus 71% with inotersen) and safety (inotersen systematically causes thrombocytopenia). Despite these results, the efficacy of both drugs over periods longer than 18 months is still under investigation [31].
In June 2022, the FDA approved "vutrisiran" for the treatment of hATTR amyloidosis with polyneuropathy [32]. Vutrisiran (AMVUTTRA) is a chemically modified double-stranded siRNA that targets both mutant and wild-type TTR mRNA and is covalently linked to a tail containing three N-acetylgalactosamine (GalNAc) residues to enable delivery of the siRNA to hepatocytes, where it causes degradation of mutant and wild-type TTR transcripts through RNA interference. In this way, a reduction in serum TTR protein and in TTR protein deposits in tissues is achieved. Clinical studies support subcutaneous injection (25 mg) once every 3 months.
(B) Familial chylomicronemia syndrome (FCS) is a syndrome characterized by high levels of chylomicrons due to autosomal recessive mutations of the lipoprotein lipase (LPL) gene, located on chromosome 8p21.3, or the Apolipoprotein C2 (APOC2) gene, located on chromosome 19q13.32. LPL is an enzyme expressed on the cell membrane that hydrolyzes the triglycerides contained in VLDL and chylomicrons into glycerol and fatty acids; APOC2 is an apoprotein that acts as a cofactor of LPL. FCS can lead to complications such as lipemia retinalis, acute pancreatitis, xanthomas, and diabetes. To treat this syndrome, it is possible to inhibit the expression of APOC3, an inhibitor of LPL, in order to improve LPL activity [33]. "Volanesorsen" is the first drug based on a second-generation ASO, containing a 2′-O-methoxyethyl modification, that applies this mechanism: its target is APOC3 mRNA. A study in which 57 patients were treated with volanesorsen administered by subcutaneous injection showed the efficacy of this drug in reducing triglyceride levels and increasing HDL levels [34]. On the other hand, an obstacle is represented by the adverse drug reactions linked to the use of the drug, in particular thrombocytopenia. On 28 February 2019, the EMA (European Medicines Agency) approved the use of "Waylivra", the trade name of volanesorsen produced by Akcea Therapeutics Ireland Ltd., for the treatment of patients affected by FCS. The use of this drug is indicated in patients at high risk of pancreatitis who show a low response to the other classical drugs and diets.
(C) Delayed graft function (DGF) is one of the most serious manifestations of acute kidney injury (AKI) that occurs after kidney transplantation. Although there is no unique definition of this condition, in 69% of studies (reviewed between 1984 and 2007) it is defined as the use of dialysis within 7 days of the transplant [35]. Pathogenesis of DGF, which is not yet completely understood, is due to innate and adaptive immune response and, above all, acute ischemia-reperfusion injury (IRI) [36]. Recent research demonstrated the key role played by the increased expression of proapoptotic protein p53 in renal tubular cells as a result of DGF (and more generally of AKI) [37]. "Teprasiran" (QPI-1002), developed by Quark Pharmaceuticals, is a synthetic and chemically modified siRNA drug that temporarily downregulates the expression of proapoptotic protein p53, protecting kidneys from programmed cell death induced by IRI and preserving tissue and organ integrity ( Table 2). A phase III pivotal trial showed teprasiran efficacy in the treatment of DGF following kidney transplantation; for this reason, it was designated as an orphan drug for prophylaxis of DGF (ClinicalTrials.gov Identifier: NCT02610296; https://clinicaltrials.gov/ct2/show/ NCT02610296, accessed on 30 April 2022). Teprasiran has also achieved positive therapeu-tic results for prevention of AKI in high-risk patients undergoing cardiovascular surgery, as shown by a multicenter, double-blind, placebo-controlled phase II trial (ClinicalTrials.gov Identifier: NCT02610283; https://clinicaltrials.gov/ct2/show/NCT02610283, accessed on 30 April 2022). Recently, Thielmann et al. (2021) [38] reported that the incidence, severity, and interval of early AKI in high-risk patients undergoing cardiac surgery were significantly reduced after teprasiran treatment. A total of 1043 participants with a major adverse kidney event were enrolled in a randomized, double-blind, placebo-controlled, phase 3 study in order to evaluate the efficacy and safety of teprasiran for the prevention of major adverse kidney events in subjects at high risk for AKI (ClinicalTrials.gov Identifier: NCT03510897, https://clinicaltrials.gov/ct2/show/NCT03510897, accessed on 30 April 2022).
(D) Nonarteritic ischemic optic neuropathy (NAION) is a common optic neuropathy caused by infarction of the short posterior ciliary arteries that supply the anterior portion of the optic nerve head. This infarct event induces optic nerve axonal edema and optic disc compartment syndrome, leading to an acute, unilateral, painless vision loss. Although the pathogenesis of NAION is not yet fully understood in detail, it is presumed to be a multifactorial disease caused by a transient disturbance in the circulation of optic nerve head, probably due to generalized hypoperfusion, vasospasm, or thrombosis [39]. Ocular neuroprotection plays a key role in the treatment of NAION and, in particular, the preservation of retinal ganglion cells (RGC): it has been shown that optic nerve injury induces apoptosis of this cellular population through the activation of the proapoptotic protein Caspase-2 [40]. "QPI-1007", developed by Quark Pharmaceuticals, is a synthetic and chemically modified siRNA drug that inhibits the expression of Caspase-2 ( Table 2). QPI-1007 demonstrated therapeutic efficacy both in animal models of acute and chronic ocular neurodegeneration (inducing a significant neuroprotective effect) and in Phase I/II trials in patients with NAION, where it has been observed that a single intravitreal injection of QPI-1007 slows down or even blocks the visual deterioration that is characteristic of this disease (ClinicalTrials.gov Identifier: NCT01064505; https://clinicaltrials.gov/ct2/show/NCT01064505, accessed on 30 April 2022).
(E) Familial hypercholesteremia (FH) is an inherited disease in which blood LDL ("bad") cholesterol is over 190 mg/dL in adults. Untreated FH increases the risk of developing coronary artery disease and can lead to heart attacks. "Mipomersen" is a PS-ASO that specifically binds to Apo B-100 mRNA, blocking its translation. The drug is approved to treat homozygous FH and is administered by subcutaneous injection [41-43].
Recently, at the end of last year, the FDA approved Novartis' Leqvio® (Inclisiran), a synthetic, chemically modified, double-stranded siRNA able to lower cholesterol and keep it low with two doses a year. This adjuvant drug has been approved for patients affected by heterozygous familial hypercholesteremia and atherosclerotic cardiovascular disease who are treated with statin therapy and require further reduction of uncontrolled LDL-cholesterol levels. Three subcutaneous doses of Inclisiran (284 mg) at months 0, 3, and 6 are given by a healthcare professional [44]. The siRNA drug reaches the target hepatic organ through the circulation. The siRNA contains 32 ribonucleotides chemically modified with 2′-O-methyl-ribonucleotide (2′-O-methyl), 11 modified with 2′-fluoro-ribonucleotide (2′-fluoro), and one 2′-deoxy-ribonucleotide. In addition, two phosphorothioate groups are added to the 5′ end of the sense strand and to both the 5′ and 3′ ends of the antisense strand. In the liver, the siRNA drug, conjugated with triantennary N-acetylgalactosamine sugars, is rapidly internalized through the abundantly expressed membrane asialoglycoprotein receptors (ASGPR). In the hepatocyte, the antisense strand within the RISC complex binds to the complementary sequence in its target mRNA, that of the protein PCSK9, to promote mRNA degradation and to reduce its translation (Figure 5). PCSK9 is the protein that binds low-density lipoprotein cholesterol receptors (LDLR) to trigger their degradation by the proteasome; in this way, the reduced expression of PCSK9 minimizes the targeting of LDL receptors for degradation and permits a longer half-life of the receptors on the membrane of hepatocytes, available to bind and take in LDL-cholesterol (LDL/C), resulting in its blood reduction (Figure 5). The authors of [45] published results of the phase 1-2 trials of tofersen for SOD1 ALS. They reported that in adults with ALS due to SOD1 mutations, cerebrospinal fluid (CSF) SOD1 concentrations decreased at the highest concentration of tofersen administered intrathecally over a period of 12 weeks. CSF pleocytosis occurred in some participants receiving tofersen, and lumbar-puncture-related adverse events were observed in most participants.

(2) Splicing modification of specific pre-RNAs by second- and third-generation modified ASOs: Nusinersen, Eteplirsen, and Milasen.
(A) Spinal muscular atrophy (SMA) is a disease characterized by the loss of function of lower motor neurons and can be classified into four types according to the age of clinical onset: type 1 is the most severe form of SMA; it occurs before the age of 6 months and is associated with low life expectancy and the essential use of respiratory support; type 2 occurs between the ages of 6 and 18 months; type 3 is divided into two forms, type 3a, which occurs before 3 years of age, and type 3b, which occurs after 3 years; type 4 occurs after 18 years and represents the adult form [46]. The pathogenesis of SMA is due to a mutation or deletion in the SMN1 gene, which encodes the Survival Motor Neuron (SMN) protein, crucial for the survival of motor neurons. The novel therapy for SMA focuses on another gene, the SMN2 gene, which physiologically encodes mainly a truncated SMN protein and, on a smaller scale, a functional SMN protein. The difference between the two types of protein is determined by the splicing of exon 7, which is skipped in SMN2 mRNA. Indeed, the regulation of SMN2 mRNA splicing represents a therapeutic target because it is possible to increase the synthesis of functional SMN protein by modifying the splicing process. "Nusinersen" (trade name Spinraza), produced by Biogen, is a second-generation ASO with a 2′-O-methoxyethyl modification; it acts by regulating the splicing of the SMN2 gene, and its target is an intronic splicing silencer whose activity is blocked by nusinersen. In this way, the mature mRNA includes exon 7, allowing the production of a functional SMN protein [47]. Nusinersen was tested on a total of 149 infants, administered by intrathecal injection on days 1, 15, 29, and 64, with further maintenance doses on days 183 and 302. In type 1 SMA, the drug induces an improvement in neurological function and life expectancy, and a higher probability of survival without respiratory support. Finally, the use of nusinersen cannot be considered curative, but it can improve the patient's quality of life [48].
(B) Duchenne muscular dystrophy (DMD) is a primary myopathy linked to the X chromosome, in particular to the DMD gene localized in region Xp21, which codes for an important muscle protein called "dystrophin". Males are generally affected, and females are asymptomatic carriers. At birth, patients do not show any motor deficits, only high levels of creatine kinase. At the age of 2-3 years, muscular deficits such as muscle weakness and difficulty jumping and running appear. At about 10 years of age, patients can no longer walk without crutches, and by 12 years of age they need a wheelchair. The DMD gene is composed of 79 exons, and there are various mutations of this gene: the most common are deletions, but there are also duplications and alterations of the reading frame, such as the insertion of a premature stop codon. In about 14% of DMD patients, there is a deletion of exons 49 and 50 with the introduction of a premature stop codon in exon 51, which ultimately abolishes dystrophin production. It is possible to restore the reading frame by skipping exon 51; this leads to the production of a dystrophin that is shorter than classical dystrophin but functional. "Eteplirsen" (trade name "Exondys 51", produced by Sarepta) is a third-generation ASO, in particular a phosphorodiamidate morpholino antisense oligonucleotide (PMO, 5′-CTCCAACATCAAGGAAGATGGCATTTCTAG-3′) characterized by a neutral charge; its mechanism of action is the skipping of exon 51 (Figure 4) [49-51]. Eteplirsen is administered by intravenous infusion, and several clinical trials were carried out before FDA approval [52]. In these trials, the efficacy of eteplirsen was demonstrated by the increase in dystrophin levels, measured in muscle biopsies, and by evaluation of the effect on ambulation. The drug has been shown to delay the loss of ambulation and the impairment of other muscles, such as the respiratory muscles [51].
(C) An experimental, tailored, splice-modulating antisense oligonucleotide drug called "Milasen" deserves to be mentioned. It was specifically designed to treat an 8-year-old girl (Mila is her name) suffering from a rare, fatal, progressive form of neurodegenerative disease leading to death by adolescence. J. Kim and coworkers [53] designed, planned, and produced milasen in collaboration with a company in just 12 months from diagnosis to treatment. Milasen is a 22-nucleotide antisense oligonucleotide with the same backbone and sugar chemistry modifications (phosphorothioate and 2′-O-methoxyethyl) as nusinersen, designed to correct mis-splicing and restore normal (exon 6-exon 7) splicing and MFSD8 expression in the young patient. Dose-response analysis indicated that its half-maximal potency is in the nanomolar range. RNA-seq from patient fibroblasts showed that milasen treatment more than tripled the amount of normal splicing of the MFSD8 transcript. This experimental drug, approved by the FDA, inaugurates a new era of ultrapersonalized medicine, which will lead to rewriting all the rules, from experimentation to drug approval.
Androgen Receptor (AR)
Androgen Receptor (AR) is a member of the nuclear receptor family, typical receptors of steroid hormones [64]. The androgen receptor plays a key role in prostate cancer that does not respond to castration therapy. This resistance is due to various alterations such as androgen receptor overexpression, point mutations, changes in androgen biosynthesis, and constitutive activation of AR [64]. These alterations make AR a critical therapeutic target, and an ongoing clinical trial aims to evaluate a novel AR inhibitor in patients affected by castration-resistant prostate cancer. The inhibitor, called AZD5312 or ARRx, is a generation 2.5 ASO that binds AR mRNA and inhibits the production of AR; the effect on tumor cells is the inhibition of cellular growth and the promotion of apoptosis (Table 3). The efficacy of this generation of ASOs was evaluated in a preclinical study focusing on the roles of full-length androgen receptor (AR-FL) and androgen receptor splice variants (AR-Vs) [65]. ARRx is being tested in combination with enzalutamide in an ongoing phase 1/2 clinical trial (ClinicalTrials.gov).
Breast Cancer Type 2 Susceptibility Protein (BRCA2)
Breast cancer type 2 susceptibility protein (BRCA2) is a tumor suppressor protein involved in the error-free repair of DNA double-strand breaks (DSBs) caused by environmental and medical radiation or generated during crossing over in meiosis. BRCA2 mutations (but also mutations of the related gene BRCA1) are associated with an increased risk of breast and ovarian cancer, as well as other types of cancer [67,68]. For BRCA-mutated ovarian cancer, the FDA has approved the use of olaparib, an inhibitor of PARP-1, an enzyme involved in DNA single-strand break (SSB) repair and in DNA replication [69,70]. The inhibition of PARP-1 caused by olaparib induces replication fork stalling, resulting in double-strand breaks that cause failure of replication and, subsequently, apoptosis unless homologous recombination repair (HRR) mechanisms intervene [71,72]. This means that olaparib is effective only in HRR-deficient cells, while HRR-proficient cells are resistant [73,74]. Despite the therapeutic potential of PARP-1 inhibitors (such as olaparib), they can be used only for the treatment of tumors predominantly composed of HRR-deficient cells, since their use in a heterogeneous tumor cell population with a high rate of HRR-proficient cells can quickly lead to excessive growth of HRR-proficient clones and, therefore, to drug resistance [75,76]. One of the mechanisms that makes tumor cells HRR-proficient is BRCA2-reversion mutation [74]. A preclinical study showed that the use of a BRCA2-targeting antisense oligonucleotide, in combination with olaparib, sensitized numerous human cancer cell lines to this drug, increasing the incidence of chromosomal translocations and aneuploidies and preventing resistance to olaparib (and in general to PARP-1 inhibitors) in various tumor cell populations [77].
Clusterin
Clusterin is a chaperone protein belonging to the family of heat shock proteins [78]. It is an antiapoptotic protein that can interfere with BCL2 and NF-kB. Several tumors present high levels of clusterin, facilitating escape from programmed cell death [79]. In particular, prostate cancer shows overexpression of clusterin and, at the clinical level, resistance to androgen deprivation, chemotherapy, and radiotherapy [80,81]. Two phase 3 clinical trials, called SYNERGY and AFFINITY, used custirsen (also called OGX-011), a second-generation 2′-methoxyethyl-modified phosphorothioate ASO that binds clusterin mRNA, in order to improve the sensitivity of cancer cells to antitumor therapies [82] (Table 3). Both trials selected patients with metastatic castration-resistant prostate cancer. Previously, OGX-011 had been tested in cell lines and in mice, demonstrating the ability to reduce clusterin expression and also to improve or restore chemosensitivity in both in vitro and in vivo models [83].
The aim of the SYNERGY trial was to compare the survival of patients treated with custirsen, docetaxel, and prednisone with that of patients in the other arm, treated with docetaxel and prednisone without custirsen. The results show that there was no significant improvement in the survival of patients treated with custirsen; only in patients with a poor prognosis did treatment with custirsen improve survival compared with docetaxel and prednisone alone. Another important point is the higher number of adverse drug reactions in patients treated with custirsen [84]. The AFFINITY trial attempted to evaluate the improvement in survival of patients treated with custirsen, cabazitaxel, and prednisone compared with cabazitaxel and prednisone alone. Like the SYNERGY trial, the AFFINITY trial also included both patients with a poor prognosis and the general patient population. The results were similar to those of SYNERGY and demonstrated no significant improvement in the survival of patients treated with custirsen; moreover, in the AFFINITY trial, no improvement in survival was observed in poor-prognosis patients, in contrast with the result of SYNERGY [85]. A multinational, randomized, open-label study of custirsen in patients with advanced or metastatic (Stage IV) Non-Small-Cell Lung Cancer (https://www.clinicaltrials.gov/ct2/show/NCT01630733, accessed on 30 April 2022) was planned between 2012 and 2017, but no results have been posted for this study.
Epidermal Growth Factor Receptor (EGFR)
Epidermal growth factor receptor (EGFR) is a transmembrane protein, the main member of the ErbB family, which includes four structurally related receptor tyrosine kinases. EGFR binds specific ligands, such as epidermal growth factor (EGF) and transforming growth factor α (TGFα), initiating several signal transduction cascades involved in DNA synthesis and cell proliferation, principally the MAPK, Akt, and JNK pathways (Table 3). These features mean that alterations involving EGFR overexpression are associated with the development of a number of cancers [86]. EGFR has been identified as the therapeutic target of an antisense plasmid DNA (EGFR-AS) for the treatment of head and neck squamous cell carcinoma (HNSCC) [87,88]. Currently, the treatment of choice in Europe and the USA for HNSCC, especially for elderly (>65 years old), frail, or cisplatin-unfit patients, consists of systemic administration of Cetuximab (a monoclonal antibody that inhibits EGFR) in combination with radiotherapy (RT) [89,90]. After checking the effective antitumor effects in preclinical HNSCC models, a phase 1 trial (with a cohort of 11 patients) was carried out to verify whether systemic administration of Cetuximab and RT combined with intratumoral injections of EGFR-AS effectively increased the antitumor effects of the current elective therapy, particularly at the locoregional level; indeed, locoregional failure remains the leading cause of death after cisplatin or Cetuximab treatment. The results confirmed the increased antitumor activity of the double inhibition of EGFR provided by Cetuximab and EGFR-AS associated with radiotherapy, as well as the good tolerance of the simultaneous administration of these two drugs [91]. Although this approach appears quite promising, a phase 2 trial is necessary to confirm the safety and efficacy of this combined treatment.
Eukaryotic Translation Initiation Factor 4E (eIF4E)
Eukaryotic translation initiation factor 4E (eIF4E) is a translation initiation factor involved in directing ribosomes to the 7-methyl-guanosine 5′ cap structure of mRNAs, a modified nucleotide at the 5′ end of transcripts that plays a key role in several cellular processes, including mRNA stability and translational efficiency. It has been shown that eIF4E overexpression causes tumorigenic transformation in different cell lines, and its expression is dysregulated in 30% of human cancers, such as cancers of the colon, lung, prostate, and breast [92]. For these reasons, eIF4E has been identified as the target of a second-generation antisense oligonucleotide called ISIS 183750 for the treatment of colorectal cancer (Table 3). In vitro experiments aimed at verifying the potential additive effects of a combined eIF4E ASO-irinotecan treatment were performed in colorectal cell lines. Based on these results, a clinical trial evaluating ISIS 183750 in patients with irinotecan-refractory colorectal cancer was conducted. The results showed that, despite the proven penetrance of ISIS 183750 into the target cells and elicitation of the pharmacodynamic effect of eIF4E inhibition, the combination of ISIS 183750 with irinotecan did not lead to objective responses in patients with irinotecan-refractory colorectal cancer, perhaps due to extensive stromal binding of the ASO, which may have caused low cellular uptake. Nevertheless, the clinical activity of the combined therapy was in part demonstrated by disease stabilization in a subset of patients [93].
FoxP3
FoxP3 is a transcription factor expressed by regulatory T cells (Tregs), an important subset of CD4 T cells that have a suppressive role in immune system regulation [94]. FoxP3 drives the production of CTLA4, which binds B7, expressed by antigen-presenting cells (APCs); in this way, CTLA4 blocks the interaction between B7 and CD28, which is important for T-cell activation. Another mechanism involved in the inhibition of T-cell activity is the endocytosis of B7, an event leading to a low T-cell response. High levels of FoxP3 have been detected in several cancers, suggesting that FoxP3 represents a suitable target for treatment. A preclinical study exploited a second-generation ASO, a 2′-OMe-PS-ASO, in order to silence FoxP3 in B16 melanoma cells (Table 3). In particular, the aim of the research was to assess the efficacy of FoxP3 silencing associated with therapeutic vaccination. Furthermore, the ASO was compared with polypurine reverse Hoogsteen hairpins (PPRHs). The results show that the ASO penetrates the cells better than PPRHs, displays better silencing activity, and requires lower doses to obtain the therapeutic effect. The combination of ASO and vaccination was able to delay tumor growth and improve survival in mice. These results suggest that the synergic activity of vaccination and a FoxP3-inhibiting ASO can be a novel strategy for cancer treatment [95].
Grb2
Grb2 is a protein involved in signal transduction pathways linked to tyrosine kinase receptors and MAP kinases. Alterations of the Grb2 protein can play a crucial role in cancer development. Two clinical trials focus on Grb2 overexpression in acute myeloid leukemia and in Philadelphia chromosome-positive (Ph+) chronic myelogenous leukemia. In these tumors, overexpression of Grb2 is very important for cancer development, and its inhibition could represent a novel treatment strategy. In both trials, the drug used to reduce the level of Grb2 is the liposomal Grb2 ASO (L-Grb2), code name BP1001, which prevents the synthesis of the Grb2 protein (Table 3).
The use of this drug is supported by preclinical studies. In fact, Grb2 plays a central role in the proliferation of leukemic cells, and its inhibition reduces it [96]. Moreover, another study demonstrated increased survival of mice treated with L-Grb2 compared with a liposomal control oligonucleotide [97].
The clinical trials use different combinations of BP1001 with standard drugs. The phase 2 clinical trial in acute myeloid leukemia evaluates the combination of BP1001 with venetoclax plus decitabine compared with the combination of BP1001 with decitabine. This trial is in recruiting status (ClinicalTrials.gov Identifier: NCT02781883, https://clinicaltrials. gov/ct2/show/NCT02781883, accessed on 30 April 2022).
KRAS
KRAS was first identified as a viral oncogene in the Kirsten RAt Sarcoma virus [98]. Like the other members of the ras family, it is a GTPase that acts as a molecular on/off switch in many signal transduction pathways controlling cell proliferation. In the human genome, KRAS acts as a proto-oncogene whose mutations are implicated in several malignancies, about 20% of all human cancers, including lung adenocarcinoma, ductal carcinoma of the pancreas, and colorectal cancer [99-101]. AZD4785 is an antisense oligonucleotide containing 2′-4′ constrained ethyl (cEt) residues that targets with high affinity both wild-type and mutated KRAS mRNAs, resulting in inhibition of downstream effector pathways and antiproliferative effects in cancer cells, including lung and colon cancer cell lines [102,103] (Table 3). A phase I, open-label, multicentre, dose-escalation study was conducted to verify the safety, maximum tolerated dose (MTD), and pharmacokinetics of intravenously administered AZD4785 in patients with KRAS-driven advanced solid tumors (ClinicalTrials.gov Identifier: NCT03101839, https://clinicaltrials.gov/ct2/show/NCT03101839, accessed on 30 April 2022). The results of this study are not yet available. Recently, AZD4785 was also chosen as a potent and selective antisense oligonucleotide to target and downregulate all KRAS isoforms, and it demonstrated the ability to silence KRAS and to inhibit multiple myeloma tumors bearing KRAS mutations [104].
Hypoxia-Inducible Factor-1alpha (HIF-1alpha)
Hypoxia-inducible factor-1alpha (HIF-1alpha), normally activated in response to hypoxia-induced stress, is a key transcriptional regulator of a large number of genes important in cellular adaptation to low-oxygen conditions, including angiogenesis, cell proliferation, apoptosis, and cell invasion [105,106]. A synthetic antisense oligodeoxynucleotide targeting HIF-1alpha with potential antineoplastic activity has been designed (EZN-2968, Table 3) in order to block HIF-1alpha protein expression, resulting in inhibition of angiogenesis, inhibition of tumor cell proliferation, and apoptosis. Nevertheless, due to its potential systemic side effects, EZN-2968 has found only limited use in the clinic. For this reason, Zhang et al. (2021) [107] proposed and generated a conditional ASO able to inhibit HIF-1alpha only in cells expressing a specific target miRNA, such as the hepatocyte-specific miRNA miR-122, via a toehold-exchange reaction [107].
Heat Shock Protein 27 (Hsp27)
Heat shock protein 27 (Hsp27) is a chaperone protein and a member of the heat shock protein family. This family of proteins has numerous functions, such as chaperone activity and the regulation of apoptosis, cell differentiation, and signal transduction [108]. Hsp27 can promote the growth and metastasis of cancer cells, as well as resistance to therapeutic agents. Activation of heat shock proteins can be triggered by cell stressors such as hyperthermia, oxidative stress, and radiation; Hsp27, in particular, can be activated by the cytotoxic effects of chemotherapy. Preclinical evidence shows a role for Hsp27 in cancer, for example in bladder cancer cells. One study used OGX-427, called apatorsen, to knock down Hsp27 expression both in cancer cell lines and in mice (Table 3). Overexpression of Hsp27 increases cell growth and reduces chemosensitivity; conversely, OGX-427 reduces cell growth and sensitizes cancer cells to paclitaxel [109]. Taking into account these results and the high expression of Hsp27 in various types of cancer, two clinical trials, Borealis-1 and RAINIER, were designed to demonstrate the efficacy of the second-generation Hsp27 ASO apatorsen in cancer treatment.
Borealis-1 is a phase 2 trial that evaluated the efficacy of apatorsen in combination with gemcitabine and cisplatin, compared with gemcitabine and cisplatin plus placebo, in patients affected by advanced urothelial cancer. The outcomes of this trial show that apatorsen failed to improve survival overall; however, apatorsen showed a positive effect on survival in patients with a poor prognosis. Patients with a poor prognosis express higher levels of Hsp27 and circulating tumor cells (CTC) than patients with a better prognosis, so Hsp27 could be used as a biomarker to identify patients who might benefit from apatorsen [110]. In the clinical study Borealis-2, Rosenberg et al. (2018) [111] reported the efficacy and safety of apatorsen in combination with docetaxel compared with docetaxel alone in patients with metastatic urothelial carcinoma previously treated with platinum-based chemotherapy. This randomized, controlled phase II trial, with a primary end point of overall survival, should provide strong elements to decide whether to move forward with a phase III trial.
The RAINIER trial is a phase 2 trial (ClinicalTrials.gov Identifier: NCT01844817, https://clinicaltrials.gov/ct2/show/NCT01844817, accessed on 20 June 2022) that compared apatorsen and placebo, each combined with gemcitabine and nab-paclitaxel, in patients affected by metastatic pancreatic cancer. The use of apatorsen did not improve the overall survival (OS) of the patients, although a trend toward prolonged OS was observed in patients with high serum levels of Hsp27 [112].
MicroRNAs
MicroRNAs (miRNAs) are a class of short (19-25 nt) noncoding RNAs that play an important role in several regulatory processes (such as proliferation, differentiation, metabolism, and apoptosis) by binding the 3′-UTR of mRNAs. Aberrant expression of miRNAs is associated with a wide range of human diseases, including cancer; for this reason, such oligonucleotides are called oncomiRs [113]. SNAIL is a protein involved in the invasiveness, sphere-forming ability, and induction of epithelial-mesenchymal transition (EMT) of ovarian cancer cells, an important step in metastasis formation [114]. Two microRNAs, miR-137 and miR-34a, have been identified that can bind the 3′-UTR of SNAIL mRNA, reduce its expression, and counteract the effects of SNAIL overexpression in ovarian cancer [115]. These two microRNAs could therefore be valid candidates for the development of therapeutic alternatives for this type of cancer. TRAIL is a cytokine that could play an important therapeutic role in the treatment of various types of cancer, as it is able to induce apoptosis without damaging nearby tissues [116,117]. However, its application is limited because cancer cells, and above all cancer stem cells (CSCs), often develop resistance to this type of treatment [118]. A preclinical study demonstrated that upregulation of miR-25 in liver cancer stem cells (LCSs) induces resistance to TRAIL-induced apoptosis, and concluded that knocking down this microRNA with its antisense oligonucleotide increases the sensitivity of LCSs to TRAIL [119]. In light of this evidence, downregulation of miR-25 by its ASO could represent an interesting future therapeutic alternative for the treatment of several types of tumor (Table 3).
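A canonical miRNA target site can be located computationally by matching the miRNA "seed" (nucleotides 2-8) against the 3′-UTR. The sketch below is purely illustrative: both sequences are invented placeholders and the function is our own, not a published target-prediction tool.

    # Find positions in a (hypothetical) 3'-UTR complementary to a miRNA seed.
    def seed_sites(mirna: str, utr: str) -> list:
        comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
        seed = mirna[1:8]                                   # nucleotides 2-8
        site = "".join(comp[b] for b in reversed(seed))     # complementary site
        return [i for i in range(len(utr) - len(site) + 1)
                if utr[i:i + len(site)] == site]

    print(seed_sites("UAGGCAUCAAGCAUUAAGGCA",               # invented miRNA
                     "AAAGAUGCCUUUCGAUGCCUAA"))             # invented UTR -> [3, 13]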
Ribonucleotide Reductase (RNR)
Ribonucleotide reductase (RNR) is a key enzyme in the synthesis of DNA. It consists of a large subunit (RNR1) and a small subunit (RNR2) that associate to form a heterodimeric tetramer whose function is to remove the 2′-hydroxyl group of the ribose ring of nucleoside diphosphates, catalyzing the formation of deoxyribonucleotides from ribonucleotides [120]. Inhibition of RNR blocks DNA synthesis and causes cell apoptosis [121]. GTI-2040 is a novel 20-mer phosphorothioate oligonucleotide that targets the mRNA of the R2 subunit of RNR, preventing its interaction with ribosomes, spliceosomes, and other proteins involved in translation, as well as facilitating RNase-H-mediated degradation of RNA/DNA hybrids and inhibiting DNA transcription by forming a DNA triplex (Table 3). Preclinical studies demonstrated a significant reduction in R2 mRNA and protein levels in numerous tumor cell lines, including melanoma, colon, breast, pancreatic, ovarian, lung, and glioblastoma lines. Based on these results, a phase I trial was conducted in which the effectiveness of GTI-2040 (administered by continuous intravenous infusion) was tested in combination with gemcitabine hydrochloride in a cohort of 16 patients with advanced solid tumors. Gemcitabine is a nucleoside analogue that inhibits RNR, blocking DNA synthesis and the progression of cells through the G1/S-phase boundary. Due to these characteristics, it possesses antitumor activity (both in vitro and in vivo), and its use has been approved for the treatment of various solid tumors, including pancreatic, bladder, non-small-cell lung, and ovarian cancers. The study showed that the combined treatment of GTI-2040 and gemcitabine, despite having an acceptable safety profile in pretreated patients (the most common adverse events being fatigue, nausea, vomiting, diarrhea, and anorexia), did not show relevant antitumor activity. However, partial clinical activity was demonstrated, as several patients had prolonged stable disease [122].
Signal Transducer and Activator of Transcription 3 (STAT3)
Signal transducer and activator of transcription 3 (STAT3) is a member of the STAT protein family, which includes seven different STAT proteins. STAT3 plays a central role in transducing signals from different receptors, acting as a transcription factor, but it is also important in the regulation of the immune system; through these mechanisms it can contribute to the development of cancer. It is known that inflammation can promote cancer and, obviously, transcription factors are crucial for tumor growth [123].
Based on this evidence, STAT3 has been chosen as a target for novel cancer treatment strategies: a drug designed to target STAT3 is AZD9150, called danvatirsen, a generation-2.5 ASO with a constrained-ethyl modification produced by Ionis in partnership with AstraZeneca (Table 3). The efficacy and safety of AZD9150 were verified in several preclinical studies. Toxicology and pharmacokinetics were tested in mice and cynomolgus monkeys, and the results showed a safety profile similar to that of second-generation ASOs [124]. Furthermore, another preclinical work demonstrated the efficacy of AZD9150 in reducing STAT3 mRNA and protein levels; in this way, the drug can exert an antitumor effect in both in vitro and in vivo models [125].
AZD9150 was also evaluated in neuroblastoma cells, where it was shown to reduce STAT3 levels; this inhibition of STAT3 slows growth and reduces the number of cellular colonies. Other effects include a possible alteration of tumor initiation and an increase in tumor cell chemosensitivity [126].
Another work focused on STAT3 inhibition in castration-resistant prostate cancer. In particular, inhibition of STAT3 was tested in combination with stimulation of Toll-like receptor 9 (TLR9): a STAT3 ASO was conjugated with a TLR9 agonist (a CpG oligonucleotide). This preclinical study showed that the CpG-STAT3 ASO potentiates immune-system activity against cancer cells by reducing tumor immune tolerance [127].
Furthermore, AZD9150 was tested in a phase 1 clinical trial, started in July 2016 and completed in February 2019, aimed at finding the maximum tolerated dose and information about toxicity, pharmacodynamics, and pharmacokinetic profiles. The trial enrolled patients with diffuse large B-cell lymphoma, and the drugs administered were combinations of MEDI4736 (durvalumab) and AZD9150, MEDI4736 as monotherapy, or MEDI4736 in combination with tremelimumab. No results have yet been communicated (ClinicalTrials.gov Identifier: NCT02549651, https://clinicaltrials.gov/ct2/show/NCT02549651, accessed on 30 April 2022).
siRNA Targets in Cancer
In this section, we summarize novel findings about the use of siRNAs in cancer treatment. Several siRNAs have been tested in preclinical models in vivo and in vitro, and some clinical trials have been started to demonstrate the safety, efficacy, and possible use of siRNAs in novel anticancer therapeutic strategies. However, improvement of the delivery system is considered crucial to obtain a significant effect. A rather extensive description of the studies performed until 2017 was reported by Chen et al. [128]. In this review, we focus on more recent findings about siRNAs in cancer therapy. In Table 2, we report the gene targets, their proposed siRNAs, and the tumors in which each drug potentially acts; the associated clinical trial identifier code is reported where assigned.
Ephrin Type A Receptor 2 or Ephrin Receptor A2 (EphA2 Receptor)
Ephrin type A receptor 2, or ephrin receptor A2 (EphA2 receptor), is a tyrosine kinase receptor encoded by the EPHA2 gene. Its ligand is ephrin A1 (encoded by the human gene EFNA1). The interaction between ephrin A1 and the EphA2 receptor can drive different tumorigenic events, such as cell proliferation, migration, and angiogenesis. These effects are the likely explanation for the observed overexpression of the EphA2 receptor in different cancers [60].
Some preclinical studies have targeted EPHA2 mRNA using a 1,2-dioleoyl-sn-glycero-phosphatidylcholine (DOPC) nanoliposomal siRNA, also called EPHARNA, in order to decrease expression of the EphA2 receptor (Figure 1, Table 2). The efficacy of this drug was first demonstrated in cell lines and in mice: EphA2 receptor siRNA can reduce transcript levels in both in vitro and in vivo models. The decrease in tumor growth is more significant when EphA2 receptor siRNA is administered in combination with paclitaxel. A decrease in microvascular density is also observed, confirming an antiangiogenic role of EphA2 receptor siRNA [61].
Preclinical studies demonstrated a good safety profile of EPHARNA in mice and nonhuman primates [62]. A phase 1 clinical trial is ongoing (https://clinicaltrials.gov/ct2/show/NCT01591356, accessed on 30 May 2022), with the aim of evaluating the use of DOPC-EphA2 siRNA in advanced cancers. The clinical trial started in 2015 and is still recruiting; its results could provide important clinical data.
KRAS
KRAS was first identified as a viral oncogene in the Kirsten rat sarcoma virus [98]. KRAS mutations (particularly in codons 12, 13, and 61) are present in almost all pancreatic adenocarcinomas. The importance of KRAS in cell signaling mechanisms makes it a potential target for the development of therapeutic alternatives for the treatment of this disease. After the failure of inhibitors of posttranslational farnesylation (FTIs), which showed no clinical activity [54,129], and of specific ASOs, which showed low specificity since wild-type and mutated KRAS differ in only one codon [55,56], attention turned to the development of therapeutic solutions based on RNAi, which show extraordinary sequence specificity.
A preclinical study conducted on Panc-1 and MiaPaca-2 cells, two of the most common human pancreatic cancer cell lines, showed that RNAi-induced KRAS knockdown changes the malignant phenotype: both cell lines showed reduced proliferation and migration capacity and a considerable reduction in angiogenic potential. This experimental evidence justified continuing the development of RNAi-based therapeutic solutions for the treatment of pancreatic cancer [57].
The first drug developed to target KRAS in this way is siG12D-LODER, a polymeric matrix containing siRNAs that target the mutated KRAS oncogene, specifically KRAS-G12D, with high specificity and proven antitumor activity: it inhibits KRAS translation, with potential blocking effects on tumor growth. LODER™ (LOcal Drug EluteR) technology, developed and marketed by Silenseed, represents an innovative delivery platform that allows the insertion of RNAi-based drugs directly into the core of solid tumors using a standard endoscopic ultrasound (EUS) biopsy procedure (Table 2). Furthermore, the LODER protects the siRNAs from degradation and guarantees their action for very long periods of time (a few months or more).
An open-label phase I study was conducted on a cohort of patients with nonresectable locally advanced pancreatic ductal adenocarcinoma (LA-PDAC), in which a single dose of siG12D-LODER was administered via a standard EUS procedure, combined with gemcitabine given on a weekly basis. The study showed an excellent safety and tolerability profile, as well as stabilization of the disease in a group of patients. This evidence led to the implementation of a phase II study [58].
The phase II study (ClinicalTrials.gov Identifier: NCT01676259, https://clinicaltrials.gov/ct2/show/NCT01676259, accessed on 30 April 2022), still in the recruitment phase, foresees the administration of 2.8 mg of siG12D-LODER in 12-week cycles to patients with unresectable LA-PDAC, combined with classic chemotherapy (gemcitabine + nab-paclitaxel). The study will enroll a cohort of 80 people divided into two arms: one arm receives the combined therapy, the other only the chemotherapeutic treatment.
A Clinical Study of RNA Interference Based on the Use of "Spherical Nucleic Acids (SNA)"
A clinical study of RNA interference based on the use of "spherical nucleic acids" (SNAs), arranged on the surface of small, spherical gold nanoparticles conjugated with radially oriented and densely packed siRNA oligonucleotides against the GBM oncogene Bcl2-Like-12 (Bcl2L12), has recently been published (Table 2) [127]. In this work, the effects of NU-0129 on Bcl2L12 were evaluated. The Bcl2L12 gene expressed in glioblastoma multiforme is associated with tumor growth, and its expression blocks apoptosis in tumor cells, promoting tumor growth. The researchers reasoned that targeting the Bcl2L12 gene with NU-0129 would help stop the growth of cancer cells. This first-in-human trial was designed to determine the safety of NU-0129, which is able to cross the blood-brain barrier. The clinical study demonstrated that NU-0129 uptake into glioma cells correlated with significant underexpression of the tumor-associated Bcl2L12 protein, as shown by comparing NU-0129-treated recurrent tumors with matched primary untreated tumors. The study supports SNA nanoconjugates as a brain-penetrant precision-medicine approach for the systemic treatment of GBM [59] (https://clinicaltrials.gov/ct2/show/NCT03020017, accessed on 30 April 2022).
Conclusions
The use of ASO- and siRNA-based therapeutics for the treatment of a range of genetic diseases is an established fact, and it is likely that improvements in delivery vectors and chemical formulations will translate into improved clinical efficacy. By contrast, the use of this kind of therapeutics for cancer treatment is still an area of active investigation. Achieving clinical utility will probably require substantial efforts to define molecular targets for specific tumor subtypes and to design selective delivery procedures [63]. In particular, the ability to act simultaneously on different endogenous targets is one of the advantages offered by these therapeutics; indeed, it is possible to combine nucleic acids with different sequences in the same vector [130]. Network-based strategies and combined multiple-silencing approaches provide an alternative tool to arrest cancer proliferation. It has been demonstrated in breast cancer cell lines that silencing five key upregulated transcripts dramatically changes cell survival and migration [131]. In this sense, overexpression of genes encoding eukaryotic initiation factors or cleavage and polyadenylation of pre-mRNA factors has been associated with colorectal tumors; these transcripts represent good candidates for transcript-targeted therapy [132-134]. Although these approaches have already been applied in preclinical models, they have not yet passed all the steps necessary for human investigation; indeed, no clinical trials have been started exploiting the combinatorial silencing of multiple cancer targets [135]. Further studies are still needed to support the efficacy of combinatorial silencing.
Author Contributions: D.F.C. coordinated and planned the study; V.B. and D.F.C. wrote and prepared the figures and tables; C.M. and A.R. participated in collecting the bibliography and writing the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This study was partially supported by project "Piaceri, 2020/2022-linea 2", University of Catania, Italy. Project Title: The transcriptome view of chromosomal aberrations: studies on cancer and neurodevelopmental diseases (TRACAND).
The effect of domain and text type on text prediction quality
Text prediction is the task of suggesting text while the user is typing. Its main aim is to reduce the number of keystrokes that are needed to type a text. In this paper, we address the influence of text type and domain differences on text prediction quality.
By training and testing our text prediction algorithm on four different text types (Wikipedia, Twitter, transcriptions of conversational speech and FAQ) with equal corpus sizes, we found that there is a clear effect of text type on text prediction quality: training and testing on the same text type gave percentages of saved keystrokes between 27 and 34%; training on a different text type caused the scores to drop to percentages between 16 and 28%.
In our case study, we compared a number of training corpora for a specific data set for which training data is sparse: questions about neurological issues. We found that both text type and topic domain play a role in text prediction quality. The best performing training corpus was a set of medical pages from Wikipedia. The second-best result was obtained by leave-one-out experiments on the test questions, even though this training corpus was much smaller (2,672 words) than the other corpora (1.5 Million words).
Introduction
Text prediction is the task of suggesting text while the user is typing. Its main aim is to reduce the number of keystrokes that are needed to type a text, thereby saving time. Text prediction algorithms have been implemented for mobile devices, office software (Open Office Writer), search engines (Google query completion), and in special-needs software for writers who have difficulties typing (Garay-Vitoria and Abascal, 2006). In most applications, the scope of the prediction is the completion of the current word; hence the often-used term 'word completion'.
The most basic method for word completion is checking after each typed character whether the prefix typed since the last whitespace is unique according to a lexicon. If it is, the algorithm suggests to complete the prefix with the lexicon entry. The algorithm may also suggest to complete a prefix even before the word's uniqueness point is reached, using statistical information on the previous context. Moreover, it has been shown that significantly better prediction results can be obtained if not only the prefix of the current word is included as previous context, but also previous words (Fazly and Hirst, 2003) or characters (Van den Bosch and Bogers, 2008).
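A minimal sketch of this basic lexicon-based completion looks as follows (the tiny lexicon is a placeholder of our own); a suggestion is made only once the typed prefix matches a single entry:

    LEXICON = {"niveau", "nieuw", "nemen", "verkiezing"}

    def suggest(prefix: str):
        # Suggest a completion only when the prefix is unique in the lexicon.
        matches = [w for w in LEXICON if w.startswith(prefix)]
        return matches[0] if len(matches) == 1 else None

    for p in ("n", "ni", "niv"):
        print(p, "->", suggest(p))
    # n -> None; ni -> None ('niveau' and 'nieuw' both match); niv -> niveau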
In the current paper, we follow up on this work by addressing the influence of text type and domain differences on text prediction quality. Brief messages on mobile devices (such as text messages, Twitter and Facebook updates) are of a different style and lexicon than documents typed in office software (Westman and Freund, 2010). In addition, the topic domain of the text also influences its content. These differences may cause an algorithm trained on one text type or domain to perform poorly on another.
The questions that we aim to answer in this paper are (1) "What is the effect of text type differences on the quality of a text prediction algorithm?" and (2) "What is the best choice of training data if domain- and text type-specific data is sparse?". To answer these questions, we perform three series of experiments: (1) within-text type experiments, in which we train and test our algorithm on the same text type; (2) across-text type experiments, in which training and test data are of different text types; and (3) a case study on questions from the neurological domain, for which training data is sparse. The prospective application of the third series of experiments is the development of a text prediction algorithm in an online care platform: an online community for patients seeking information about their illness. In this specific case, the target group is patients with language disabilities due to neurological disorders.
The remainder of this paper is organized as follows: In Section 2 we give a brief overview of text prediction methods discussed in the literature. In Section 3 we present our approach to text prediction. Sections 4 and 5 describe the experiments that we carried out and the results we obtained. We phrase our conclusions in Section 6.
Text prediction methods
Text prediction methods have been developed for several different purposes. The older algorithms were built as communicative devices for people with disabilities, such as motor and speech impairments. More recently, text prediction is developed for writing with reduced keyboards, specifically for writing (composing messages) on mobile devices (Garay-Vitoria and Abascal, 2006). All modern methods share the general idea that previous context (which we will call the 'buffer') can be used to predict the next block of characters (the 'predictive unit'). If the user gets correct suggestions for continuation of the text then the number of keystrokes needed to type the text is reduced. The unit to be predicted by a text prediction algorithm can be anything ranging from a single character (which actually does not save any keystrokes) to multiple words. Single words are the most widely used as prediction units because they are recognizable at a low cognitive load for the user, and word prediction gives good results in terms of keystroke savings (Garay-Vitoria and Abascal, 2006).
There is some variation among methods in the size and type of buffer used. Most methods use character n-grams as buffer, because they are powerful and can be implemented independently of the target language (Carlberger, 1997). In many algorithms the buffer is cleared at the start of each new word (making the buffer never larger than the length of the current word). Van den Bosch and Bogers (2008) compare two extensions to the basic prefix model: they found that an algorithm that uses the previous n characters as buffer, crossing word borders without clearing the buffer, performs better than both a prefix character model and an algorithm that includes the full previous word as a feature. In addition to using the previously typed characters and/or words in the buffer, word characteristics such as frequency and recency could also be taken into account (Garay-Vitoria and Abascal, 2006).
Possible evaluation measures for text prediction are the proportion of words that are correctly predicted, the percentage of keystrokes that could maximally be saved (if the user would always make the correct decision), and the time saved by the use of the algorithm (Garay-Vitoria and Abascal, 2006). The performance that can be obtained by text prediction algorithms depends on the language they are evaluated on: lower results are obtained for higher-inflected languages such as German than for low-inflected languages such as English (Matiasek et al., 2002). In their overview of text prediction systems, Garay-Vitoria and Abascal (2006) report performance scores ranging from 29% to 56% of keystrokes saved.
An important factor that is known to influence the quality of text prediction systems is training set size (Lesher et al., 1999; Van den Bosch, 2011). Van den Bosch (2011) shows log-linear learning curves for word prediction (a constant improvement each time the training corpus size is doubled) when the training set size is increased incrementally from 10^2 to 3×10^7 words.
Our approach to text prediction
We implement a text prediction algorithm for Dutch, which is a productive compounding language like German, but has a somewhat simpler inflectional system. We do not focus on the effect of training set size, but on the effect of text type and topic domain differences.
Our approach to text prediction is largely inspired by (Van den Bosch and Bogers, 2008). We experiment with two different buffer types that are based on character n-grams:
• 'Prefix of current word': contains all characters of only the word currently keyed in, where the buffer shifts by one character position with every new character.
• 'Buffer15': also includes any other characters keyed in, belonging to previously keyed-in words.
Modeling character history beyond the current word can naturally be done with a buffer model in which the buffer shifts by one position per character, while a typical left-aligned prefix model (that never shifts and fixes letters to their positional feature) would not be able to do this.
In the buffer, all characters from the text are kept, including whitespace and punctuation. The predictive unit is one token (word or punctuation symbol). In both the buffer and the prediction label, any capitalization is kept. At each point in the typing process, our algorithm gives one suggestion: the word that is the most likely continuation of the current buffer.
We save the training data as a classification data set: each character in the buffer fills a feature slot, and the word that is to be predicted is the classification label. Figures 1 and 2 give examples of each of the buffer types Prefix and Buffer15 that we created for the text fragment "tot een niveau" in the context "stelselmatig bij elke verkiezing tot een niveau van" (structurally with each election to a level of). We use the implementation of the IGTree decision tree algorithm in TiMBL (Daelemans et al., 1997) to train our models.
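The construction of such instances can be sketched as follows (the padding symbol and exact windowing are our own assumptions, simplified from the actual TiMBL setup):

    # Build fixed-width character instances; the label is the word being typed.
    def instances(text: str, width: int = 15, cross_words: bool = True):
        data, buf = [], []
        for token in text.split():
            for ch in token:
                feats = (["_"] * width + buf)[-width:]  # left-pad, keep last chars
                data.append((feats, token))
                buf.append(ch)
            buf = buf + [" "] if cross_words else []    # Buffer15 keeps history
        return data

    for feats, label in instances("tot een niveau")[-3:]:
        print("".join(feats), "->", label)
    # ____tot een niv -> niveau   (and similarly for the next two keystrokes)

With cross_words=False, the same function produces Prefix-style instances in which the buffer is cleared at each word boundary.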
Evaluation
We evaluate our algorithms on corpus data. This means that we have to make assumptions about user behaviour. We assume that the user confirms a suggested word as soon as it is suggested correctly, not typing any additional characters before confirming. We evaluate our text prediction algorithms in terms of the percentage of keystrokes saved, K:

    K = 100% × Σ_i (F_i − W_i) / Σ_i F_i,

in which the sum runs over the n words in the test set, W_i is the number of keystrokes that have been typed before word i is correctly suggested, and F_i is the number of keystrokes that would be needed to type the complete word i. For example, our algorithm correctly predicts the word niveau after the context "i n g t o t e e n n i v" in the test set. Assuming that the user confirms the word niveau at this point, three keystrokes were needed for the prefix niv, so W_i = 3 and F_i = 6. The number of keystrokes needed for whitespace and punctuation is unchanged: these have to be typed anyway, independently of the support by a text prediction algorithm.
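Under the user-behaviour assumption above, K can be computed with a short sketch (the toy predictor is a stub of our own):

    def percentage_saved(words, predict):
        # W: keystrokes actually typed; F: keystrokes needed without prediction.
        W = F = 0
        context = ""
        for word in words:
            for j in range(len(word) + 1):
                if predict(context + word[:j]) == word:
                    break                      # confirmed after j keystrokes
            W += j
            F += len(word)
            context += word + " "
        return 100.0 * (F - W) / F

    demo = lambda ctx: "niveau" if ctx.endswith("niv") else None
    print(percentage_saved(["tot", "een", "niveau"], demo))   # 25.0 (3 of 12 saved)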
Text type experiments
In this section, we describe the first and second series of experiments. The case study on questions from the neurological domain is described in Section 5.
Data
In the text type experiments, we evaluate our text prediction algorithm on four different types of Dutch text: Wikipedia, Twitter data, transcriptions of conversational speech, and web pages of Frequently Asked Questions (FAQ). The Wikipedia corpus that we use is part of the Lassy corpus (Van Noord, 2009); we obtained a version from the summer of 2010. The Twitter data are collected continuously and automatically filtered for language by Erik Tjong Kim Sang (Tjong Kim Sang, 2011). We used the tweets from all users that posted at least 19 tweets (excluding retweets) during one day in June 2011; this is a set of 1 Million Twitter messages from 30,000 users. The transcriptions of conversational speech come from the Spoken Dutch Corpus (Oostdijk, 2000); for our experiments, we only use the category 'spontaneous speech'. We obtained the FAQ data by downloading the first 1,000 pages that Google returns for the query 'faq' with the language restriction Dutch. After cleaning the pages of HTML and other coding, the resulting corpus contained approximately 1.7 Million words of questions and answers.
Within-text type experiments
For each of the four text types, we compare the buffer types 'Prefix' and 'Buffer15'. In each experiment, we use 1.5 Million words from the corpus to train the algorithm and 100,000 words to test it. The results are in Table 1.
Across-text type experiments
We investigate the importance of text type differences for text prediction with a series of experiments in which we train and test our algorithm on texts of different text types. We keep the size of the train and test sets the same: 1.5 Million words and 100,000 words, respectively. The results are in Table 2.

Table 1 shows that for all text types, the buffer of 15 characters that crosses word borders gives better results than the prefix of the current word only: we get a relative improvement of 35% (for FAQ) to 62% (for Speech) for Buffer15 compared to Prefix-only. Table 2 shows that text type differences have an influence on text prediction quality: all across-text type experiments lead to lower results than the within-text type experiments. From the results in Table 2, we can deduce that of the four text types, speech and Twitter language resemble each other more than they resemble the other two, and Wikipedia and FAQ resemble each other more. Twitter and Wikipedia data are the least similar: training on Wikipedia data makes the text prediction score for Twitter data drop from 29.2 to 16.5%. Note that training on Wikipedia and testing on Twitter gives a different result from training on Twitter and testing on Wikipedia; this is due to the size and domain of the vocabularies in both data sets and the richness of the contexts (in order for the algorithm to predict a word, it has to have seen it in the train set). If the test set has a larger vocabulary than the train set, a lower proportion of words can be predicted than the other way around.
Case study: questions about neurological issues
Online care platforms aim to bring together patients and experts. Through this medium, patients can find information about their illness and get in contact with fellow sufferers. Patients who suffer from neurological damage may have communicative disabilities because their speaking and writing skills are impaired. For these patients, existing online care platforms are often not easily accessible. Aphasia, for example, hampers the exchange of information because the patient has problems with word finding. In the project 'Communicatie en revalidatie DigiPoli' (ComPoli), language and speech technologies are implemented in the infrastructure of an existing online care platform in order to facilitate communication for patients suffering from neurological damage. Part of the online care platform is a list of frequently asked questions about neurological diseases, with answers. A user can browse through the questions using a chat-by-click interface (Geuze et al., 2008). Besides reading the listed questions and answers, the user has the option to submit a question that is not yet included in the list. The newly submitted questions are sent to an expert who answers them and adds both question and answer to the chat-by-click database. In typing the question to be submitted, the user will be supported by a text prediction application.
The aim of this section is to find the best training corpus for newly formulated questions in the neurological domain. We realize that questions formulated by users of a web interface are different from questions formulated by experts for the purpose of an FAQ list. Therefore, we plan to gather real user data once we have a first version of the user interface running online. For developing the text prediction algorithm behind the initial version of the application, we aim to find the best training corpus, using the questions from the chat-by-click data as test set.
Data
The chat-by-click data set on neurological issues consists of 639 questions with corresponding answers. A small sample of the data (translated to English) is shown in Table 3:

    answer: Unfortunately, a real cure is not possible. However, things can be done to combat the effects of the disease, mainly by relieving symptoms such as stiffness and spasticity. The physical therapist and rehabilitation specialist can play a major role in symptom relief. Moreover, there are medications that can reduce spasticity.
    question 0 508: How is (P)LS diagnosed?
    answer 0 508: The diagnosis PLS is difficult to establish, especially because the symptoms strongly resemble HSP symptoms (Strumpell's disease). Apart from blood and muscle tests, several neurological examinations will be carried out.

In order to create the test data for our experiments, we removed duplicate questions from the chat-by-click data, leaving a set of 359 questions. In the previous sections, we used corpora of 100,000 words as test collections and calculated the percentage of saved keystrokes over the complete test corpus. In the reality of our case study, however, users will type only brief fragments of text: the length of the question they want to submit. This means that there is potentially a large deviation in the effectiveness of the text prediction algorithm per user, depending on the content of the small text they are typing. Therefore, we decided to evaluate our training corpora separately on each of the 359 unique questions, so that we can report both the mean and the standard deviation of the text prediction scores on small (realistically sized) samples. The average number of words per question is 7.5; the total size of the Neuro-QA corpus is 2,672 words.
Experiments
We aim to find the training set that gives the best text prediction result for the Neuro-QA questions. We compare the following training corpora:
• the corpora that we compared in the text type experiments: Wikipedia, Twitter, Speech and FAQ, 1.5 Million words per corpus;
• a 1.5 Million word training corpus that is of the same topic domain as the target data: Wikipedia articles from the medical domain;
• the 359 questions from the Neuro-QA data themselves, evaluated in a leave-one-out setting (359 times training on 358 questions and evaluating on the remaining question).
In order to create the 'medical Wikipedia' corpus, we consulted the category structure of the Wikipedia corpus. The Wikipedia category 'Geneeskunde' (Medicine) contains 69,898 pages, and in the deeper nodes of the hierarchy we see many non-medical pages, such as Trappist beers (ordered under beer, booze, alcohol, psychoactive drug, drug, and then medicine). If we remove all pages that are more than five levels below the 'Geneeskunde' category root, 21,071 pages are left, which contain slightly more than the 1.5 Million words that we need. We used the first 1.5 Million words of this corpus in our experiments.
The text prediction results for the different corpora are in Table 4. For each corpus, the out-of-vocabulary (OOV) rate is given: the percentage of words in the Neuro-QA questions that do not occur in the corpus.
Discussion of the results
We measured the statistical significance of the mean differences between all text prediction scores using a Wilcoxon Signed Rank test on paired results for the 359 questions. We found that the difference between the Twitter and Speech corpora on the task is not significant (P = 0.18). The difference between Neuro-QA and Medical Wikipedia is significant with P = 0.02; all other differences are significant with P < 0.01.
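This paired test is directly available in SciPy; the sketch below runs it on placeholder score arrays (the real per-question scores are not reproduced here):

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    medical_wiki = rng.uniform(0, 60, 359)                # placeholder scores (%)
    neuro_qa = medical_wiki + rng.normal(-1.0, 5.0, 359)  # paired by question
    stat, p = wilcoxon(medical_wiki, neuro_qa)
    print(p)   # compare against the chosen significance level, e.g. 0.05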
The Medical Wikipedia corpus and the leaveone-out experiments on the Neuro-QA data give better text prediction scores than the other corpora. The Medical Wikipedia even scores slightly better than the Neuro-QA data itself. Twitter and Speech are the least-suited training corpora for the Neuro-QA questions, and FAQ data gives a bit better results than a general Wikipedia corpus.
These results suggest that both text type and topic domain play a role in text prediction quality, but the high scores for the Medical Wikipedia corpus show that topic domain is even more important than text type. (We should note here that we did not control for domain differences between the four general text types; they are intended to be 'general domain', but Wikipedia articles will naturally be about different topics than conversational speech.) The column 'OOV-rate' shows that this is probably due to the high coverage of terms in the Neuro-QA data by the Medical Wikipedia corpus.
Table 4 also shows that the standard deviation among the 359 samples is relatively large. For some questions, 0% of the keystrokes are saved, while for others, scores of over 80% are obtained (by the Neuro-QA and Medical Wikipedia training corpora). We further analyzed the differences between the training sets by plotting the Empirical Cumulative Distribution Function (ECDF) for each experiment. An ECDF shows the development of text prediction scores (shown on the X-axis) by walking through the test set in 359 steps (shown on the Y-axis).
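Such an ECDF can be computed directly (the scores below are random placeholders, not our actual results):

    import numpy as np

    scores = np.random.default_rng(1).uniform(0, 60, 359)  # stand-in scores (%)
    x = np.sort(scores)                                    # prediction scores
    y = np.arange(1, x.size + 1) / x.size                  # fraction of questions
    # Plotting y against x reproduces the shape of the curves in Figure 3.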
The ECDFs for our training corpora are in Figure 3. Note that the curves at the bottom-right represent the better-performing settings (they reach a higher maximum after having seen a smaller portion of the samples). From Figure 3, it is again clear that the Neuro-QA and Medical Wikipedia corpora outperform the other training corpora, and that of the other four, FAQ is the best-performing corpus. Figure 3 also shows a large difference in the sizes of the starting percentiles: the proportion of samples with a text prediction score of 0% ranges from less than 10% for the Medical Wikipedia corpus to more than 30% for Speech. We inspected the questions that get a text prediction score of 0%. We see many medical terms in these questions, and many of the utterances are not even questions but multi-word terms representing topical headers in the chat-by-click data. Seven samples get a zero score in the output of all six training corpora, e.g.:
• glycogenose III.
26 samples get a zero score in the output of all training corpora except for Medical Wikipedia and Neuro-QA itself. These are mainly short headings with domain-specific terms such as:
• idiopatische neuralgische amyotrofie.
The ECDFs also show that the scores obtained with the Medical Wikipedia corpus are more evenly distributed around the mean, while the leave-one-out experiments lead to a larger number of samples with low prediction scores and a larger number of samples with high prediction scores. This is also reflected by the higher standard deviation for Neuro-QA than for Medical Wikipedia.
Since both the leave-one-out training on the Neuro-QA questions and the Medical Wikipedia corpus led to good results but behave differently on different portions of the test data, we also evaluated a combination of both corpora on our test set: we created training corpora consisting of the Medical Wikipedia corpus complemented by 90% of the Neuro-QA questions, testing on the remaining 10% of the Neuro-QA questions. This led to a mean percentage of saved keystrokes of 28.6%, not significantly higher than the Medical Wikipedia corpus alone.
Conclusions
In Section 1, we asked two questions: (1) "What is the effect of text type differences on the quality of a text prediction algorithm?" and (2) "What is the best choice of training data if domain-and text type-specific data is sparse?" By training and testing our text prediction algorithm on four different text types (Wikipedia, Twitter, transcriptions of conversational speech and FAQ) with equal corpus sizes, we found that there is a clear effect of text type on text prediction quality: training and testing on the same text type gave percentages of saved keystrokes between 27 and 34%; training on a different text type caused the scores to drop to percentages between 16 and 28%.
In our case study, we compared a number of training corpora for a specific data set for which training data is sparse: questions about neurological issues. We found significant differences between the text prediction scores obtained with the six training corpora: the Twitter and Speech corpora were the least suited, followed by the general Wikipedia and FAQ corpora. The highest scores were obtained by training the algorithm on the medical pages from Wikipedia, immediately followed by leave-one-out experiments on the 359 neurological questions. Differences in lexical coverage of the medical domain played a central role in the scores for the different training corpora.
Because we obtained good results with both the Medical Wikipedia corpus and the Neuro-QA questions themselves, we opted for a combination of both data types as training corpus in the initial version of the online text prediction application. Currently, a demonstration version of the application is running for ComPoli users. We hope to collect questions from these users to re-train our algorithm with more representative examples.
Theory of the spatial structure of non-linear lasing modes
A self-consistent integral equation is formulated and solved iteratively which determines the steady-state lasing modes of open multi-mode lasers. These modes are naturally decomposed in terms of frequency dependent biorthogonal modes of a linear wave equation and not in terms of resonances of the cold cavity. A one-dimensional cavity laser is analyzed and the lasing mode is found to have non-trivial spatial structure even in the single-mode limit. In the multi-mode regime spatial hole-burning and mode competition is treated exactly. The formalism generalizes to complex, chaotic and random laser media.
The steady-state electric field within and outside of a single or multi-mode laser arises as a solution of the non-linear coupled matter-field equations, the simplest of which are the two-level Maxwell-Bloch equations treated below. While the basic equations involved have been known for many years, and many aspects of their temporal dynamics have been studied [1], relatively little progress has been made in understanding the spatial structure of the non-linear electric field, particularly in the case of multi-mode solutions for which spatial hole-burning and other non-linear effects are critical. It is natural to attempt to understand the non-linear solutions in terms of solutions of a linear wave equation. The two standard choices are either the hermitian solutions of a perfectly reflecting (closed) passive laser cavity [2], or the non-hermitian non-orthogonal resonances of the open passive cavity [3,4]. In fact the intuitive picture of a lasing mode is that it arises when one of the resonances of the passive cavity is "pulled" up to the real axis by adding gain to the resonator. Often comparison of numerically generated lasing modes with calculated linear resonances does show strong similarities in spatial structure, providing useful interpretation of lasing modes [5,6], although not a predictive theory. However, with the current interest in complex laser cavities based on wave-chaotic shapes [7,8], photonic bandgap media [9,10], or random media [11,12], it is important to have a quantitative and predictive theory of the lasing states, as the numerical simulations required to solve the time-dependent Maxwell-Bloch equations are time-consuming and not easy to interpret.
In recent work we have formulated a theory of steady-state multi-mode lasing which addresses these concerns [13]. The theory implies that the natural linear basis for decomposing lasing solutions is the dual set of biorthogonal states corresponding to constant outgoing and incoming Poynting vector at infinity at the lasing frequencies (referred to as "constant flux" (CF) states). Our theory shows that even in conventional lasers it is incorrect to regard the lasing modes as corresponding to a single resonance of the passive cavity, and that multiple spatial frequencies occur even when there is a single lasing frequency close to the frequency of a single passive cavity resonance. These multiple spatial frequencies arise because several CF states contribute to a single lasing mode. Note that biorthogonal modes have been used extensively in resonator theory [14] (notably for the case of unstable resonators) but have not previously been applied to multi-mode lasing theory. For multi-mode lasing the main difficulty is treating modal interactions and the related effects of spatial hole-burning [15]. We sketch below an efficient method for treating these effects exactly, which can in principle be used in designing laser cavities to predict power output and tailor the mode spectrum of the laser. The techniques are illustrated for the simple case of a one-dimensional edge-emitting laser.
We begin with the semiclassical laser equations within the rotating wave and slowly-varying envelope approximations (see Ref. [13] for a derivation), describing a laser comprised of a uniform gain medium of two-level atoms (with level spacing ω_a) embedded in a background dielectric medium/cavity with arbitrary spatially varying index of refraction n(x). In these equations e(x,t) and p(x,t) are the envelopes of the field and polarization, D(x,t) is the inversion, D_0 is the pump strength, g is the dipole matrix element, and γ_⊥ and γ_∥ are damping constants for p and D, respectively. As usual the fast variation of the fields at ω_a is removed and the actual fields are given by (E, P) = (e, p)e^{iω_a t} + c.c. For simplicity we take e and p to be scalar fields, appropriate for the 1D case with TM polarization that we will discuss below; the same scalar form would apply for planar random or chaotic cavities. n(x) is the (possibly) spatially dependent index of refraction of the cavity.
We assume a steady-state lasing solution which is multi-periodic in time: e(x,t) = Σ_μ Ψ_μ(x) e^{-iΩ_μ t}, p(x,t) = Σ_μ p_μ(x) e^{-iΩ_μ t}. In contrast to standard modal expansions, not only the lasing frequencies {Ω_μ} but also the spatial mode functions {Ψ_μ(x)} are assumed to be unknown.
Such a multi-periodic solution requires that the inversion is approximately stationary [17], implying [13] that each lasing mode must satisfy the self-consistent equation

    Ψ_μ(x) = [iγ_⊥ D_0 k_μ² / (γ_⊥ − i(k_μ − k_a))] ∫_D dx′ G(x, x′|k_μ) Ψ_μ(x′) / [1 + Σ_ν Γ(k_ν)|Ψ_ν(x′)|²],   (4)

where Γ(k) = γ_⊥²/[γ_⊥² + (k − k_a)²] is the gain profile evaluated at the lasing frequency, D is the cavity domain, and G is the Green function of the cavity wave equation with purely outgoing boundary conditions; k = ω/c is an external wavevector of the lasing solution at infinity (for multimode solutions k = k_μ = Ω_μ/c). Henceforth we set c = 1 and use wavevector and frequency interchangeably. With these non-hermitian boundary conditions the spectral representation of G(x, x′|k) is of the form

    G(x, x′|k) = Σ_m φ_m(x, k) φ̄_m*(x′, k) / [k² − k_m²(k)].   (5)

Here the functions φ_m(x, k) are the CF states, which satisfy −∇²φ_m(x, k) = n²(x) k_m²(k) φ_m(x, k) with the non-hermitian boundary condition of only outgoing waves of wavevector k at the cavity boundary. For the special case of a 1D cavity of length a considered below (see Fig. 2, inset) this condition is just ∂_x φ_m(x)|_a = +ik φ_m(a). Note this differs subtly but importantly from the quasi-bound state boundary condition, where the complex eigenvalue k_m replaces the real wavevector k [13]. The dual set of functions φ̄_m(x, k) satisfy the complex conjugate differential equation with the boundary condition ∂_x φ̄_m(x)|_a = −ik φ̄_m(a), corresponding to constant incoming flux.
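For the 1D cavity this boundary condition reduces to a transcendental equation: with the mirror at the origin the interior solutions are φ_m ∝ sin(n_0 k_m x), and the outgoing condition at x = a gives n_0 k_m cos(n_0 k_m a) = ik sin(n_0 k_m a). The sketch below (our own illustration, not code from Ref. [13]) solves this by Newton iteration, seeded with the quasi-bound wavevectors quoted later in the text:

    import numpy as np

    def cf_eigenvalue(k, m, n0=1.5, a=1.0, steps=40):
        # Newton iteration for n0*q*cos(n0*q*a) - 1j*k*sin(n0*q*a) = 0.
        f  = lambda q: n0*q*np.cos(n0*q*a) - 1j*k*np.sin(n0*q*a)
        df = lambda q: (n0*np.cos(n0*q*a) - n0**2*q*a*np.sin(n0*q*a)
                        - 1j*k*n0*a*np.cos(n0*q*a))
        # quasi-bound wavevector as the starting guess
        q = (np.pi*(m + 0.5) - 0.5j*np.log((n0 + 1.0)/(n0 - 1.0))) / (n0*a)
        for _ in range(steps):
            q = q - f(q)/df(q)
        return q

    # external wavevector chosen at the real part of the 9th quasi-bound state
    k = np.real((np.pi*9.5 - 0.5j*np.log(5.0)) / 1.5)
    print(cf_eigenvalue(k, 9))   # k_9, close to the quasi-bound value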
In general these functions satisfy the biorthogonality relations [18]

    ∫_D dx n²(x) φ̄_m*(x, k) φ_n(x, k) = δ_mn,

and are also complete. These relations make it possible to expand an arbitrary lasing solution so that each Ψ_μ is a vector in the space of biorthogonal functions,

    Ψ_μ(x) = Σ_m a_m^μ φ_m^μ(x).   (6)

Here φ_m^μ(x) = φ_m(x, k_μ) and in what follows we define k_m^μ = k_m(k_μ). By substitution of (6) into Eq. (4) and use of the biorthogonality relations one finds

    a_m^μ = [iγ_⊥ D_0 k_μ² / ((γ_⊥ − i(k_μ − k_a))(k_μ² − (k_m^μ)²))] ∫_D dx n²(x) φ̄_m^μ*(x) Σ_{m′} a_{m′}^μ φ_{m′}^μ(x) / [1 + Σ_ν Γ(k_ν)|Ψ_ν(x)|²],   (7)

where we have rescaled the pump, 2πk_a g²D_0/ℏγ_⊥ → D_0, and measured the electric field in units e_c = ℏ√(γ_∥γ_⊥)/2g (these scaled units are already used in Eq. (4)). Equation (7) is the key result of our work; it determines the lasing mode(s), each of which is a superposition of CF states which depends on its lasing frequency, k_μ, and the pump power, D_0. It is useful to regard Eq. (7) as defining a map of the complex vector space of coefficients a^μ = (a_1^μ, a_2^μ, ...) into itself, where the actual non-zero lasing solution is a fixed point of this map. Above the lasing threshold for each mode, D_0t^μ, we find that the non-zero solutions are stable fixed points and trial vectors flow to them under iteration of the map, while the trivial zero solutions, which are stable below D_0t^μ, become unstable. Note that the map is proportional to [k_μ − k_m]^{-1}, favoring the CF state with complex wavevector close to the real lasing wavevector, and it is also proportional to [γ_⊥ − i(k_μ − k_a)]^{-1}, ensuring that the lasing frequency is near the center of the gain profile. It can be shown that in the high-finesse limit, in which the imaginary part of the CF frequency is very much smaller than the real spacing between them, only one CF state dominates the lasing state (the "single-pole approximation"), and this CF state is virtually identical to the corresponding linear resonance [13]. In this limit the picture of a single resonance being "pulled" up to the real axis is valid. However, in many realistic cases this limit is not realized and the actual lasing solution is the superposition of CF states determined by Eq. (7).
Eq. (7) determines the lasing frequencies as well. For the first lasing mode and a uniform index resonator this is particularly simple. At threshold the CF states in the denominator of the integrand can be ignored, and biorthogonality leads to the simple relation

    a_m = [iγ_⊥ D_0 k² / ((γ_⊥ − i(k − k_a))(k² − k_m²))] a_m.   (8)

For a non-trivial solution we must have a_m ≠ 0 for some m, and hence the coefficient must be real and equal to unity. The reality condition determines that the possible lasing frequencies at threshold are k = k_t^{(m)} ≈ (γ_⊥ q_m + κ_m k_a)/(γ_⊥ + κ_m), where k_m ≡ q_m(k) − iκ_m(k) (we suppress the index μ here). Furthermore, the modulus unity condition determines the threshold pumping,

    D_0t^{(m)} = |γ_⊥ − i(k − k_a)| |k² − k_m²(k)| / (γ_⊥ k²), evaluated at k = k_t^{(m)}.   (9)

The CF state m and associated frequency leading to the lowest threshold will be the first lasing mode. Note that in contrast to the traditional mode-pulling formula [16], where the cavity mode frequency is a fixed value, here the single-mode laser frequency is determined by the solution of a self-consistent equation (a transcendental equation for the 1D case). Nonetheless, for high-finesse cavities this condition agrees with standard results: the lasing frequency is very close to the cavity resonance nearest to the gain center, pulled towards the gain center by an amount which depends on the relative magnitude of γ_⊥ vs. κ_m (which is approximately the resonance linewidth) [13].

Above threshold the lasing mode is found by initially choosing the lasing wavevector, k = k_t^{(m)}, calculating the sets {φ_m}, {φ̄_m} corresponding to that choice, and then iterating Eq. (7) starting from a trial vector a(0) to yield output a(1). A natural choice for the initial vector is a_m = 1, a_p = 0 for all p ≠ m, where m is the dominant component at threshold, calculated from the above relations. As noted, the "lasing map" has the property that below the lasing threshold, D_0t, the iterated vector a → 0, and above this threshold it converges to a finite value which defines the spatial structure of the lasing mode in terms of the CF states. There is one crucial addition necessary to complete the algorithm. Note that Eq. (7) is invariant under multiplication of the vector a by a global phase e^{iθ}, so iteration of (7) can never determine a unique non-zero solution. Therefore it is necessary to fix the "gauge" of the solution by demanding that we solve (7) with the constraint of a certain global phase (typically we take the dominant a_m to be real). Thus after each iteration of (7) we must adjust the lasing frequency to restore the phase of a_m; it is just this gauge-fixing requirement which causes the lasing frequency to flow from our initial guess to the correct value above threshold. The invariance of (7) under global phase changes guarantees that the frequency thus found is independent of the particular gauge choice. For multi-mode lasing we repeat this procedure for each vector a^μ. In the single-mode regime only one of these vectors will flow to a non-zero fixed point.

This behavior of the multi-mode lasing map is illustrated in Fig. 1 for the simple uniform-index 1D cavity corresponding to an edge-emitting laser with a perfect mirror at the origin and an index step at x = a (inset to Fig. 2). Below the first threshold, determined from Eq. (9) above, the entire set of vectors a^μ flows to zero; above that threshold the first lasing mode turns on and its intensity grows linearly with pump strength. Due to its non-linear interaction with other modes, the turn-on of the second and third lasing modes is dramatically suppressed, leading to a factor of four increase in the interval of single-mode operation.
The intensity shows slope discontinuities at higher thresholds, as seen in normal laser operation. Note that in this approach the effects of spatial hole-burning and mode competition are treated exactly, and not in the near-threshold approximation (cubic non-linearity) traditionally used [2,16], which greatly underestimates the output power [13].
We now consider the spatial structure of the CF states defining the lasing modes. The linear resonances (quasi-bound states) of this system are easily found [13]; they are complex sine waves whose wavevectors satisfy $n_0 k^{qb}_m a = \pi(m + 1/2) - (i/2)\ln[(n_0 + 1)/(n_0 - 1)]$. The constant linewidth follows from the Fresnel transmissivity of the dielectric boundary at normal incidence. Note that $\kappa^{(qb)}_m > 0$ here, corresponding to amplification with increasing $x$. From Eq. (6) we expect the lasing mode to involve several CF states and to differ most from a single cavity resonance for a low-finesse cavity; thus we consider a relatively small index, $n_0 = 1.5$. The CF states depend on the lasing wavevector, $k$. In Fig. 2 we choose it to be the real part of the wavevector of the 9th resonance, $k^{qb}_9$, and plot the 7 closest CF eigenvalues, $k_m$ (inset, Fig. 2). The 9th CF state has $k_9 \approx k^{qb}_9$ and within the cavity is very close to the $m = 9$ resonance [13], but the other $k_m$, while they have $\mathrm{Re}[k_m] \approx \mathrm{Re}[k^{(qb)}_m]$ (hence similar FSR), have substantially larger or smaller $\kappa_m$ than $\kappa^{(qb)}_m$. Hence only the $m = 9$ CF state is close to a linear resonance, emphasizing that CF states are not resonances. We plot several of these states in Fig. 2, showing their different amplification rates. The actual lasing mode will be the sum of several of these CF modes with different spatial frequencies and amplification rates. In Fig. 3 we plot such a mode. Standard modal expansions in laser theory are equivalent to choosing only the central CF state and missing the contribution of these spatial "side-bands" [13]. The inset to Fig. 3 shows that near threshold only one CF state dominates (one can show that the other components are of order the cube of the dominant component). But well above threshold the two nearest-neighbor CF states are each 15% of the main component, and since one of these has a higher amplification rate, the final effect is to increase the output power by more than 43% (see Fig. 3). The sidebands are still 6% of the dominant component when the index is increased to $n_0 = 3$, leading to an increase in output power by 26% (see inset, Fig. 3).

[Fig. 3 caption: The full field (red line) has an appreciably larger amplitude at the output $x = a$ than the "single-pole" approximation (blue), which neglects the sideband CF components. Inset: the ratio of the two largest CF sideband components to that of the central pole for $n_0 = 1.5$ and $n_0 = 3$ vs. pump strength $D_0$.]
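For reference, the quasi-bound wavevectors quoted above are easy to tabulate directly; the snippet below just evaluates the closed-form expression from the text for $n_0 = 1.5$, confirming the constant linewidth set by the Fresnel transmissivity.

```python
import numpy as np

# Quasi-bound (resonance) wavevectors of the 1D slab, from the text:
# n0 * k_m^qb * a = pi*(m + 1/2) - (i/2) * ln((n0 + 1)/(n0 - 1))
n0, a = 1.5, 1.0
m = np.arange(7, 12)
k_qb = (np.pi * (m + 0.5) - 0.5j * np.log((n0 + 1) / (n0 - 1))) / (n0 * a)

# With k_m = q_m - i*kappa_m, kappa_m = -Im(k_m) is positive and m-independent.
for mi, k in zip(m, k_qb):
    print(f"m={mi}: q_m = {k.real:7.4f}, kappa_m = {-k.imag:7.4f}")
```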
The formalism we have presented here is ideal for treating random or wave-chaotic lasers, for which the output directionality, output power and mode spectrum are very hard to predict with heuristic arguments. Moreover, in these systems the finesse is typically parametrically smaller than unity, suggesting that each lasing mode will consist of many CF states without a dominant component. We have considered here the borderline case of a cavity with finesse of order unity, so there is a dominant CF component, but the spatial sidebands are still appreciable. Since our method treats the non-linearity and mode competition exactly, we anticipate that it may be useful in designing efficient semiconductor microcavity lasers. This work was supported by NSF grant DMR 0408636 and by the Aspen Center for Physics.
ST waveform analysis vs cardiotocography alone for intrapartum fetal monitoring: An updated systematic review and meta‐analysis of randomized trials
Abstract Introduction ST waveform analysis (STAN) was introduced as an adjunct to cardiotocography (CTG) to improve neonatal and maternal outcomes. The aim of the present study was to quantify the efficacy of STAN vs CTG and assess the quality of the evidence using GRADE. Material and methods We performed systematic literature searches to identify randomized controlled trials and assessed included studies for risk of bias. We performed meta‐analyses, calculating pooled risk ratio (RR) or Peto odds ratio (OR). We also performed post hoc trial sequential analyses for selected outcomes to assess the risk of false‐positive results and the need for additional studies. Results Nine randomized controlled trials including 28 729 women were included in the meta‐analysis. There were no differences between the groups in operative deliveries for fetal distress (10.9 vs 11.1%; RR 0.96; 95% confidence interval [CI] 0.82–1.11). STAN was associated with a significantly lower rate of metabolic acidosis (0.45% vs 0.68%; Peto OR 0.66; 95% CI 0.48–0.90). Accordingly, 441 women need to be monitored with STAN instead of CTG alone to prevent one case of metabolic acidosis. Women allocated to STAN had a reduced risk of fetal blood sampling compared with women allocated to conventional CTG monitoring (12.5% vs 19.6%; RR 0.62; 95% CI 0.49–0.80). The quality of the evidence was high to moderate. Conclusions Absolute effects of STAN were minor and the clinical significance of the observed reduction in metabolic acidosis is questioned. There is insufficient evidence to state that STAN as an adjunct to CTG leads to important clinical benefits compared with CTG alone.
| INTRODUCTION
The aim of fetal monitoring is to identify fetuses at risk of neonatal and long-term injury attributable to asphyxia and enable timely interventions to prevent cases of fetal damage.
Cardiotocography (CTG) was introduced in the 1960s and assumed to prevent fetal asphyxia, and soon became widely used in clinical practice. The use of CTG has been associated with a decrease in neonatal seizures after prolonged labor, but not with improved long-term outcomes in the child. It has also been associated with an increase in cesarean sections and assisted vaginal deliveries. 1 The CTG method has limitations such as low specificity, high false-positive rates and high inter-rater variability; therefore, a method with better diagnostic accuracy is needed to identify truly hypoxic fetuses.
The ST waveform analysis (STAN) method was introduced after extensive experimental research in Sweden. 2 In the event of a lack of fetal oxygen, anaerobic metabolism produces changes in the fetal ECG. The method can be used after rupture of membranes in single pregnancies after 36 weeks' gestation. A scalp electrode is necessary for monitoring.
Three meta-analyses [3][4][5] included the same six randomized controlled trials [6][7][8][9][10][11][12][13] and arrived at the same conclusions: that the absolute effect of CTG + STAN on neonatal outcomes was minor compared with CTG alone. There was a reduction in babies born with metabolic acidosis in cord blood in women allocated to the CTG + STAN group; the relative risk reduction was 36% and the absolute risk reduction 0.25%. The difference was statistically significant in only one of the three meta-analyses (0.45% vs 0.7%, Peto OR 0.64, 95% CI 0.46-0.88). 3 There were no differences in other neonatal outcomes, such as Apgar scores, neonatal seizures or encephalopathy, or transfer to a neonatal intensive care unit. [4][5] A newer systematic review and network meta-analysis evaluated the effectiveness of different types of fetal monitoring. 14 It reported that intermittent auscultation reduced the risk of emergency cesarean sections without compromising neonatal outcomes compared with other methods, except when compared with CTG in combination with STAN and fetal blood sampling. However, in two of the seven studies included in the CTG + STAN group in the network meta-analysis, the fetal ECG analyses were of the PR segment and not the ST segment. Therefore, the results should be interpreted with caution.
We aimed to update our previous systematic review 3 to quantify the efficacy of the STAN method as an adjunct to conventional CTG compared with CTG alone. In addition to conventional quality assessments of the included studies, we used the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) to assess the overall quality of evidence. 15 We performed trial sequential analyses (TSA) for selected outcomes to assess the risk of false-positive results, futility and the need for additional trials. 16
| MATERIAL AND METHODS
We updated our previous systematic review. 3 The protocol is published in the PROSPERO international prospective register of systematic reviews, registration no. CRD42015023563.
We repeated our previous literature searches, following the same search strategy, in the following databases: Ovid MEDLINE® (In-Process & Other Non-Indexed Citations, Ovid MEDLINE® Daily, Ovid MEDLINE® and Ovid OLDMEDLINE®), Embase Classic + Embase (Ovid), Web of Science® (Thomson Reuters), The Cochrane Library (Wiley) and CINAHL Plus (EBSCOhost). The searches were performed on October 31, 2022, with the limitation 2015-2022 (current), and new searches were performed on June 22, 2023. The searches are described in detail in Appendix S1.
| Study selection and data extraction procedures
The citations identified by the systematic searches were screened and potentially eligible studies were obtained in full text for further assessment. Two reviewers (EB, PØ) assessed eligibility of the studies independently. Persisting disagreements were resolved by consulting a third reviewer (LMR). The selection criteria were:
• Population: women in labor, ≥36 weeks of gestation with a singleton fetus in a cephalic presentation and a decision for continuous electronic fetal monitoring during labor;
• Intervention: CTG plus STAN;
• Comparator: CTG alone;
• Primary outcomes: operative deliveries for fetal distress, metabolic acidosis in the newborn (pH <7.05 and BD(ecf) >12 mmol/L in umbilical artery);
• Secondary outcomes: neonatal and perinatal outcomes.
Two of the reviewers (EB, PØ) extracted data from each study independently, using a predesigned form.
Key message

It is unclear whether ST waveform analysis is better for labor surveillance than conventional CTG. Evidence is insufficient to state that STAN as an adjunct to CTG leads to important clinical benefits compared with CTG alone.
| Assessments and synthesis
All studies meeting the selection criteria were critically appraised using the Risk of Bias tool developed and recommended by the Cochrane Collaboration. 17 Two reviewers (EB, LMR) performed the risk of bias assessments independently. Persisting disagreements were resolved by consulting a third reviewer (KGB).
Outcomes occurring relatively frequently were analyzed by calculating the pooled risk ratio (RR) with 95% confidence interval (CI) in accordance with a random-effect model. Rare outcomes with incidence <1% were combined using the Peto odds ratio and a fixed-effect model. 18 All computations were performed using REVIEW MANAGER (RevMan, Version 5.4.1. Copenhagen: The Nordic Cochrane Center, The Cochrane Collaboration, 2020). Forest plots intended for publication were prepared using R software (Version 4.3.0, Vienna: R Foundation for Statistical Computing, 2023) and the forest plot package. 19,20 To assess the risk of random errors and false-positive results, and to help clarify the need for additional trials by calculating an optimal information size, 16 we performed post hoc TSA in the TSA viewer (Version 0.9.5.10 beta. Copenhagen: Copenhagen Trial Unit, 2017). 21 We did not perform any subgroup analysis but conducted sensitivity analyses in which we excluded one trial using old STAN technology 11 and one trial that used a different algorithm for interventions. 7 Separate analyses were prepared to explore the impact of pooling data on neonatal and perinatal deaths.
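To make the pooling rule for rare outcomes concrete, here is a minimal sketch of the standard Peto one-step (fixed-effect) pooled odds ratio for a set of 2×2 tables. The trial counts shown are placeholders rather than data from the included studies, and the function name is ours.

```python
import math

def peto_pooled_or(trials):
    """Peto fixed-effect pooled OR from (events_t, n_t, events_c, n_c) tuples."""
    sum_o_minus_e, sum_v = 0.0, 0.0
    for e_t, n_t, e_c, n_c in trials:
        n, d = n_t + n_c, e_t + e_c          # total size and total events
        expected = n_t * d / n               # expected events in the treatment arm
        # hypergeometric variance of the treatment-arm event count
        v = (n_t * n_c * d * (n - d)) / (n**2 * (n - 1))
        sum_o_minus_e += e_t - expected
        sum_v += v
    log_or = sum_o_minus_e / sum_v
    se = 1.0 / math.sqrt(sum_v)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Placeholder 2x2 data (events, arm size) for three hypothetical trials:
trials = [(12, 2500, 19, 2450), (5, 1000, 8, 990), (20, 4000, 29, 4100)]
or_, (lo, hi) = peto_pooled_or(trials)
print(f"Peto OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```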
We present our overall assessment of the quality of evidence in a "summary of findings" table. The assessment includes the magnitude of effect of the STAN method vs CTG alone, and a summary of available data on the most important outcomes. 22 The quality of evidence was judged as either high, moderate, low or very low. 23
| RESULTS
The new electronic searches identified 282 records; after screening of titles and abstracts, 16 records were assessed in full text, 13 were excluded and three were included in the systematic review [24][25][26] (Figure 1 and Table 1). Reasons for exclusion and the bibliography of excluded records are shown in Appendix S2. Additional and corrected data are shown in Appendix S3.
Our previous systematic review included six studies; thus, data from nine randomized controlled trials were included in our updated review.
| Description of included studies
The new studies were performed in Spain, 24 Denmark 25 and Australia, 26 with 200, 1005 and 970 women and their babies, respectively (Table 1). In all, 28 729 women and their babies were included in the updated systematic review.
Most trials used the STAN S21 or S31 monitors (Neoventa AB), whereas the Westgate trial 11 used an older device without computerized assessment of the fetal ECG (STAN 8801, Cinventa AB).
The Westgate study included women from 34 weeks' gestation, and we therefore performed sensitivity analyses without that study. The decision algorithm used in the Belfort study 7 implied that the fetal heart rate status was classified into three zones (green, red, yellow), which correspond closely to the U.S. 2008 National Institute of Child Health and Human Development criteria. 27 If the fetal heart rate pattern is in the yellow zone, intervention is recommended if any ST event (either episodic or baseline increase) or two biphasic ST events occur. This algorithm is different from the one used in the other studies. Moreover, in all studies except the Belfort study, 7 fetal blood sampling was performed in both arms at the discretion of the obstetrician in charge. Therefore, we also conducted a sensitivity analysis without the Belfort study.
We assessed the overall risk of bias as low in all the included trials (Table 1, Appendix S4).
| The effect of the STAN method vs CTG alone
The nine available trials included 28 729 women in labor, but only a minority of the investigated outcomes reached statistical significance (Table 2, Appendix S5). Some of the investigated neonatal outcomes are rare, with incidences <1%, and it is difficult to gain sufficient statistical power for definite conclusions. Lack of power was not an issue for the investigated maternal outcomes, and our meta-analysis showed that STAN is associated with no difference in the rate of cesarean sections (RR 0.94; 95% CI 0.80-1.12) or assisted vaginal deliveries (RR 0.99; 95% CI 0.83-1.19) for fetal distress (Table 3).
Metabolic acidosis occurred with an incidence <1% in the group receiving CTG alone, and even lower in the STAN group (OR 0.66, 95% CI 0.48-0.90; Table 3). The difference corresponds to a number needed to treat to benefit of 441 (95% CI 249-1898) when the baseline risk is 0.7%. This means that one case of neonatal metabolic acidosis is avoided for every 441 women monitored with STAN instead of conventional CTG.
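The number needed to treat follows directly from the absolute risk reduction. The sketch below, with a helper name of our own, reproduces that arithmetic from the pooled OR and a baseline risk; small rounding differences in the inputs move the result around the published figure of 441.

```python
def nnt_from_or(odds_ratio, baseline_risk):
    """Number needed to treat, converting an OR and a control-arm risk to an ARR."""
    odds_c = baseline_risk / (1 - baseline_risk)
    odds_t = odds_ratio * odds_c
    risk_t = odds_t / (1 + odds_t)
    arr = baseline_risk - risk_t            # absolute risk reduction
    return 1.0 / arr, arr

for r0 in (0.0068, 0.007):                  # control-arm incidences quoted in the text
    nnt, arr = nnt_from_or(0.66, r0)
    print(f"baseline {r0:.2%}: ARR = {arr:.4%}, NNT = {nnt:.0f}")
```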
All included studies reported data on deaths and four reported neonatal seizures (Figure 2). Neither resulted in statistically significant differences between the STAN method and CTG alone. Perinatal and neonatal deaths had an OR of 1.55 (95% CI 0.62-3.91) and neonatal seizures 0.86 (95% CI 0.29-2.56). The CIs were wide when expressed in relative terms, but re-expressed in absolute terms, they imply that STAN can be associated with two fewer to 14 more deaths per 10 000 births, and between seven fewer and 15 more neonatal seizures per 10 000 births (Table 3). Apgar scores <4 after 5 minutes seemed to occur more frequently with STAN (OR 2.86, 95% CI 1.13-7.24), but the magnitude of this effect was inconsistent across the available trials (Appendix S5), and we found little or no difference with regard to the incidence of newborns with Apgar scores <4 after 1 minute (RR 1.11, 95% CI 0.61-1.99) and Apgar scores <7 after 5 minutes (RR 0.89, 95% CI 0.69-1.15; Table 3, Figure 2). Similarly, the other investigated neonatal outcomes pointed towards little or no difference between STAN and CTG alone.
| Sensitivity analyses
The results are robust with regard to inclusion or exclusion of the Westgate 11 or Belfort 7 trials (Appendix S5).Because we pooled studies reporting perinatal and neonatal deaths, we also conducted a sensitivity analysis to explore the impact of this decision.The remaining results were consistent (Appendix S5).
| Trial sequential analyses
We decided that a relative risk reduction of 20% would represent a clinically important difference in the number of operative deliveries for fetal distress (cesarean sections, vacuum or forceps). In this case, the TSA suggested that the available data were sufficient to conclude that the two treatments are non-inferior (Appendix S6).
Furthermore, as the majority of newborns with metabolic acidosis are without symptoms or elevated risk of adverse outcomes, 28,29 we regarded a 50% relative risk reduction as the clinically important difference in the incidence of metabolic acidosis. The main analysis indicated that there was a statistically significant difference between STAN and CTG alone (Appendix S6), but the conclusion depended on the choice of statistical methods. For example, the significance was lost when we used the Peto OR in combination with a random-effect model rather than in combination with a fixed-effect model. With regard to perinatal and neonatal deaths and neonatal seizures, the results were far from statistically significant, but the number of observed events was too small to allow firm conclusions about superiority or non-inferiority. For Apgar score <7 at 5 minutes, TSA confirmed there were no important differences between the groups.
| Summary of findings
The application of GRADE showed that the quality of evidence was moderate or high for the selected outcomes (Table 3).
| DISCUSSION
In this updated systematic review and meta-analysis of randomized controlled trials comparing ST waveform analysis against CTG alone, we found no significant difference in operative deliveries for fetal distress (either cesarean sections or assisted vaginal deliveries), but there was a reduction in metabolic acidosis. We found no difference in neonatal and perinatal deaths, neonatal seizures or encephalopathy, transfers to NICU or Apgar score <7 at 5 minutes, or in the composite outcomes. The number of fetal scalp blood samples was significantly reduced in the STAN group compared with the CTG group. No significant differences were found in cesarean section rates or assisted vaginal deliveries for any indication.
The updated review included three new studies [24][25][26] with 2175 women, and thus nine randomized trials with 28 729 women and their babies were included. The updated review shows similar results to the previous version, 3 except that a previously reported significant reduction in the number of operative vaginal deliveries for any indication in the STAN group disappeared.
Our review has several strengths. The findings are based on a thorough and up-to-date literature search that includes all available RCTs. All trials are associated with a low risk of bias, and our findings seem robust in the sensitivity analyses, where we excluded two trials that prompted questions regarding external validity. 7,11 In addition, we used GRADE to judge the quality of the evidence and TSA to assess statistical power considerations for different outcomes.
RCTs are considered the gold standard for clinical trials. They are typically conducted under the supervision of dedicated experts and in ideal conditions. Therefore, the external validity to a normal clinical setting (the distinction between efficacy and effectiveness) can be questioned. The setting is almost never identical across all trials investigating the effect of an intervention, and this was also the case for the nine available STAN trials. We believe the observed variation in settings is as close as can be expected to normal variation in clinical practice, and therefore we decided to include all nine trials in our meta-analysis. [31][32][33][34][35][36][37][38] We therefore conducted sensitivity analyses to investigate the robustness of our results. The overall conclusions of this review are robust with regard to the inclusion or exclusion of the oldest study that used non-computerized ST analysis 11 and the study from the USA that used another decision algorithm. 7 Of the numerous outcomes reported in the included trials, we argue that the most important are perinatal and neonatal death, neonatal encephalopathy, seizures and Apgar score <4 at 5 minutes.
Important long-term neurologic sequelae such as cerebral palsy or other neurodevelopmental morbidity are unfortunately not reported.
Outcomes such as Apgar score <7 at 5 minutes, intubation for ventilation and transfers to NICU are less important. From a methodological perspective, we note that all relevant neonatal outcomes occur with very low incidence (for example, <0.1% for death and 0.56% for metabolic acidosis). Under such circumstances, there is a risk that the use of relative effect sizes such as odds ratios inflates the reader's perception of the magnitude of a possible effect. 39 Misconceptions can be avoided by presenting the relative effect sizes together with the corresponding differences in absolute terms (Table 3). The absolute risk reduction in metabolic acidosis in the STAN group compared with the CTG group is 0.23% and is probably of little clinical relevance, although the relative risk reduction is 34%.
It is common to view metabolic acidosis as a crucial outcome, but we regard it as a surrogate endpoint. The appropriate use of surrogate endpoints requires accurate knowledge and a direct correlation between the surrogate and the truly important outcome. We argue that the relationship between metabolic acidosis and harder outcomes is questionable. There is a known relationship between low cord artery pH values and serious outcome, but the threshold remains unknown. 40,41 Few neonates with severe acidemia appear to have sequelae, particularly those who are healthy at birth, and most neonates with adverse outcomes, also those with seizures, are not born acidemic. 28,42 Recent studies also report that umbilical artery pH and base excess are poor predictors of short-term outcomes such as low 5-minute Apgar score and of long-term neurodevelopmental morbidity. 29,42,43 The causes of severe long-term neurologic sequelae are probably more complex than previously believed and not simply due to hypoxia with metabolic acidosis. 44 Thus, it seems obvious that metabolic acidosis is a surrogate endpoint and should be interpreted with caution. We found a statistically significant difference in favor of STAN when comparing the incidence of metabolic acidosis, without observing similar effects in other important outcomes.

[Table 1 (Characteristics of included studies) covers the trials: Amer-Wåhlin, Sweden 6,12; Belfort, USA 7; Kuah 2023, Australia 26; Ojala, Finland 8; Puertas, Spain 24; Vayssiere, France 9; Victor, Denmark 25; Westerhuis, The Netherlands 10,13; Westgate, UK 11.]

[Table 3 footnotes: a. The risk in the intervention group (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). b. Inconsistency in effect estimates. c. A surrogate endpoint with questionable clinical importance; chose not to downgrade. d. Wide confidence intervals, imprecise data. e. Optimal information size not achieved.]

[Figure 2: Forest plot analyses for selected outcomes.]
In addition to conventional meta-analysis, we performed TSA on selected outcomes to explore the possible impact of random errors and false-positive results on the conclusions of our meta-analysis. TSA also allows power analysis to clarify the need for additional trials. 16 These analyses showed that the statistical power is too low to draw firm conclusions about superiority or non-inferiority of either STAN or CTG alone for neonatal seizures or deaths. On the other hand, TSA showed adequate statistical power to conclude that the STAN method is probably not associated with important reductions in Apgar <7 at 5 minutes or in operative deliveries for all indications (cesarean sections, vacuum or forceps).
We found that metabolic acidosis was associated with a statistically significant improvement in favor of STAN. REVMAN does not enable the use of a random-effect model in combination with Peto OR effect sizes; therefore, the main analysis was based on a fixed-effect model. Interestingly, the TSA showed that the significant finding for metabolic acidosis disappeared in meta-analyses based on random-effect models, a result that underpins the need for caution in interpreting the statistically significant finding for metabolic acidosis.
A recent Norwegian population study investigated the impact of the introduction of the STAN technology on changes in the occurrence of fetal and neonatal deaths, Apgar scores <7 at 5 minutes, intrapartum cesarean sections and assisted vaginal deliveries, while controlling for time- and hospital-specific trends and maternal risk factors. 45 The analyses found that the introduction of STAN into clinical practice had no impact on fetal or neonatal deaths, or on the rates of cesarean section or assisted vaginal deliveries. However, it was associated with a small but statistically significant increase in the occurrence of babies with Apgar score <7 at 5 minutes. 45 The reduction in fetal blood sampling was expected, as it is one main reason for introducing the STAN technology, although fetal blood sampling was optional in most of the RCTs.
In a recent commentary, Chandraharan stated that the problem with STAN is the manufacturer's guidelines for interpreting CTG, which group traces into the classes "normal", "intermediary" and "abnormal". 46 He argued that if a physiological interpretation of CTG was used instead of the current tool based on pattern recognition, STAN's true potential to improve clinical outcomes might be observed. 46 The theory that implementing a new guideline of physiological interpretation of CTG, rather than the current guidelines based on pattern recognition, will improve clinical outcomes remains to be tested through clinical trials. Today, the STAN method is in widespread use in Denmark and Norway, but not in Sweden or Iceland. In Finland, one obstetric unit uses STAN. It is used in 20% of the obstetric units in the UK, in none in Ireland, and in some units in the Netherlands, Belgium and some other European countries. STAN is used in one hospital in Australia and is not used in the USA.

| CONCLUSION

Our updated systematic review and meta-analysis of nine randomized controlled trials comparing ST waveform analysis against CTG alone, including 28 729 women and their babies, showed no reduction in important clinical outcomes such as severe neonatal morbidity, mortality rates or operative delivery rates. The significant but modest absolute reduction in metabolic acidosis of 0.23% should be interpreted with caution. To the best of our knowledge, no new randomized clinical trial is planned, and it is time to conclude that STAN carries no important clinical benefits compared with CTG alone.

AUTHOR CONTRIBUTIONS

EB screened titles and abstracts, assessed articles in full text, assessed risk of bias, extracted data, graded the results and wrote the first draft. KGB performed the analyses, wrote the Method section and graded the results. ER performed the literature searches and described the searches in the paper. LMR assessed risk of bias and graded the results. PØ screened titles and abstracts, assessed articles in full text and extracted data. All authors contributed to revision of the paper.
CONFLICT OF INTEREST STATEMENT
None declared.
[Figure 1: Flow diagram of the study selection process. Records identified via databases: Medline (n = 46), EMBASE + EMBASE Classic (n = 79), CINAHL (n = 67), Web of Science (n = 127), Cochrane (n = 69). Duplicate records removed by automation tool (n = 106); records screened (n = 282); records excluded by humans (n = 266); reports sought for retrieval (n = 16); reports not retrieved (n = 0); reports assessed for eligibility (n = 16); reports excluded (n = 13): abstracts (n = 7), not relevant (n = 4), study protocols (n = 2). New studies included in review (n = 3); studies included in previous version of review (n = 6); total studies included in review (n = 9).]
[Table 1 note (Amer-Wåhlin 6): Originally, 5049 women were included and randomized to the study. Of these, 83 were excluded for technical reasons, leaving 4966 women for the analyses. In 2011 the authors published analyses according to intention-to-treat including the 83 previously excluded cases. The estimates are based on the publication from 2001. 6 See Appendix S3 for detailed risk of bias assessment.]

[Table 2 columns: No. of studies; Events, n/N; Effect measure; Effect size (95% CI); I² (%). Notes: Peto odds ratio (OR), fixed-effect model, for outcomes with incidence <1%; otherwise risk ratio (RR), random-effect model. Composite of intrapartum death, neonatal death, Apgar <4 at 5 minutes, neonatal seizures, metabolic acidosis, intubation at birth, or neonatal encephalopathy. Composite endpoint of 1-minute Apgar score <4 or 5-minute Apgar score <7 or metabolic acidosis or admission to NICU >24 hours.]

[Table 3: Summary of findings for selected outcomes. Total operative deliveries = cesarean sections + assisted vaginal deliveries.]
Imaging local discharge cascades for correlated electrons in WS2/WSe2 moiré superlattices
Abstract:
Transition metal dichalcogenide (TMD) moiré heterostructures provide an ideal platform to explore the extended Hubbard model 1 where long-range Coulomb interactions play a critical role in determining strongly correlated electron states. This has led to experimental observations of Mott insulator states at half filling 2-4 as well as a variety of extended Wigner crystal states at different fractional fillings [5][6][7][8][9] . Microscopic understanding of these emerging quantum phases, however, is still lacking. Here we describe a novel scanning tunneling microscopy (STM) technique for local sensing and manipulation of correlated electrons in a gated WS2/WSe2 moiré superlattice that enables experimental extraction of fundamental extended Hubbard model parameters. We demonstrate that the charge state of local moiré sites can be imaged by their influence on STM tunneling current, analogous to the charge-sensing mechanism in a single-electron transistor. In addition to imaging, we are also able to manipulate the charge state of correlated electrons. Discharge cascades of correlated electrons in the moiré superlattice are locally induced by ramping the STM bias, thus enabling the nearest-neighbor Coulomb interaction (UNN) to be estimated. 2D mapping of the moiré electron charge states also enables us to determine onsite energy fluctuations at different moiré sites. Our technique should be broadly applicable to many semiconductor moiré systems, offering a powerful new tool for microscopic characterization and control of strongly correlated states in moiré superlattices.
Here we describe a new STM-based technique for imaging and manipulating the charge states of correlated electrons in gated WS2/WSe2 moiré superlattices that enables the determination of nearest-neighbor Coulomb interaction energies and onsite energy fluctuations. Using this technique we are able to image the charge state of moiré sites via their influence on the tunneling current between an STM tip and the WS2/WSe2 heterostructure. This is analogous to Coulomb blockade in single-electron transistors, where the presence of an additional electron can dramatically modulate electrical transport. By combining a back-gate voltage with the STM bias, this mechanism enables us to locally charge and discharge correlated moiré electrons.
Gradually ramping the STM bias under these conditions results in a cascade of discharging events for correlated electrons at multiple neighboring moiré sites. Systematic investigation of this discharge cascade allows determination of nearest-neighbor Coulomb interactions as well as onsite energy fluctuations within the moiré superlattice.
A schematic diagram of our aligned WSe2/WS2 heterostructure device is shown in Fig. 1a. We used an array of graphene nanoribbons (GNRs) with 100-200 nm separation as a top contact electrode, and the doped silicon substrate as a back gate to control the global carrier density within the heterostructure. This configuration has proven to work reliably for STM studies of TMD materials on insulating substrates down to liquid helium temperatures 18 . Details of the device fabrication are presented in ref. 18. Figure 1b shows an ultra-high vacuum (UHV) STM image of the moiré superlattice in an exposed WSe2/WS2 area between two graphene nanoribbons at T = 5.4 K. Three types of moiré sites are labeled: AA, B Se/W , and B W/S , with the corresponding chemical structures shown in the top-view sketch of Fig. 1c. The moiré period is 8.1 nm, indicating a near-zero twist angle.
We characterized our gated WSe2/WS2 heterostructure via scanning tunneling spectroscopy (STS). Figs. 1d and 1f show plots of the differential conductivity (dI/dV) at the B Se/W and B W/S sites, respectively, as a function of both the STM tip bias Vb and the back-gate voltage Vg. Here we focus on the electron-doped regime of the moiré heterostructure at positive Vg, with the Fermi level located in the conduction band. The dI/dV spectra show negligible signal for 0 < Vb < 0.4 V due to the very small tunneling probability into conduction band-edge states, which lie at the K point of the bottom WS2 layer and which feature a large out-of-plane decay constant 18 . At Vb > 0.4 V tip electrons can more readily tunnel into the conduction band Q-point states, which have a smaller decay constant (i.e., they protrude further into the vacuum).
In addition to a general increase in the dI/dV signal for Vb > 0.4 V, we observe sharp dispersive dI/dV peaks at the different moiré sites (the bright features labeled by blue arrows in Figs. 1d,f). These dispersive peaks can be observed more clearly in the density plot of the second derivative (d²I/dV²), as shown in Figs. 1e,g. To better understand the origin of these peaks we performed 2D dI/dV mapping at STM bias voltages of Vb = 0.775 V (Fig. 1h) and Vb = 0.982 V (Fig. 1i). 19 Charging/discharging events at the B Se/W sites thus lead to a corresponding jump in the STM tunneling current and result in sharp peaks in the dI/dV spectra (similar ring-like charging behavior has been seen via STM in other nanoscale systems [20][21][22][23][24] ).
Ab initio calculations show that the WSe2/WS2 moiré flat band states at the conduction band edge are strongly localized at the B Se/W sites in real space (see Fig. S1 in the SI). For Vg > 25 V, the moiré global filling factor n/n0 is greater than 1, where n is the gate-controlled carrier density and n0 is the carrier density corresponding to a half-filled moiré miniband with one electron per moiré lattice site. At this gate voltage there is thus at least one electron localized at each B Se/W site. The STM tip behaves as a local top gate that modifies the energy of electrons at nearby moiré sites. At positive sample bias, Vb, negative charge accumulates at the tip and repels nearby electrons, thus causing electrons localized at B Se/W sites to discharge when Vb exceeds a threshold value (see the sketch in Fig. 1j). The efficiency with which the STM tip discharges nearby localized electrons depends sensitively on their distance to the tip, and thus results in circular discharging rings for a given Vb (Fig. 1h). These rings expand continuously with increased Vb, since larger Vb enables the discharge of localized electrons at larger tip-electron distances (Fig. 1i). When the tip is inside a discharge ring the circled moiré site is empty, whereas it contains an electron when the tip is outside the ring. The effects of electron correlation on cascade discharging can be more effectively visualized in position-dependent dI/dV spectra. Fig. 3a shows position-dependent dI/dV spectra along the green line in Fig. 2f. This line passes through a high-symmetry 2-ring crossing point (marked D) which is equidistant from neighboring B Se/W sites I and II. For positions near D, sites I and II are both occupied at low Vb (i.e., n = 2, where n is the total electron count for the adjacent moiré sites). As Vb increases, however, the tip successively discharges the two sites, and n changes from 2 to 1 and then from 1 to 0 as Vb crosses two dI/dV discharge peaks. At D one would expect by symmetry that these two discharging events should occur at the same value of Vb in a non-interacting picture. The data of Fig. 3a, however, show that these two discharging events occur at different Vb values, with a discharging gap of ΔVb^D = 122 ± 9 mV (obtained via high-resolution dI/dV mapping, see SI).
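The ring geometry follows from the threshold condition alone. As a toy illustration (not the paper's electrostatics), assume the tip-induced potential falls off with lateral tip-site distance r as a hypothetical Lorentzian lever arm alpha(r); solving e·alpha(r)·Vb = eps_eff for r then gives a ring radius that grows with Vb, as in Figs. 1h-i. All numbers below are made up for the sketch.

```python
import numpy as np

# Toy discharge-ring model: a site at lateral distance r from the tip discharges
# when the tip-induced potential energy e*alpha(r)*Vb exceeds an effective
# onsite threshold eps_eff (which absorbs the gate term). alpha(r) is an
# assumed Lorentzian profile standing in for the real electrostatics.
alpha0, r0 = 0.3, 4.0          # peak lever arm and decay length (nm), assumed
eps_eff = 0.10                  # effective threshold in eV, assumed

def alpha(r):
    return alpha0 / (1.0 + (r / r0)**2)

def ring_radius(Vb):
    """Largest r with alpha(r)*Vb > eps_eff; nan below the onset bias."""
    if alpha0 * Vb <= eps_eff:
        return float("nan")                 # tip cannot discharge even at r = 0
    return r0 * np.sqrt(alpha0 * Vb / eps_eff - 1.0)

for Vb in (0.3, 0.5, 0.8, 1.0):             # sample biases in volts
    print(f"Vb = {Vb:.1f} V -> ring radius = {ring_radius(Vb):.2f} nm")
```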
Similar behavior can be seen at another high-symmetry point, marked T in Fig. 2f. Here the STM tip is equidistant from three neighboring B Se/W sites marked I, II, and III, and so the discharge cascade involves three electrons. Fig. 3e shows position-dependent dI/dV spectra along the yellow line in Fig. 2f, which passes through T. Three dI/dV peaks are seen in the spectra, corresponding to a cascade of three discharging events that decrease the total number of electrons (n) in sites I-III from 3 to 2 to 1 and then, finally, to 0. At T we observe the voltage difference between the 3→2 and 2→1 discharge peaks to be identical, within the uncertainty of our measurement, to the difference between the 2→1 and 1→0 discharge peaks: ΔVb^T = 166 ± 11 mV.
In order to interpret the discharge cascade of correlated electrons in our TMD moiré system in terms of physically significant parameters, we employ a simplified N-moiré-site model that includes on-site and nearest-neighbor interactions. The Hamiltonian describing our system is H = Σi (εi + νi) ni + UNN Σ<ij> ni nj. Here UNN is the nearest-neighbor Coulomb interaction term, <ij> sums only over nearest neighbors, and N represents the number of moiré sites close to the tip (equal to 2 or 3 in our system). ni is the electron number at moiré site i, εi is the onsite energy at site i, and νi is the potential energy shift induced by Vb and Vg at site i. νi has the form νi = e·αi·Vb − e·βi·Vg, where e = 1.6 × 10⁻¹⁹ C, and αi and βi are dimensionless coefficients describing the electrostatic potential on site i induced by Vb and Vg, respectively. We note that αi depends sensitively on the tip position rt, meaning αi = αi(rt). In this model we neglect intersite hopping due to the small bandwidth of the moiré flat band (~5 meV, see SI). We also ignore onsite Coulomb interactions since the total number of electrons (n = Σi ni) for sites near the tip is smaller than N and the energy of double occupancy of a single site is assumed to be prohibitively large. This Hamiltonian is meant to describe electrons in the lowest conduction band near the tip, since higher-energy delocalized electrons are assumed to be swept away by tip repulsion for Vb > 0.
Our strategy for understanding the discharge phenomena observed here is to explore the consequences of this Hamiltonian for different electron occupation values n. By comparing the different total energies, E(n), we can identify the charge occupation, n*, that has the lowest total energy and we assume that this is the ground state. A discharge event from the n = n* + 1 state to the n = n* state occurs when E(n*) < E(n* + 1). Since our measurements primarily involve discharging events, the largest relevant energy is UNN which eclipses the energy associated with intersite hopping. As a result, the behavior induced by (1) can be treated within an essentially classical framework (in the context of discharge phenomena) that is adequate to extract information on the Hubbard model parameters εi and UNN from our data, the main goal of this work.
We start by applying this model to analyze the discharge behavior that occurs when the STM tip is held at position D. Here the tip is equidistant from sites I and II (the closest moiré sites to the tip) and so we model the moiré system as an N = 2 cluster as illustrated in Fig. 3b.
The on-site energy (Eq. (1)) for sites I and II can be written as ε, and the electrostatic potential energy (Eq. (2)) for each site is ν = e·α(rD)·Vb − e·β·Vg, where rD = 3.9 nm is the tip distance to sites I and II, from which the tip is equidistant. Straightforward energetic considerations allow the N = 2 ground-state energy for different n to be written as E(n = 2) = 2(ε + ν) + UNN, E(n = 1) = ε + ν, and E(n = 0) = 0. Fig. 3c shows a plot of E(2), E(1), and E(0) as a function of the applied electrostatic potential, ν.
Three different regimes can be seen where the ground state transitions from an n = 2 charge state to an n = 1 charge state and then to an n = 0 charge state as ν is increased. The boundaries between these different charge states mark the locations of electron discharging events. The first discharging event happens when E(2) = E(1), which occurs at the potential ν1 = −ε − UNN. The second discharging event happens when E(1) = E(0), which occurs at the potential ν2 = −ε. The difference in electrostatic potential energy between these two discharge events is then Δν = ν2 − ν1 = UNN. Using Eq. (2) and assuming that the gate voltage remains unchanged while ramping the bias voltage (typical for our experiments) allows UNN to be expressed in terms of the first and second discharge bias voltages: UNN = e·α(rD)·ΔVb^D. The discharge cascade behavior when the tip is located at T can be analyzed using similar reasoning, except for an N = 3 cluster instead of an N = 2 cluster. In this case ν = e·α(rT)·Vb − e·β·Vg, and the equal spacing of the three discharge potentials gives UNN = e·α(rT)·ΔVb^T. A similar analysis can be used to determine variations in the Hubbard on-site energy, εi, for a moiré superlattice. This comes from the fact that for small Vb the STM tip can only discharge a single moiré site, whose energy is described by E(1) = ε + ν and E(0) = 0 (i.e., the N = 1 limit). In this case, discharge happens when E(1) = E(0), which occurs when ν = −ε.
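A short enumeration makes the cascade explicit. The sketch below evaluates the cluster energies E(n) given above for N = 2 and N = 3 over a ramp of the potential ν and reports the discharge boundaries; the ε and UNN values are placeholders, and the equal spacing of the boundaries by UNN is the point being checked.

```python
import numpy as np

# Ground-state occupation of an N-site cluster versus the tip-induced potential
# nu, using E(n) = n*(eps + nu) + n*(n-1)/2 * U_NN (all sites equivalent, and
# every pair a nearest neighbor for N = 2 and the triangular N = 3 cluster).
eps, U_NN = -0.50, 0.025        # onsite energy and interaction (eV), placeholders

def ground_state_n(nu, N):
    energies = [n * (eps + nu) + 0.5 * n * (n - 1) * U_NN for n in range(N + 1)]
    return int(np.argmin(energies))

for N in (2, 3):
    nu_grid = np.linspace(0.40, 0.60, 20001)
    n_of_nu = np.array([ground_state_n(nu, N) for nu in nu_grid])
    jumps = nu_grid[1:][np.diff(n_of_nu) != 0]     # discharge boundaries
    print(f"N={N}: discharge potentials nu = {np.round(jumps, 4)} eV "
          f"(spacings {np.round(np.diff(jumps), 4)})")
```

With these placeholders the boundaries come out at ν = −ε − (n − 1)·UNN and are spaced by exactly UNN, reproducing the equal gaps seen at T.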
Fluctuations in ε are thus directly related to fluctuations in the discharge potential, δε = −δν_dis, which (using Eq. (2)) leads to δε = −e·α(rt)·δV_dis, where δV_dis represents spatial fluctuations in the measured single-site discharge voltage.
This type of behavior can be seen experimentally as shown in Fig. 4. Fig. 4a shows a dI/dV map of a pristine region of the WSe2/WS2 moiré superlattice for Vg = 50 V and Vb = 0.465 V. The discharge rings around the B Se/W moiré sites are quite uniform in this defect-free region.
This uniformity is also seen in a dI/dV spectra linecut (Fig. 4b) that goes through five moiré sites along the red line in Fig. 4a. Fig. 4c, on the other hand, shows a dI/dV map obtained near a point defect (marked by a red dot) for a similar set of parameters (Vg = 53 V and Vb = 0.740 V). Here the discharge rings are highly non-uniform (the defect moiré site itself does not show a clear discharge ring for this set of parameters due to the large change in its onsite energy). The magnitude of the effect of the defect on neighboring moiré sites can be seen through dI/dV spectra (Fig. 4d) obtained along the red linecut shown in Fig. 4c. As seen in Fig. 4d, the defect causes significant changes in the onsite energies of adjacent moiré sites. The discharge bias (measured at the discharge ring center, rt = 0) of sites I and II, for example, is ~200 mV lower than those for sites III and IV (see blue dashed line in Fig. 4d). This implies that the on-site energy shift on sites I and II is Δε ≈ e·α(0)·(200 mV).
A problem with our characterization of the moiré Hubbard parameters up to now is that we cannot convert them to quantitative energies until we determine α(rt), the geometric electrostatic conversion factor of Eq. (2). In particular, we require α(rD), α(rT), and α(0) to quantitatively determine UNN and δε. We can gain some experimental insight into the behavior of α(rt) from the slopes of the lines representing discharge peaks in the dI/dV plots of Figs. 1d and 1f, combined with the electrostatic simulations described in the SI. These factors allow us to extract a quantitative value of UNN = 22 ± 2 meV from our measurements at D and UNN = 27 ± 2 meV from our measurements at T. These two values of UNN are in reasonable agreement with each other, a self-consistency check that helps to validate our overall approach. We are also able to determine the fluctuation in onsite energy of sites I and II around the point defect in Fig. 4c to be ~65 meV.
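The conversion from measured voltage gaps to energies is a one-line multiplication, UNN = e·α·ΔVb. The sketch below simply inverts the published numbers to show the lever arms they imply; the α values printed are outputs of this back-calculation, not figures quoted in the paper.

```python
# Back out the electrostatic lever arms alpha implied by the reported numbers:
# U_NN = e * alpha * dVb at D and T, and likewise for the onsite shift near
# the defect. Energies in eV, voltage gaps in V, so e cancels numerically.
reported = {
    "D point": (0.022, 0.122),   # (U_NN in eV, measured gap dVb^D in V)
    "T point": (0.027, 0.166),   # (U_NN in eV, measured gap dVb^T in V)
    "defect":  (0.065, 0.200),   # (onsite shift in eV, discharge-bias shift in V)
}
for label, (energy_eV, dVb) in reported.items():
    print(f"{label}: implied alpha = {energy_eV / dVb:.2f}")
```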
The expected value of UNN can be roughly estimated by considering the energy difference associated with the initial position (r_i) of the discharging electron and its final position (r_f) after discharge. For an electron being discharged from the N = 2 cluster discussed above for point D, the initial electrostatic energy is U(r_i) ≈

All authors discussed the results and wrote the manuscript.
Notes
The authors declare no financial competing interests.
Data availability
The data supporting the findings of this study are included in the main text and in the Supplementary Information files, and are also available from the corresponding authors upon reasonable request.
DFT calculation of conduction flat bands
The near-0° twisted WS2/WSe2 moiré superlattice undergoes a structural reconstruction which leads to out-of-plane buckling of the two layers and in-plane strains, which have been studied in a previous work 1 . Here, we study the influence of these reconstructions on the electronic structure of the conduction band states, which is relevant to the experiment. The band structure (including spin-orbit coupling) of the reconstructed moiré pattern is plotted in the moiré supercell BZ in Figure S1a. The three lowest-energy flat bands are states derived from the K point of the WS2 layer and are strongly localized in the moiré pattern, as shown in Figs. S1b, S1c and S1d, respectively. The discharging behavior observed in the experiment is that of an electron in the c1 band, in which the wavefunction is localized on the WS2 layer at the B Se/W site (as shown in Figure S1b).
For the simulation, the moiré superlattice is constructed using a 25×25 supercell of WSe2 and a 26×26 supercell of WS2 to ensure commensurability. The structural reconstruction is simulated using forcefield relaxation of the moiré superlattice as implemented in the LAMMPS 2 package. The intralayer interactions were modeled using the Stillinger-Weber 3,4 forcefield and the interlayer interactions were modeled using the Kolmogorov-Crespi 5,6 forcefield. The force minimization was performed using the conjugate gradient method with a tolerance of 10⁻⁶ eV/Å.
The electronic structure of the reconstructed superlattice is studied using density functional theory 7 calculations as implemented in the Siesta 8 package. A supercell size of 21 Å was used in the out-of-plane direction to avoid spurious interaction between periodic images. A double-zeta plus polarization basis is used to expand the wavefunctions. Norm-conserving pseudopotentials 9 are used in the simulation and the exchange-correlation functional 10 is approximated using the local density approximation. Only the Γ point was sampled in the supercell BZ to obtain the self-consistent charge density.
Extracting ΔVb^D and ΔVb^T via high-resolution dI/dV spectral mapping
We performed high-resolution dI/dV spectral mapping to measure the gaps between the cascade discharging peaks at the high-symmetry D and T points (ΔVb^D and ΔVb^T), as shown in Fig. S3.
In order to obtain the discharging peak positions at the D and T points more precisely, we performed dI/dV spectroscopy measurements on a grid with spatial step size < 0.7 Å and bias step size < 14 mV. Figs. S3a-S3p show the evolution of the dI/dV maps sliced at different biases.
Consistent with the results shown in the main text, three neighboring discharging rings expand and cross each other as the bias is increased. The nearest-neighbor interactions induce the split cascade discharging peaks at the T and D points. The high spatial resolution allows us to more easily determine the T and D points (labeled in Figs. S3f and S3i). The corresponding spectra are shown in Figs. S3q (T point) and S3r (D point), with the discharging peaks labeled with blue arrows. Here the discharging peak width is ~40 mV, much larger than the bias step size (14 mV).
Multiple moiré sites were measured to obtain the statistical ΔVb^D and ΔVb^T data shown in the main text. The dI/dV spectroscopy mapping measurements were performed on a square grid. At each point, the tip height was first fixed at the setpoint Vb = -3.05 V and I = 104 pA, and then the dI/dV spectrum at that point was measured under open-loop conditions.

[Fig. S3q,r caption: dI/dV spectra measured at the (q) T and (r) D points; the cascade discharging peaks are labeled with blue arrows.]
Electrostatic simulation of α and β
Here we describe our determination of the values of α and β through the use of electrostatic simulations. The system geometry of our model is illustrated in Fig. S4a. The STM tip is approximated as a metallic cone with half angle θ. The apex-sample distance is H. The WS2/WSe2 heterostructure is approximated by two separate regions. The first region is a circular area with radius Rhole surrounding the moiré sites of interest (MSOI). The charge configuration in this region is set by the charging/discharging events described in the main text. The second region is the area outside the circle. The carrier doping in this region is relatively high and we can approximate it as a thin metal sheet. The Si back gate is regarded as an infinitely large metal plate and is separated from the TMD heterostructure plane by a distance d = 320 nm. The dielectric constants for the space above and below the TMD heterostructure are taken to be 1 (vacuum) and 4.2 (hBN and SiO2).
As mentioned in the main text, the potential energy shift induced by Vb and Vg has the form ν = e·α·Vb − e·β·Vg, where we have neglected the moiré site index i. We determine the values of α and β via the following method: first, to determine β, we ground the tip and the metallic plane and apply a 1 V potential to the Si back gate. The value of the induced electric potential at the position of the moiré electron (the dot at the hole center) then equals β. To determine α, we ground the Si back gate and the metallic plane and apply a 1 V potential to the tip. The resulting value of the electric potential at the hole center then equals α.
Here we describe how to interpret the simulation results and compare them to the experimental data. Our simulation results should be fitted to the variation of the slopes of the dispersive discharging peaks (for example, Figs. 4d-g). As discussed in the main text, the slopes are equal to β/α(rt). When the tip is close to the MSOI (for example at rt = 0), both H and θ significantly impact α. However, when rt is large, the impact of the tip height, H, on α is negligible, since H plays a negligible role in determining the distance between the MSOI and the tip apex. Therefore, the shape of the curve β/α(rt) vs rt can help us to choose the correct H and θ to match our experimental results. Finally, the radius of the hole Rhole determines the screening strength of the surrounding TMD heterostructure on the external potential applied by the tip and the back gate. The value of Rhole therefore does not significantly affect the shape of the curve β/α(rt) vs rt (as long as rt is smaller than Rhole, which is of the same order of magnitude as the moiré period, a condition that is satisfied here).
Our electrostatic simulations were performed using COMSOL. The distribution of β/α(rt) as a function of rt is well reproduced by using the following parameters: θ = 30°, H = 0.8 nm, and Rhole = 8 nm. Fig. S4b shows the simulated α and β as functions of rt. As expected, β changes negligibly as rt increases. The simulated ratio β/α(rt) is plotted in Fig. S4c (orange curve) and is seen to compare well to the experimental data.

[Fig. S4 caption: a. The tip apex is taken as a metallic cone with half angle θ at height H above the moiré site of interest (MSOI). The surrounding TMD heterostructure is assumed to be an infinitely large metallic plane with a hole of radius Rhole around the MSOI (the dot located at the hole center). The Si back gate is regarded as an infinitely large metal plate separated from the TMD heterostructure plane by a distance d = 320 nm (not shown in a). b. Simulated α and β as functions of rt; β is seen to be nearly independent of rt over the simulation range. c. Experimentally measured (blue points) and simulated (orange curve) values of β/α(rt) as a function of rt. The experiment and simulation show good agreement for H = 0.8 nm, θ = 30°, and Rhole = 8 nm.]
Label-Free Detection of Rare Cell in Human Blood Using Gold Nano Slit Surface Plasmon Resonance
Label-free detection of rare cells in biological samples is an important and highly demanded task for clinical applications and various fields of research, such as the detection of circulating tumor cells for cancer therapy and stem cell studies. Surface plasmon resonance (SPR), as a label-free method, is a promising technology for the detection of rare cells in diagnostic or research applications. The short detection depth of SPR (400 nm) provides a sensitive method with minimal interference from non-targets in biological samples. In this work, we developed a novel microfluidic chip integrated with a gold nanoslit SPR platform for highly efficient immunomagnetic capture and detection of rare cells in human blood. Our method offers simple yet efficient detection of target cells with high purity. The approach consists of two steps. Target cells are first captured on functionalized magnetic nanoparticles (MNPs) carrying specific antibody I. The suspension containing the captured cells (MNPs-cells) is then introduced into a microfluidic chip integrated with a gold nanoslit film. MNPs-cells bind with the second specific antibody immobilized on the surface of the gold nanoslits and are therefore captured on the sensor active area. Cell binding on the gold nanoslits was monitored by the wavelength shift of the SPR spectrum generated by the gold nanoslits.
Introduction
Detection of rare cells is an essential technology with a wide range of applications in clinical diagnosis and stem cell research [1][2][3][4]. However, isolation and detection of rare target cells among a large number of surrounding cells has been a challenging task. Recent developments in the field are therefore focused on improving the capturing efficiency and the purity of captured cells. To isolate and detect circulating tumor cells (CTCs) for clinical applications, various strategies have been developed. These methods take advantage of properties that distinguish CTCs from blood cells, such as the expression of specific surface antigens and the size and stiffness of cancer cells [5][6][7][8][9][10][11][12][13].
Several techniques are used for the identification of captured cells. Immunostaining of the sorted cells and enumeration of the stained cells using a fluorescence microscope is one of the most common methods [7,8,10]. Label-free conductivity measurement is another technique for identifying captured cells; conductivity sensors have been integrated into the cell-capturing unit and do not require staining of cells for enumeration [6,14]. Surface plasmon resonance (SPR) is a label-free technology for the detection of cells with the ability to observe the kinetics of cell binding in real time. Yashunsky et al. studied an infrared SPR-based technique for real-time monitoring of epithelial cell-cell and cell-substrate interactions, and demonstrated the ability of FTIR-SPR to resolve different phases of cell-cell and cell-substrate adhesion [15]. Surface plasmon-based infrared spectroscopy has also been used to monitor submicron variations in cell layer morphology in real time [16]. Rice et al. reported a microarray platform combined with gravity-coupled surface plasmon resonance imaging to detect CD4+ T cells, and studied the kinetics of capturing on various antibody microarrays using SPR [17]. Hiragun et al. demonstrated distinct SPR signal patterns for different cancer cell lines, which can be used for the diagnosis of cancers [18]. SPR has also been used to study cell viability: Wu et al. demonstrated label-free monitoring of cell viability with gold nanoslit-based Fano resonance biosensors [19]. Developing label-free methods, such as SPR, with the ability to monitor cell binding in real time provides high-throughput screening techniques that can be very useful for rare cell detection.
Microfluidics, as an emerging technology in clinical applications, provides various advantages including process integration and short analysis time. Microfluidic devices for cell capturing provide efficient capture of target cells with minimal non-specific binding, owing to the shear force produced by the fluid flow. However, the laminar flow in microfluidic devices results in insufficient interactions between cells and the antibody on the surface. Wang et al. reported nanostructured silicon substrates with integrated chaotic micromixers that increase the cell-substrate contact frequency to obtain highly efficient capture of CTCs [20]. Another strategy to maximize collisions between target cells and antibody-coated surfaces is to use surface ridges or herringbones in the wall of the device, as reported by Stott et al. [7].
Here we demonstrate a method that captures target cells specifically and monitors the cell binding using label-free surface plasmon resonance in one microfluidic device. The method includes two steps. In the first step, a specific antibody on iron oxide magnetic nanoparticles (MNPs) identifies and captures the cancer cells in the blood sample. In the second step, the cancer cells captured by the magnetic nanoparticles (MNPs-cells) are flowed into the SPR chip and allowed to bind to the second specific antibody on the gold nanoslits. The microfluidic chip has an integrated magnet to maximize interactions between the target cells and the antibody on the gold nanoslits, while the liquid flow minimizes blood cell interference. Double capturing by the two antibodies, combined with a microfluidic chip, resulted in a highly specific method to capture and detect cancer cells in blood.
A label-free SPR method was used to detect the captured cells. The gold nanoslit substrate that was used as the SPR sensing platform was developed by Lee et al. [21][22][23][24]. Gold nanostructures with extraordinary optical transmission have been integrated with an SPR chip for biosensing applications [25][26][27][28][29][30].
Our method utilizes functionalized magnetic nanoparticles (MNPs) for pre-isolation of the target cells and for SPR response enhancement, in conjunction with surface plasmon resonance (SPR) on gold nanoslits. Examples of nanoparticle-enhanced SPR with improved sensitivity for the detection of various biomarkers have been reported [31,32]. Previously, we used the same platform and demonstrated a similar method to detect a lung cancer mRNA biomarker [33]. The main goal of this paper is to demonstrate a simple label-free detection method that can be used for fast screening of rare cells in blood.
Specific Capturing and Detection of Cancer Cells-DCM
The double capturing method (DCM) is based on two specific capturing steps for cancer cells. The schematic of DCM is shown in Figure 1. In the first step (Figure 1a), functionalized MNPs immobilized with the first antibody, which is specific for target cell surface receptors (antibody I), isolate the cancer cells from the sample. In the second step, the isolated cancer cells on the MNPs (MNPs-cells) bind to the immobilized antibody (the second antibody, antibody II) on the gold nanoslit surface. The cell binding is detected by monitoring the shift of the SPR spectrum produced by the gold nanoslits. Antibody I and antibody II were selected to achieve highly specific capture of the target cells from the blood sample. The two steps are described in detail as follows. The second step includes introducing the mixture of blood sample and MNPs into the microfluidic chip and allowing the MNPs-cells to bind to antibody II on the gold nanoslits. The cell binding on the gold nanoslits was monitored by the wavelength shift of the SPR spectrum generated by the gold nanoslits. The detection area of the nanoslits is defined by the focal spot of the probe light.
First Step: Isolation of Cancer Cells by Antibody I on the MNPs
Preparation of Functionalized MNPs
Five microliters of the MNP suspension from a stock (25 µM) was pipetted into an Eppendorf tube. The MNPs were suspended in 100 µL of 1× PBS buffer. The tube was then placed on the magnet separator to remove the supernatant. The MNPs were re-suspended in 400 µL of 1× PBS buffer solution. Fifty microliters of antibody I solution (2.5 mg/mL of anti-EphA2) and the cross-linker 1-ethyl-3-[3-dimethylaminopropyl]carbodiimide hydrochloride (EDC) were added into the tube. The mixture was allowed to react for 4 h on a shaker at room temperature ((i) in Figure 1a). The functionalized MNPs were separated using a magnet to remove excess cross-linker and then re-suspended in 1 mL of 1× PBS buffer solution. The MNPs thus prepared were stored at 4 °C until used with the blood sample, as follows.
First Step of DCM
The functionalized MNP suspension, with a final concentration of ~1 × 10^10 MNPs/mL (nominally calculated from the quantity used at the beginning of step I), was then mixed with the blood sample solution and incubated for 60 min ((ii) in Figure 1a) to allow binding of the MNPs to the target cells. Following the first step, the isolated cancer cells on the MNPs (MNPs-cells) were detected on the gold nanoslits in the second step.
Second Step: Capture and Detection of the MNPs-Cells on the Gold Nanoslit
Immobilization of Antibody II on Gold Nanoslit
To capture and detect the MNPs-cells, the gold nanoslit surface was functionalized with the specific antibody II to bind with the cell surface receptors (Figure 1b). The gold nanoslit surface was allowed to react with a solution of 2 mM cross-linker DTSSP for 90 min and was then rinsed with 1× PBS buffer. A solution of 0.25 mg/mL anti-CD44 (antibody II) was then introduced to the SPR chip and incubated for 120 min to allow binding of antibody II to the surface. To confirm the antibody immobilization, the transmission spectrum of the gold nanoslits was taken with a spectrometer (BWTEK, BTC112E). The detailed optical setup can be found in our previous work [33]. The SPR spectra before and after the immobilization of anti-CD44 are shown in Figure A1. A 3.0 nm shift in the SPR peak position confirms successful antibody coating on the gold surface.
Second Step of DCM
The suspension of MNPs-cells from step I was introduced to the microfluidic chip (described below) to bind the target cells with antibody II on the gold nanoslits. The flow rate was controlled by a syringe pump (NE-1000, New Era Systems Inc., Pompano Beach, FL, USA). A micro magnet was placed underneath the nanoslits to pull the MNPs-cells down to the gold surface. Cell capture under high flow velocity is achieved by using the magnetic force to bring the MNPs-cells down to the gold surface functionalized with antibody II. In this step, the real-time SPR response, which indicated the progress of cell capture and binding, was recorded.
Chip Fabrication and Measurement Setup
In this work, a gold nanoslit film was employed as the sensing platform. The gold nanoslits were fabricated on a polymer substrate using nanoimprinting lithography (a thermal-annealing-assisted template-stripping method) developed by Lee et al. [24]. The gold nanoslit period is 600 nm, the slit width is 220 nm and the area of the slit array is 300 µm × 300 µm.
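From the stated dimensions alone, the array geometry can be summarized with a quick calculation (a convenience sketch; the variable names are ours):

```python
# Array geometry from the stated dimensions (period 600 nm, slit width 220 nm,
# array 300 um x 300 um). Pure arithmetic; no assumptions beyond the text.
period_nm = 600.0
slit_width_nm = 220.0
array_side_um = 300.0

n_slits = int(array_side_um * 1000 / period_nm)  # slits across the array
fill_factor = slit_width_nm / period_nm          # open fraction of the surface

print(f"{n_slits} slits per array")                 # 500 slits
print(f"open-area fill factor: {fill_factor:.0%}")  # ~37%
```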
The gold nanoslit film was integrated with the microfluidic chips as described below. The microfluidic chips were fabricated using a laser scriber to ablate trenches in the polymethylmethacrylate (PMMA) substrate and double-sided tape [35,36]. The PMMA substrates were bonded to each other by thermal bonding, and to the nanoslit film using the double-sided tape. The gold nanoslit film integrated with the PMMA layers was then attached to a glass slide using an optically clear adhesive layer (3M™ optically clear adhesive 8263).
In this work, we used two designs of microfluidic chips. For the parameter study, a micro-volume chip (MVC) was used to select the proper antibodies for the MNPs and the gold nanoslits. For detecting cancer cells in blood samples, a slightly modified chip was used (the funnel chip, Figure 2). The funnel chip is suitable for processing a large-volume (1 mL) sample. The MVC was formed by integrating the gold nanoslit film with a small-volume microfluidic chip. The layered structure and the top view of the MVC chip are shown in Figure A2a,b. The sample was pipetted on top of the gold nanoslits through the inlet of the microfluidic channel. In this simple design, a pump is not needed. The nanoslits can be washed by withdrawing the sample through the outlet using a syringe and introducing PBS buffer to flush the chip. The required sample volume for this chip is 7 µL. This chip was used to monitor cell binding on the gold nanoslits by SPR. The capture of cells on the gold nanoslits by various antibody combinations was studied on the MVC chip. The same design was used in our previous work for the detection of an mRNA marker for lung cancer [33].
Large Volume Chip (Funnel Chip)
A novel fluidic chip for introducing large sample volumes was designed and fabricated to capture the cancer cells in the sample. For rare cell detection, because of the low cell concentration, a fluidic chip that can process a large sample volume is required. This funnel chip can process 1 mL of sample in less than 15 min. A gel-loading pipet tip (Labcon, Cat. No. 1034-800-000) was used as the sample reservoir and to introduce the sample to the microchannel accommodating the gold nanoslits. In order to prevent sedimentation of the cells during the experiment, the tip is placed at an angle of 40° to 50° to the chip surface. A neodymium magnet is placed beneath the nanoslits to bring the MNPs-cells to the surface to bind with the second antibody immobilized on the gold nanoslits. The flow velocity was optimized to minimize the interference of blood cells. The layered structure and the top view of the funnel chip are shown in Figure 2a,b, respectively.
A neodymium magnet was integrated with the microfluidic chip to increase the capture efficiency of target cells. The magnitude and distribution of the magnetic field were optimized to retain the MNPs carrying the target cells on the detection area, even at the high flow velocity used to minimize non-specific binding.
Cell Culture
The lung cancer cell line CL1-5 was a gift from Prof. Pan-Chyr Yang [37,38]. A complete medium consisting of Dulbecco's Modified Eagle's Medium (DMEM, Gibco) and 10% fetal bovine serum (FBS, Invitrogen) was used for maintaining the cells. The cells were incubated in tissue culture polystyrene (TCPS) flasks (Corning) placed in an incubator with a 5% CO2 atmosphere maintained at 37 °C. The cells were sub-cultured every 3 to 4 days. The cells were suspended by trypsin and counted with a Cellometer (Auto T4 Cell Counter, Nexcelom). Cell suspensions were prepared by suspending the cells in the culture medium to the desired densities.
Blood Sample Preparation
Human blood was collected from a healthy donor into a tube containing 0.2% EDTA (20× the blood volume). One milliliter of blood was centrifuged at 200× g for 10 min. The supernatant was carefully aspirated without disturbing the pellet. Then, red blood cell lysis buffer (BD Pharm Lyse™) was applied to the blood sample according to the protocol. After discarding the lysed red blood cells, the white blood cells (WBCs) were re-suspended in 2.5 mL of PBS buffer containing 1% FBS and then transferred into another tube for further use.
Labeling and Imaging the Cells
To identify the captured cells, CL1-5 cells were labeled with CellTracker™ Green CMFDA (Abs. 492 nm, Em. 517 nm). Cells were suspended in pre-warmed CellTracker™ dye working solution (10 µM) and incubated for 30 min under growth conditions. After centrifuging the cells, the dye working solution was replaced with fresh, pre-warmed medium. Various numbers of the labeled CL1-5 cancer cells were spiked into 1 mL of blood. Red blood cell lysis was applied and the lysed RBCs were discarded. DCM was then applied to detect the cancer cells. The labeled cells captured on the gold nanoslits were observed using an inverted microscope (Olympus IX71). An air-cooled argon-ion laser (wavelength: 488 nm) was used as the light source. To avoid strong reflection from the gold surface, the laser beam was incident on the cells attached to the gold nanoslits at an angle of 45 degrees. The emitted light (wavelength around 517 nm) that passed through the nanoslits was collected using an objective lens. The excitation light was blocked by a filter (U-MWB2, Olympus). The images were taken using a digital single-lens reflex (DSLR) camera (E-410, Olympus) attached to the microscope.
High Specificity Using Two Specific Antibodies
Two different antibodies were used to increase the specificity of the cell capturing in this study. Antibody I and antibody II were selected based on the specificity and binding affinity to CL1-5 cell surface receptors. To identify such antibodies, three candidate antibodies, anti-EGFR, anti-CD44 and anti-EphA2, were tested. High expression of EGFR [39] and CD44 [40] on CL1-5 cells have been reported. The overexpression of the receptor EphA2 has been reported in non-small cell lung carcinoma cells [41,42]. The binding potency of anti-EphA2 monoclonal antibody to CL1-5 cells was analyzed by flow cytometry and is shown in Supplementary Figure A3.
The antibody I on the MNPs binds with the CL1-5 cells to isolate the target cells from the sample. This step reduces the interference of non-target cells and increases the specificity of the detection method. The specificity of the antibody for CL1-5 cells was the determining factor in choosing the antibody to be immobilized on the MNPs. In the second step, antibody II on the gold nanoslits binds to surface receptors of the CL1-5 cells (now bound with MNPs). The binding in the second step results in a shift of the SPR resonance wavelength. The strength of the binding in the second step is crucial for strong binding of the target cells; stringent washing can therefore be applied to minimize the non-specific binding of non-target cells on the gold nanoslits. Table 1 summarizes the results for four combinations of the candidate antibodies selected for the two steps. The functionalized MNPs were first mixed with the cells. Then the suspension of MNPs-cells was introduced to the micro-volume chip (MVC). Using an optical microscope to count the cells on the gold nanoslits, the retention rate was determined by dividing the number of bound cells (i.e., after stringent washing) by the number of sedimented cells (i.e., initially after the cells were introduced). Table 1. Various combinations of the candidate antibodies selected for the two steps.
For the three candidate antibodies there are six possible combinations with two different antibodies in step I and step II, respectively. To minimize non-specific binding of non-target cells in the first step, we chose the more specific antibody as antibody I. Anti-CD44 is not suitable as antibody I because of the expression of CD44 on many types of cells, such as leukocytes, fibroblasts, endothelial cells and epithelial cells [43,44]. We therefore ruled out the two combinations that use anti-CD44 as antibody I. The results show that a 100% retention rate was achieved by using anti-EphA2 (3F7) or anti-EGFR as antibody I and anti-CD44 as antibody II. The expression of the epidermal growth factor receptor (EGFR) on the surface of human peripheral blood monocytes has been reported [45]; for this reason we did not use anti-EGFR for the first capturing step. To maximize the specificity of the binding of CL1-5 cells, anti-EphA2 (3F7) was selected as antibody I and anti-CD44 as antibody II.
SPR to Detect Specific Cell Binding on the Sensor's Surface
Gold nanoslits provide the surface plasmon resonance signal. The surface plasmon resonance of the fabricated gold nanoslits with a period of 600 nm manifested as a resonance feature in the transmission spectrum in the wavelength range of 800-850 nm when cells in PBS buffer were introduced to the microfluidic chip.
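This resonance range is consistent with the standard grating-coupling condition for surface plasmons on a periodic metal surface, λ_SPR ≈ (P/m)·sqrt(ε_m·ε_d/(ε_m + ε_d)). The quick check below is ours, not from the paper; the gold permittivity is an assumed, literature-typical value near 800 nm:

```python
import math

# Rough check of the grating-coupled SPR wavelength for a 600 nm period array
# in aqueous buffer (first diffraction order). eps_metal is an assumed value.
P = 600.0             # grating period (nm)
m = 1                 # diffraction order
eps_metal = -26.0     # real part of gold permittivity near 800 nm (assumed)
eps_buffer = 1.33**2  # water/PBS, refractive index ~1.33

# Effective surface plasmon index: n_spp = sqrt(eps_m*eps_d / (eps_m + eps_d))
n_spp = math.sqrt(eps_metal * eps_buffer / (eps_metal + eps_buffer))
print(f"Estimated SPR wavelength: {P / m * n_spp:.0f} nm")  # ~830 nm
```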
A nickel-coated ferritic iron needle attached to a cylindrical neodymium magnet (denoted the "horizontal-needle-magnet" configuration below) was placed beneath the nanoslits to bring the MNPs carrying the target cells to the surface to bind with the anti-CD44 immobilized on the gold nanoslits (described in detail below; Figure 5a). It should be emphasized that, with this magnet configuration, not all the target cells reside inside the detection area depicted in Figure 1b. In the following tests, the flow rate of sample introduction was 70 µL·min−1. The position of the SPR spectrum shifts due to cell binding on the gold nanoslits. Figure 3 shows cell binding detection using DCM. Figure 3a shows the spectral shift corresponding to a suspension of 1000 cells in 1 mL PBS buffer. A prominent SPR red shift (4.5 nm) was observed after introducing the sample to the funnel chip. This prominent shift resulted from the specific binding of the cells to the immobilized anti-CD44 antibody on the gold nanoslits. As a control for this test, and to evaluate the ability of SPR to detect cell binding, 1000 target cells in 1 mL buffer were introduced into an SPR chip without immobilized antibody on the gold surface (control 1, Figure 3b). Introducing the MNPs-cell suspension into this SPR chip led to a negligible red shift (0.7 ± 0.4 nm) of the SPR resonance peak. This result demonstrates that unbound cells do not shift the SPR resonance peak; the SPR shift observed in Figure 3a is attributed to the cells bound by specific antibody-antigen binding. The SPR detection field extends only a few hundred nanometers above the sensor surface [46]. The SPR penetration depth at a wavelength of 850 nm is less than 400 nm [47]; therefore the observed shift in the SPR resonance is attributed mainly to the cells that are bound to the gold surface. This observation suggests that the unbound cells are farther than 400 nm from the surface of the gold nanoslits.
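The sub-400 nm penetration depth quoted above can be reproduced from the textbook expression for the 1/e decay length of the plasmon field into the dielectric, δ_d = (λ/2π)·sqrt(|ε′_m + ε_d|)/ε_d. This estimate is ours; the gold permittivity at 850 nm is an assumed typical value, not a number from the paper:

```python
import math

# Textbook estimate of the SPR evanescent decay length into the buffer:
#   delta_d = (lambda / (2*pi)) * sqrt(|eps_m' + eps_d|) / eps_d
wavelength = 850.0    # nm
eps_metal = -29.0     # real part of gold permittivity at 850 nm (assumed)
eps_buffer = 1.33**2  # water/PBS

delta = (wavelength / (2 * math.pi)) * math.sqrt(abs(eps_metal + eps_buffer)) / eps_buffer
print(f"1/e field decay length into the buffer: {delta:.0f} nm")  # ~400 nm
```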
Further specificity evaluation was carried out by applying DCM to a sample of 1000 non-target cells (HSC-3 cells) in 1 mL buffer (control 2, Figure 3c). HSC-3 is a head and neck squamous cell carcinoma line that expresses CD44 [48,49]. No significant shift (0.4 ± 0.4 nm) in the SPR resonance wavelength was observed after sample introduction; the shift is below the resolution of our spectrometer. The result of this test confirmed that non-target cells, even though they express CD44, are not captured on the gold nanoslits. This test further confirms the specificity of our method, DCM.
The comparison of the SPR response of the two control tests and that of the target cells (CL1-5) is shown in Figure 3d.
Capturing Cells in Blood
The previous results confirmed the specificity of transmission gold nanoslit SPR in detecting bound cells on the sensor surface. Following these observations, we further explored the sensitivity of the nanoslit SPR platform for the detection of rare cells among a large number of surrounding non-target cells. To evaluate the specificity of our method, DCM was applied to detect the cells in blood samples. Various numbers of CL1-5 cells were added to the white blood cells (WBCs) after discarding the lysed red blood cells. The samples were introduced to the funnel chip at a flow rate of 70 µL·min−1, with the horizontal-needle-magnet placed beneath the nanoslits to bring the MNPs carrying the target cells to the surface to bind with the anti-CD44 immobilized on the gold nanoslits.
The result is shown in Figure 4a. At 40 min, a red shift of 0.6 ± 0.4 nm was observed for the blood sample without spiked cells, a shift of 1.7 ± 0.4 nm for the sample spiked with 100 cells and a 5.4 ± 0.6 nm shift for the sample with 1000 cells. The corresponding temporal changes of the SPR response are shown in Figure 4b. As shown for the sample of blood only (black dots), at 30 min after introducing the sample the SPR spectrum was red-shifted, but the following post-wash step led to a backshift. The backshift after the post-washing step indicates effective elimination of non-specifically bound blood cells from the gold nanoslits. In comparison, the rapid red shift of the SPR spectrum (~5 nm in 20 min) caused by 1000 cells spiked in 1 mL blood confirms the high sensitivity and specificity of our method for detecting the target cells in the blood sample.
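A common way to quantify such real-time traces, which is standard sensorgram analysis rather than something reported here, is to fit the specific-binding portion of the shift to a first-order Langmuir association, Δλ(t) = Δλ_max·(1 − exp(−k_obs·t)). The sketch below fits synthetic data shaped like the 1000-cell curve; the numbers and function names are illustrative, not measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_association(t, d_lambda_max, k_obs):
    """First-order Langmuir association model for an SPR binding trace."""
    return d_lambda_max * (1.0 - np.exp(-k_obs * t))

# Synthetic trace shaped like the 1000-cells-in-blood curve: ~5 nm shift
# developing over ~20 min, with 0.2 nm noise (illustrative, not measured).
t = np.linspace(0, 40, 41)  # minutes
rng = np.random.default_rng(0)
shift = langmuir_association(t, 5.4, 0.15) + rng.normal(0, 0.2, t.size)

popt, _ = curve_fit(langmuir_association, t, shift, p0=[5.0, 0.1])
print(f"fitted max shift = {popt[0]:.2f} nm, k_obs = {popt[1]:.3f} 1/min")
```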
These data help in determining the optimal working point of our detection method in relation to the read-out time, i.e., the time point at which the SPR response difference for different concentrations of target cells is maximized. A shorter read-out time is desirable for reducing the influence of non-specific binding of blood cells on the surface. According to the data shown in Figure 4, we chose 20 min as the working point of detection. At 20 min, a red shift of 0.4 ± 0.4 nm was observed for the blood sample without spiked cells and a shift of 1.7 ± 0.4 nm for the sample spiked with 100 cells. The shift of 1.7 nm for a sample of 100 cells in 1 mL blood was found to be the detection limit of our method at 20 min when using the horizontal-needle-magnet configuration. Although the high specificity of SPR for detecting target cells was shown in this section, the low purity of the captured cells, the non-specific attachment of blood cells and the low capture efficiency on the detection area are the main limitations of this design.
Improving the Sensitivity and Purity of Capturing
The results shown above confirmed the specificity of gold nanoslit transmission SPR in detecting target cells on the surface. This observation shows the potential of nanoslit SPR platforms for the detection of rare cells among a large number of surrounding non-target cells. For all the tests shown above, the funnel chip was integrated with the horizontal-needle-magnet and the sample was introduced to the funnel chip at a flow rate of 70 µL·min−1. In this section, we modified the configuration of the magnetic field to improve the sensitivity and purity of capturing.
The detection area of the nanoslits, defined by the focal spot (300 µm by 300 µm, Figure 1b) of the probe light, is relatively small. Improving the fluidic system to deliver the MNPs-cells more efficiently and precisely to the active detection area would greatly improve the sensitivity and the detection limit of our system. One possible solution is to integrate a magnet with the funnel chip that sharply focuses the magnetic field on the nanoslit array, thereby efficiently capturing the MNPs-cells inside the detection area. Different arrangements of the magnets were investigated to find the most advantageous configuration. The local magnetic field around the detection area was estimated through simulations based on the finite element method (FEM). The magnetic field intensity distribution along the chip was compared for a nickel-coated ferritic iron needle attached to a cylindrical neodymium magnet (horizontal-needle-magnet); stacked cylindrical neodymium magnets topped by a ferritic iron tip (vertical-magnet); nanoslits sandwiched between the stacked magnets and a third cylindrical magnet (sandwich-magnet); and a sandwich configuration with an additional ferritic iron tip for focusing the field on the sensor active area (sandwich-magnet-with-a-tip).
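To see why a focused field matters at high flow rate, one can compare the Stokes drag on an MNP-decorated cell with the magnetophoretic pull, F = V·χ·B·(dB/dz)/μ0, summed over the particles on the cell. The sketch below is a rough order-of-magnitude estimate, not the FEM simulation used in this work; the channel cross-section, particle size, susceptibility, field and gradient are all assumed for illustration:

```python
import math

# Order-of-magnitude comparison: Stokes drag on an MNP-decorated cell vs the
# magnetophoretic force pulling it toward the nanoslits. All numbers are
# illustrative assumptions; the paper's FEM simulation is not reproduced here.
MU0 = 4e-7 * math.pi     # vacuum permeability (T*m/A)
eta = 1e-3               # viscosity of aqueous buffer (Pa*s)
cell_radius = 7.5e-6     # ~15 um diameter cell (m)

# Assumed channel cross-section of 1 mm x 0.1 mm at 300 uL/min.
flow_rate = 300e-9 / 60.0                  # m^3/s
mean_velocity = flow_rate / (1e-3 * 1e-4)  # ~5 cm/s
drag = 6 * math.pi * eta * cell_radius * mean_velocity

# Assumed MNPs: 200 nm diameter, effective susceptibility ~1 (SI), ~1000
# particles per cell, B ~ 0.5 T with a gradient ~1000 T/m near the iron tip.
mnp_volume = (4.0 / 3.0) * math.pi * (100e-9) ** 3
force_per_mnp = mnp_volume * 1.0 * 0.5 * 1000.0 / MU0
magnetic_force = 1000 * force_per_mnp

print(f"Stokes drag at the mean velocity: {drag * 1e9:.1f} nN")
print(f"Magnetic pull on the cell:        {magnetic_force * 1e9:.1f} nN")
# Near the channel wall the local velocity (hence drag) is far below the mean,
# so a tip-focused gradient of this order can hold cells against the flow.
```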
The simulation results summarized in Figure 5d confirm that the horizontal-needle-magnet configuration (dotted line) produces a very weak and broad magnetic field at the detection area; this configuration results in low purity and low cell-capture efficiency. On the other hand, the sandwich-magnet configuration yields the strongest magnetic field at the sensor (dashed line), while the addition of a ferritic iron tip (sandwich-magnet-with-a-tip) allows better focusing of the field on the detection area (solid line). The sandwich-magnet-with-a-tip configuration, which yields a relatively strong magnetic field and good focus, was used while the sample was introduced to the funnel chip at a flow rate of 300 µL·min−1. The top magnet was then removed to allow the cells to bind with the antibody on the gold nanoslits for 30 min. The efficiency of cell capture inside the detection area from a suspension of 100 cells in 1 mL medium was studied. Our experimental results showed that this arrangement can efficiently capture the low-abundance cells under the fast flow rate of 300 µL·min−1. A microscopy image of the nanoslit surface with the captured cells is shown in Figure 6; a large number of MNPs-cells are captured on the detection area. With the new magnet configuration, the capture efficiency is not reduced by the increased fluid velocity. Since higher flow rates cause a drastic decrease in the capture of non-target cells, the new magnet arrangement greatly increased the purity through a fast flow rate, even in the presence of a large number of non-target cells. Figure 6. A suspension of 100 cells in 1 mL culture medium was introduced to the funnel chip integrated with the sandwich-magnet-with-a-tip configuration at a flow rate of 300 µL·min−1. Approximately 40% of the cancer cells were captured on the detection area.
Care must be taken in aligning the tip and the gold nanoslits (detection area) so that the majority of cells can be captured and detected on the detection area. As can be seen in Figure 6, a significant number of cells resides in the non-detection region. This relatively poor result was obtained by manually aligning the magnet and the detection region (i.e., the nanoslits); the capture efficiency could be greatly increased if the alignment were done by a robot. Figure 7a shows the microscopy images of the gold nanoslit surface after cell capture when a suspension of 100 cells in 1 mL medium was introduced to the funnel chip. The flow rate of sample introduction was increased to 300 µL·min−1. Figure 7b demonstrates the SPR wavelength shift after cell capture. The observed shift (2.0 nm) is caused by fewer than five cells captured on the nanoslits. This result confirms the high sensitivity of gold nanoslit SPR in detecting rare cells even when only a small fraction of the cells (in this case 5 out of 100) are on the detection area. The results presented here indicate that if the focusing configuration can be improved to give a higher field intensity with a sharp field profile, confining the captured cells to the detection area, one could improve the sensitivity while maintaining the high purity of capturing. Using a more focused magnetic field puts a more stringent requirement on the alignment between the tip and the gold nanoslits; such alignment can be easily achieved using motorized translational stages. Assuming a capture efficiency of 40% using the sandwich-magnet-with-a-tip, 13 cells (5/40% = 12.5) in 1 mL of blood can be detected.
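The detection-limit arithmetic in the last sentence can be made explicit; this short sketch uses only numbers quoted in the text (note that the resulting ~0.4 nm per-cell shift sits right at the spectrometer resolution quoted earlier):

```python
# Back-of-the-envelope detection limit, using only numbers quoted in the text.
observed_shift_nm = 2.0    # SPR shift produced by ~5 cells on the detection area
cells_on_detector = 5
capture_efficiency = 0.40  # fraction captured with the sandwich-magnet-with-a-tip

shift_per_cell = observed_shift_nm / cells_on_detector     # ~0.4 nm per cell
detectable_cells = cells_on_detector / capture_efficiency  # 5 / 0.40 = 12.5

print(f"per-cell shift: {shift_per_cell:.1f} nm")
print(f"estimated detection limit: {detectable_cells:.1f} -> ~13 cells per mL")
```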
Conclusions
The results presented here highlight several advantages of DCM, combined with the magnet-integrated funnel chip and SPR detection, over some of the available technologies for rare cell detection. The double capturing results in highly specific isolation of the target cells and minimizes the non-specific binding of non-target cells. A novel microfluidic chip to process large sample volumes was designed and fabricated. A neodymium magnet was integrated with the funnel chip to improve both the purity and the efficiency of cell capture on the nanoslits. A detection limit of 13 cells/mL is expected using the sandwich-magnet-with-a-tip configuration. Finally, the use of SPR for detection allows real-time monitoring of the capturing process and discrimination between bound and unbound cells on the substrate, a property that is superior to optical microscopy.
Author Contributions
The experiment, data analyzing, microfluidic chip design and fabrication were conducted by Mansoureh Z. Mousavi, Huai-Yi Chen and Cho-Yuan-Yuan Chang. Flow cytometry test was carried out by Hsien-san Hou. Gold nanoslit chip fabrication was done by Pei-Kuen Wei's research group. Steve Roffler contributed with the antibody preparation. Mansoureh Z. Mousavi and Ji-Yen Cheng contributed to the writing of the paper.
Conflicts of Interest
The authors declare no conflict of interest.
|
2015-09-18T23:22:04.000Z
|
2015-03-01T00:00:00.000
|
{
"year": 2015,
"sha1": "4c0ba0758d68d20fcb2d1b33247ec30ef655fc57",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6374/5/1/98/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c0ba0758d68d20fcb2d1b33247ec30ef655fc57",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
119199651
|
pes2o/s2orc
|
v3-fos-license
|
Stardust findings. Implications for panspermia
In January 2004, the Stardust spacecraft flew through the dust of comet 81P/Wild 2 and captured specks of the cometary dust. On analysis of the comet 81P/Wild 2 samples, it was found that they contain materials found in the coldest and hottest region of the early solar nebula, strongly suggesting 'mixing' on the grandest scale. Here it is suggested that if microorganisms were present in the early solar nebula, as required by the hypothesis of cometary panspermia, then in the light of the Stardust findings, life was already present in the very material that formed the planetary bodies.
COMETARY PANSPERMIA
The hypothesis of cometary panspermia requires that a small fraction of the microorganisms present in the interstellar cloud from which the solar system formed retained viability, or were capable of revival, after being incorporated into newly formed comets. Some comets, owing to orbital disruptions, get deflected towards the inner solar system, thus carrying microorganisms onto the Earth and the other inner planets. Thus, according to cometary panspermia, life was first brought to Earth by comets about 4 billion years ago, and comets continue to deliver it (N.C. Wickramasinghe et al. 2003). The hypothesis of cometary panspermia has not yet been vindicated.
COMETS
Comets formed in the early stages of the condensation of the solar system. They contain the most pristine material available from that epoch, which helps us understand conditions that existed in the young solar nebula more than 4.6 billion years ago. The major comet reservoirs are the Oort cloud and the Kuiper belt, as well as the trans-Neptunian scattered disc. Comets spend most of their lives in these reservoirs (Crovisier, J. 2006).
The aging or evolutionary effects that a comet nucleus will experience can be divided into four primary areas: the precometary phase, where the interstellar material is altered prior to incorporation into the nucleus; the accretion phase, the period of nucleus formation; the cold storage phase, where the comet is stored for long periods at large distances from the Sun; and the active phase, where the comet undergoes drastic changes owing to increased solar insolation as it approaches the inner solar system (Meech, K. 1999; Meech, K. J. & Svoren, J. 2005).
STARDUST FINDINGS
In January 2004, the Stardust spacecraft flew through the dust of comet 81P/Wild 2 and captured specks of the cometary dust.
On analysis of the comet 81P/Wild 2 samples, materials such as olivine and calcium-aluminum inclusions (CAIs), which formed at extremely high temperatures (Sandford et al. 2006), and polycyclic aromatic hydrocarbons (PAHs), which formed at very low temperatures, were found (Sandford et al. 2008).
Stardust findings made it clear that some cometary materials formed in regions with temperatures above 2000 K, while others, especially the ice components, appear to have formed in regions below 40 K, only a few tens of degrees above absolute zero (Brownlee et al. 2006).
CONCLUSIONS - IMPLICATIONS FOR PANSPERMIA
One of the cornerstones of the hypothesis of cometary panspermia is the requirement of the presence of microorganisms in the solar nebula.
Stardust findings strongly suggest that the precometary phase included mixing on the grandest scales between the coldest and hottest regions of the solar nebula. So, if microorganisms were indeed present in the solar nebula, then they would have been well 'mixed up' and incorporated not only into comets but into every body of the solar system.
We do not know whether microorganisms could have survived the turbulent 'mixing' of the early solar nebula and, later, the violent planetary formation processes. However, if they somehow did, then life was already present in the very material that formed the planetary bodies. This conjecture is testable in the future, when various planetary bodies are studied in great detail.
|
2014-10-01T00:00:00.000Z
|
2009-03-28T00:00:00.000
|
{
"year": 2009,
"sha1": "051a44823db7944fdd27835962c99207bb1a7e0f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "92e4e95b4e4e141f5898267439ce010e71ebb51d",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
}
|
11082374
|
pes2o/s2orc
|
v3-fos-license
|
Community-based health research led by the Vuntut Gwitchin First Nation.
Objectives. This paper documents an exceptional research partnership developed between the Vuntut Gwitchin Government (VGG) in Old Crow, Yukon, and a group of scientists to examine northern food security and health as part of a larger, multidisciplinary International Polar Year (IPY) research program. We focus on the elements that enabled a successful community-researcher relationship. Study design. The VGG led the development of the research and acted as Principal Investigator on the IPY grant. The multidisciplinary collaboration spanned the physical, biological and health sciences, including issues related to food security. Methods. The food security and health component of this research was carried out using a series of complementary methods, including focus groups, structured interviews, a household questionnaire, an interactive workshop, community meetings, transcript analysis and a caribou flesh exposure assessment. Results. Results from the food security component are informing local and regional adaptation planning. The legacy of the research collaboration includes a number of results-based outputs for a range of stakeholders, a community-based environmental monitoring program, long-term research relationships and improved community capacity. Conclusions. The type of collaboration described here provides a useful model for new types of participatory health research with northern communities.
INTRODUCTION
Traditional food is central to the social, cultural and physical well-being of Aboriginal peoples in the Arctic. At the same time, changes in the physical and biological environment, in the context of rapidly changing sociocultural dynamics and globalization, are having a particular impact on food security in northern regions (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11).
Food security refers to the continued, adequate and secure access of individuals and households to safe, nutritious and personally acceptable food to meet the dietary requirements for a healthy and productive life (12,13). Cultural aspects of food procurement, preparation and consumption are particularly important in Aboriginal contexts (14). Food insecurity has been associated with multiple aspects of poorer health, where food-insufficient households are more likely to report poorer functional health, restricted activity, chronic conditions, depression and distress, and less social support (15). Food insecurity is identified as a significant concern in northern Canada (2,16). Market foods in remote Arctic communities can cost more than double what they cost in southern supply centres (17,18) due to high distribution costs and small markets, and less expensive choices often lack nutrient density (7). It is well recognized that the continued presence of traditional foods in the diets of Aboriginal peoples contributes in multiple ways to better health (7,19,20). Community members, however, experience challenges in accessing sufficient, healthy amounts of these species to meet their food needs and preferences.
Northern food production systems are under stress from a variety of social, economic, political and environmental forces. Many northern Aboriginal communities regularly experience periods of interruption of traditional food supply due to the temporal fluctuations in natural resources (21,22); for example, "no-summer" summers where the sea ice never breaks up and whales cannot be hunted (23), or years with irregular caribou migration patterns, that reduce accessibility for hunters (24). Climate change may exacerbate this situation by affecting species distribution, population abundance, morphology and behaviour (25). In the Arctic, climate change is already challenging food harvesting methods as well as socio-economic relationships that dictate the distribution of subsistence harvests, which have worked to sustain populations for many generations (11,23,26,27).
The extent of such impacts and their implications for the continued nutritional well-being of individuals and communities in the Arctic are largely unknown. Participatory research is a useful approach for addressing such gaps in health research. Where Aboriginal issues are involved, the research approach requires additional considerations. It is now recognized that northern researchers have practical and ethical responsibilities to actively engage with communities to carry out mutually beneficial research (28). A new northern research paradigm has emerged, promoting research that is collaborative, interdisciplinary, policy-oriented and reflective of northern priorities (29)(30)(31). Effective collaboration relies on early initiation and continued communication with community members and relevant local, regional and national organizations; community input on research design and process; incorporation of capacity-building and/or employment opportunities; and dissemination of research outcomes in accessible formats (32,33). Wolfe et al. (31) also highlight the importance of an enabling context, including community capacity, convergence of community and researcher interests, supportive funding guidelines for innovative and cost-intensive collaborations, flexibility and strong relationships. The human component is primary, where mutual respect is paramount, as is valuing the knowledge and expertise of community members (34). These imperatives reflect the importance of research that is jointly managed between research institutions and communities, where appropriate and mutually agreed upon terms and conditions are specified in a research agreement (35).
While the above aspects of collaborative health research were initially developed as guidelines, the diverse cultural, social and political contexts among Aboriginal communities, and the cultural and institutional differences between university-based researchers and Aboriginal communities, pose a great challenge to their evolution into standard practice for successful engagement. A number of models of collaboration have been proposed to address the diverse range of interests in health research (e.g., 4,8,10). Here we provide insight into one such research model. This paper discusses our experience conducting a community-based study in partnership with the Vuntut Gwitchin First Nation (VGFN) in Old Crow, Yukon. We outline the development of a multidisciplinary, multi-year research project that materialized as part of the International Polar Year, and focus on the food security component undertaken by our University of Northern British Columbia (UNBC) research team. The project, which was called "Environmental Change and Traditional Use of the Old Crow Flats in Northern Canada: Yeendoo Nanh Nakhweenjit K'atr'ahanahtyaa (YNNK)," was led by the Vuntut Gwitchin Government (VGG), in collaboration with the Yukon Government, Parks Canada and a multidisciplinary team of researchers from across the country. Its aim is to improve understanding of how the community of Old Crow and the surrounding traditional territory are being affected by climate change. By outlining various facets of the project, the opportunities and the lessons learned, we focus on the elements that enabled a successful community-researcher relationship.
The community of Old Crow
Old Crow is a small Gwich'in community in the northern Yukon with a population of approximately 250, most of whom are VGFN members. The VGFN also includes a number of individuals with Dagoo (or Tukudh) ancestry (36). Accessible only by air, apart from the occasional winter road, the community has a cost of living that is quite high relative to southern Canada; for example, a Revised Northern Food Basket (including both perishables and non-perishables) is 2.4 times more expensive than in Whitehorse (37). Residents continue to rely in large part on traditional foods such as caribou from the Porcupine Herd, which has long provided a primary source of sustenance for Vuntut Gwitchin families. Recent changes in caribou availability and in the ability of hunters to access harvesting areas have community members concerned about long-term food security under changing climatic conditions (24).
Initiating a researcher-community collaboration
In January 2006, the VGG invited a team of northern researchers, including three of the six Northern Research Chairs from the Natural Sciences and Engineering Research Council of Canada (NSERC), to Old Crow to discuss their concerns regarding the impacts of climate change on the environment and health of the local population and their research needs. Two full days of exchanging ideas with the Chief and VGG staff resulted in the identification of research questions and the development of a detailed research plan. Under the leadership of the VGG, the multidisciplinary team of northern scientists brought a breadth of expertise to the project. Their research spans the physical, biological and health sciences, including issues related to food security.
The unique nature of this research program began from the pre-proposal stages. The design and development of the program took place in the community of Old Crow. The principal objectives for the program were developed at that time through consultations among the research team, the VGG, organizations such as the local North Yukon Renewable Resources Council and Parks Canada and the community-at-large. All parties supported the idea that the VGG lead the research program, and a proposal was co-developed with the VGG as the Principal Investigator and the research team leaders as Co-Investigators. A grant application was submitted to NSERC through the 2007-08 International Polar Year (IPY) program, which listed "Science for Climate Change Impacts and Adaptation'' and ''Health and Well-being of Northern Communities" as science priorities.
The research objectives were as follows: (1) to document environmental change in Crow Flats (an important harvesting area within VGFN traditional territory) from the last interglacial period to the present from a unique assemblage of archives; (2) to assess the distribution and abundance of vegetation and targeted wildlife species, and identify the relationships between these and the changing physical environment; (3) to evaluate the impact of changes in biophysical systems on traditional VGFN food sources; and (4) to develop a long-term, community-based environmental monitoring program. Researcher expertise spanned a range of disciplines, including quaternary paleontology, dendroclimatology, permafrost science, hydroecology, terrestrial ecology, wildlife biology, community health sciences and traditional knowledge of the land and its processes. The incorporation of traditional knowledge was also recognized as essential. While drawing on the collective experience of the interdisciplinary group of researchers, this paper focuses primarily on the food security and health component of the research.
Research activities
The food security and health component of this research was carried out using a series of complementary methods, including focus groups, structured interviews, a household questionnaire, an interactive workshop, community meetings, transcript analysis and a caribou flesh exposure assessment. Each methodological component involved consultation, data collection, data verification through on-site community presentations and public reporting via written and oral presentations. A research coordinator and research assistants were hired to assist with data collection, verification and reporting. Here, we briefly describe each of the research components undertaken. The Council of Yukon First Nations was invited to join as a partner for the food security component to facilitate the sharing of research experiences with other communities in the Yukon.
Climate change impacts on food security and adaptation planning
Four focus groups and 41 interviews were conducted in October-November 2007 to gather information on traditional food consumption, availability of and access to traditional foods, and perceived reasons for dietary changes. A one-day workshop was held in October 2009 with 19 community leaders and decision-makers representing the government and local agencies, with a focus on health and environment sectors. The discussion centred on strategies to address issues and concerns raised by local residents about food security over the long term.
Traditional food use patterns and perspectives on food security
Twenty-nine interviews were conducted with community members in April-May 2008 to determine (a) the frequency and quantity of traditional food consumption, and (b) local perspectives on food security. Food frequency results were compared with similar data collected by Wein and Freeman (38) in the early 1990s to identify any changes over time (24).
Food security adaptations documented in oral history
A wealth of information is contained in the oral history database of interview transcripts maintained by VGFN. Using the database index, we selected approximately 100 interviews with the highest density of food security-related keywords, and coded relevant sections to identify historical food security challenges and adaptations.
Training and community involvement
Capacity-building
Community members were involved in multiple ways in this research program. This involvement was largely successful due to existing community capacity, which was bolstered through additional capacity-building opportunities. A major strength was the involvement of a Vuntut Gwitchin consultant who proved to be a very competent community coordinator. This support on the ground ensured the smooth operation of community meetings, focus groups, workshops and the hiring and coordination of local research assistants.
Several people were hired and trained as research assistants to collect data through one-on-one interviews. One of the local partners coordinated the hiring process, spreading word of the opportunities and collecting names of interested parties. Two individuals were hired to conduct qualitative interviews on food security and climate change in the fall of 2007. Subsequently, three additional research assistants developed their skills during a thorough two-day training in March 2008, which focused on food frequency questionnaires and quantitative food security interviews. They then conducted interviews over the subsequent two months. A UNBC researcher was on-site during the first two weeks to supervise and address any questions that arose, and later provided support via telephone and email.
Nutrition program
In April 2008 one of the UNBC researchers presented an interactive nutrition program for students in Grades 4 to 6 at Chief Zzeh Gittlet School in Old Crow, entitled "What Makes Food 'Healthy'? (And Why Does It Matter?)". Students explored why certain nutrients are important for well-being and in which foods they can be found, with an emphasis on traditional foods. A second component focused on making informed dietary choices. Students compared nutrition labels of market foods from the local Northern store (e.g., whole wheat versus white bread) and also compared the nutritional value of traditional and market foods (e.g., salmon versus pre-packaged lunch meat). They were also asked to keep a journal of the foods they ate over several days. Following the nutrition program, the researcher visited the annual youth culture camp organized by the school and VGFN, where youth participated in hands-on experiences that included muskrat trapping and storytelling.
Youth climate change workshops
IPY researchers participated in the January 2009 "Nits' oo nakhwanan, nits' oo gwiidandaii juk ch'ijuk gweedhaa, Our Changing Homelands, Our Changing Lives" conference. Organized by the Arctic Health Research Network-Yukon and supported by the VGFN, the conference brought together Vuntut Gwitchin youth to discuss the challenges facing their community, particularly climate change. Vuntut Gwitchin elders and traditional knowledge experts shared their experiences with youth, and IPY scientists held interactive workshops based on their IPY research on topics ranging from permafrost to tree ring history. In the workshop entitled "What's For Lunch? Climate Change Impacts on Wildlife and What You Eat," IPY food security research team members facilitated a discussion among youth about their perceptions of the factors affecting their community's food security. In the second workshop component, organized by IPY wildlife researchers, students participated in a radio-tracking activity and spent time learning about the furs and biology of animals trapped in the Yukon. IPY researchers also made public presentations in the evenings to update community members on the progress of their respective studies. Since validation meetings had already been held for the food security work, these presentations incorporated the initial community feedback for a second round of verification.
Summer institute on global Indigenous health research
The partnership with Old Crow enabled the participation of a paired team (a food security researcher with a Vuntut Gwitchin youth leader) in the weeklong 5 th annual Summer Institute (SI-5) sponsored by the Canadian Coalition for Global Health Research in Duncan, British Columbia. The 2008 SI-5 focused specifically on addressing Indigenous health research challenges, with the intention of engaging those who are new to the field of global health research. The joint participation allowed the development of a genuine personal and professional connection between the researcher and VGFN member, while also allowing both members to build their own research capacity and networks. SI-5 encouraged participants to explore the challenges faced by the researchers and Indigenous populations working together; strengthened participants' understanding of selected global forces that affect the health of Indigenous peoples; provided opportunities for skill development of relevant competencies such as advocacy, leadership, partnership building and knowledge translation; and discussed issues related to global health research of particular interest and importance to those considering a career in this field. Field trips and speaker presentations exposed participants to the health research challenges faced by many First Nations. The SI-5 culminated in the one-day "Global Indigenous Health Research Symposium" at the University of Victoria, where participants co-presented research posters.
Collaborations
This multidisciplinary, community-based program has offered many opportunities for collaboration, and indeed would not have been feasible without strong partnerships among the IPY researchers, the VGG and the Council of Yukon First Nations (CYFN). In a novel arrangement for a research program of this size, the VGG is the Principal Investigator on the grant, while the researchers are Co-investigators. From the initial proposal planning meeting in Old Crow in January 2006, VGFN community members, local organizations such as the North Yukon Renewable Resources Council, Parks Canada and VGG representatives have played an active role in shaping program objectives, guiding program development, providing feedback on findings and determining appropriate products and outputs to ensure that results can be effectively communicated to a range of stakeholders for use at various levels of organization. Community members also participated in the research process in multiple ways as community coordinators and research assistants. Capacity-building initiatives were implemented where feasible (see above).
Throughout the program, the eight lead researchers and their team members maintained frequent contact with the group and with the VGFN research coordinators, freely sharing updates and results. We also committed to annual face-to-face meetings in Old Crow, where members from each research team and other partner organizations gathered for a week of community meetings.
The validation of food security study results via community visits in March 2008 and January 2009 was central to the success of the research. In each case, the researchers provided a draft report and a one-page summary for each research component, and orally presented preliminary results in a public meeting. Among other feedback, the food frequency data provoked a lively discussion that provided a critical context for the interpretation of results and highlighted important themes.
The Council of Yukon First Nations (CYFN) played a vital role in helping the researchers liaise with appropriate community contacts when necessary (especially in early program stages). Both the Circumpolar Relations and Health Departments supported the program and provided feedback on research methods and findings. CYFN also hosted monthly meetings of the Health Commissioners from all Yukon First Nations, where the food security team presented periodic updates.
An additional collaboration was initiated with the Arctic Health Research Network-Yukon (AHRN-Yukon), leading to participation in multiple phases of their project, entitled "Vuntut Gwitchin Climate Change and Health Research in Northern Yukon: What Do Our Changing Homelands Mean for Our Health?" Phase 1 included the "Youth Climate Change Conference" held in Old Crow in January 2009 (described above), where UNBC research team members participated as workshop leaders and keynote presenters. During Phase 2 -"Knowledge into Action" -UNBC researchers acted as members of the Advisory Committee to offer guidance on research design and training for community youth who carried out interviews on food security adaptation strategies. Phase 3 focused on implementing adaptation strategies, with the UNBC team continuing to offer support.
Food security research
Together, the results from the food security and other multidisciplinary studies in Old Crow and the VGFN traditional territory are contributing to a holistic understanding of the human-environment system and the manner in which it is evolving due to changing climatic and other sociocultural, economic, political and environmental conditions. The results from the food security component are informing the local adaptation planning process in Old Crow. They also contributed to a March 2010 regional workshop of leaders in health and environment sectors from all Yukon First Nations, which resulted in a plan of action for addressing food security issues in the Territory.
Collaborative process
The collaboration among the VGG, researchers and other organizations led to a number of outcomes during the process, as described above (e.g., youth climate change workshops, a school nutrition program, co-participation in a week-long summer institute on global Indigenous health). At the same time, the need for effective translation of research results to policymakers and other stakeholders was recognized, stimulating significant discussion about the project's legacy. Building on periodic interactions throughout the project, the project leaders organized a structured "Leaving a Legacy" workshop to discuss and develop research products, held during the February 2010 annual researcher-community meeting.
As well, we discussed a range of possible outputs that would target our diverse array of stakeholders. In addition to having manuscripts published in academic journals, the Natural Resource Department requested a series of structured research summaries for each component, based on a co-developed template. The primary product for community members at large was a coffee table book, which included images of project partners and community members matched with text describing each research component and highlighting researcher-community relationships. Further plans were made to write up a series of articles in Yukon's popular North of Ordinary magazine. In the first of these articles, the Natural Resources Director of the VGG and the YNNK Principal Investigator outlined some of the benefits of the research collaboration from the community's perspective. The project provided the impetus for the construction of an Old Crow Arctic Research Facility and has led to the process of establishing a community-based environmental monitoring program (ongoing). It also "exposed the realities of the changes affecting the Vuntut Gwitchin way of life, fostered community spirit, encouraged awareness of career opportunities for youth in science, and prioritized the importance of partnerships in science" (39, p. 13). Several research components are continuing beyond IPY as a result of community interest.
DISCUSSION
The community of Old Crow and surrounding Vuntut Gwitchin traditional territory are distinctive in the Yukon largely due to their unique geography. The territory is both remote and home to the Crow Flats, an internationally recognized wetland ecosystem that acts as a breeding and staging area for migratory waterfowl, a passageway for the Porcupine Caribou Herd and a refuge for other arctic wildlife. The traditional territory's natural resources are integral to the culture and traditional activities of the Vuntut Gwitchin. Much of the territory is currently under protected area status or falls within the Integrated Management area, much of which is designated as Zone 1 (lowest development) (40).
The community's remoteness and continued reliance on the Porcupine Caribou Herd has led to a concerted, community-sanctioned effort to draw attention to the plight of the herd and, through their intimate connection, the predicament of the Vuntut Gwitchin themselves. Community leaders have taken bold action in periodically sending representatives to lobby the United States government against drilling for oil in the Arctic National Wildlife Refuge in Alaska (the Porcupine Caribou Herd's calving grounds), and in drawing international attention to this human-environment conundrum in an area of great ecological sensitivity where there are also strong resource development interests. The leadership displayed in dealing with this issue has brought community members together in support of a common cause.
Together, the sensitivity of the location -further heightened by climate change impacts -and the salience of the issue -which engages a broad range of stakeholders -have drawn the attention of researchers in both natural and social sciences. VGFN has capitalized on this interest to engage researchers in order to generate information and interest about their cause and their land. Furthermore, they have a history of effectively nurturing internal talent while at the same time drawing on external human and financial resources, as necessary. This combination of a unique geography, a salient livelihood issue of communal importance, significant human resources and capacity, and an openness to working with outside researchers has made this multidisciplinary, community-based research possible.
The essential role of healthy ecosystems in providing a context for healthy people and livelihoods has been brought forward in several recent international initiatives (e.g., the Millennium Ecosystem Assessment, and the World Health Organization's Commission on the Social Determinants of Health). In recognition of the interrelatedness of ecological and social well-being, it behooves environmental health researchers to approach these issues from a more holistic, systems-oriented perspective, one that clearly resonates with the Aboriginal community. Parkes and Horwitz promote "reciprocal exchange between different modes of thinking, and a flow of new ideas into areas where such thinking has been non-traditional -including growing awareness of the cross-cutting relevance of (eco)systemic approaches and thinking" (41, p. 98). This collaborative, stakeholder-driven, multidisciplinary research program attempts to follow these guidelines while adhering to the new northern research paradigm. It draws on many different perspectives and types of knowledge to better understand the system that supports the many, interconnected facets of continued health and well-being of the Vuntut Gwitchin people.
This research project developed through a fully participatory partnership that was initiated before funding was sought and research began, which meant that VGFN members could shape the research agenda and help to determine the types of outputs that would best suit their needs. As a result, the research will produce recommendations that are applicable at both local and regional levels. Policy recommendations already have support from stakeholders, improving possibilities for implementation.
Participation in this research continues to have multiple benefits for all those involved. While we actively addressed community interests and pushed to improve the manner in which academics partner with Aboriginal communities, we also benefitted on a personal level. The participatory design of this research program allowed us to hear the unfiltered voices of community members speak about the challenges facing their community. The warm welcome that we received each time we visited stems from a long-standing relationship of mutual respect between community members and researchers. These personal connections afforded us unique opportunities to get to know families and spend time on the land, contributing additional perspectives to our work. We were fortunate to share stories and traditional foods with community members, and to participate in activities such as snowshoeing, dog-sledding and pulling fish nets. These experiences helped us to learn about and connect with an exceptional environment and people in a remote part of our country, and to develop an appreciation for Vuntut Gwitchin knowledge, culture and tenacity in the face of many changes. These relationships are fundamental to effective collaboration.
Through this research, we are linking environment and health research in a novel manner, using food security as a cornerstone to engage community members and scientists from multiple disciplines, as well as relevant regional organizations. Through active engagement with Vuntut Gwitchin community members in Old Crow, our research has helped to improve understandings of environmental change and community-environment relationships, while offering a range of training and capacity-building opportunities.
While all research relationships have their challenges, our project's success was largely dependent on certain community characteristics and external factors. First, the community of Old Crow has a positive history of working with researchers. Second, it has significant capacity to take a lead role in carrying out a sustained, collaborative research program. Third, both the researchers and community showed substantial and sustained leadership throughout. Fourth, the northern Yukon provided a unique context in which to study environmental change. The distinctive community context (remoteness, cultural connectivity, reliance on the Porcupine Caribou Herd, self-governance) combined with that of the surrounding territory (particularly Crow Flats, a Ramsar Wetland of International Significance) warranted the attention of a multi-disciplinary team of researchers. Fifth, the nature of the research was such that it required the input of both science and traditional knowledge. Sixth, community priorities were well matched with researcher interests. Finally, the timing and availability of a major source of funding for a large, multiyear project matched well with researcher and community intentions. While each research context will vary, the type of research collaboration described in this paper provides one useful model for new types of participatory health research with northern communities.
The schedule of World Cup and its impact on the team
This paper discusses the organization of the World Cup competition. By establishing a number of scheduling models and applying the analytic hierarchy process (AHP), stochastic simulation, and other methods, a more reasonable arrangement of the competition is obtained. For the arrangement of the order of play, the existing competition format is studied first. Given the increase in the number of participating teams and the previous group-stage arrangements, the problem is considered in terms of the appropriateness of the number of matches, whether the schedule is exciting, whether the arrangement is simple, and whether the ranking is reasonable. Three new competition formats are proposed as improvements on the original format. The analytic hierarchy process is then used to compare and analyze the three new formats and select the optimal one. Finally, under the previously established format, FIFA's points formula is used to analyze and calculate the impact of the results produced under that format on FIFA rankings. A stochastic factor is introduced into the calculation, and computer simulation of the results yields the approximate upward trend of teams in different ranking intervals.
INTRODUCTION
The FIFA World Cup, commonly called the "World Cup", is the most coveted prize in football and the ultimate dream of players from every country (or region). It is held every four years, and any FIFA member state (or region) can enter a team. As the source and foundation of the development and popularization of football, it is also known as the "Cup of Life". A total of 32 teams took part in the final stage of the 2018 World Cup, using a group-plus-knockout format. The 32 teams were divided into eight groups of four; each team played each of the other three teams in its group once, so each group played six games in total. The top two teams in each group by points advanced to the knockout round of 16; four further knockout rounds then determined the final ranking. The tournament lasted one month, with a total of 64 games. Starting in 2026, however, the field will increase to 48 teams. Because of time constraints, a team cannot play too many games, so FIFA has proposed changing the format: each group shrinks from four teams to three, with the first two teams in each group eligible for the second round. To keep the tournament exciting, there should not be too many games whose results have no effect on qualification. For fairness, there should not be too many games in which collusion benefits both sides, and the final result should not depend heavily on luck. Studying how to arrange the order of play, and how the final results affect the FIFA/Coca-Cola World Ranking, is therefore crucial to the tournament as a whole.
Knockout
In a knockout (single-elimination) format, participants play according to a fixed bracket: the loser of each match is eliminated and the winner continues, until the champion and runner-up are decided. The advantage is that the whole competition is completed with the fewest matches, and the matches are intense. The disadvantage is that weaker teams get few opportunities to play and learn, and two strong teams may meet prematurely, with one eliminated unfairly early. Because chance plays a large role in a knockout, results vary widely; seeding, byes, and similar technical supplements cannot completely overcome these defects. We do not want the outcome to depend too much on luck, so we cannot simply choose a knockout format.

Round robin
Round-robin competition is a kind of sports competition in which teams take turns to compete with each other in a certain combination and finally decide the ranking based on the result of all competitions. In a round-robin competition, each competitor must compete against all competitors except themselves and complete all sessions. The round-robin competition can obtain the results of all the matches, so as to arrange the ranking of all the contestants, and the ranking results can objectively reflect the level differences of each participant.
However, there are two hidden dangers in round-robin competitions. First, completing a full round robin requires a very large number of matches. Second, not every game affects the final ranking, so teams may stall or even throw matches. We do not want too many games that have no bearing on qualification, nor games in which collusion is mutually beneficial, so we cannot simply use a round-robin format.

Grouping cycle plus elimination

Grouping cycle plus elimination is a competition system that combines the advantages of the round-robin and elimination systems. In the grouping cycle, all participating teams are first divided into several groups for a preliminary round-robin stage; the winning teams of each group then advance to a second-stage final that determines the ranking. A single round robin is used in the group preliminaries, while the finals may use a single round robin, parallel matches, or cross matches; this format is therefore also called a mixed cycle system. The grouping cycle suits competitions with many participating teams and allows the competition to be completed reasonably and fairly within a short period of time.

The World Cup therefore usually uses this system: teams are first divided into groups for round-robin points play, and the teams that advance by rank then enter a knockout stage. This guarantees that weaker teams do not play too few matches and reduces the element of chance, while the final knockout stage keeps the competition intense and watchable. The shortcoming of the grouping cycle is that the teams differ in strength; if they are not distributed evenly, strong teams may be eliminated early and weak teams placed ahead of them. To overcome this defect, "seeded teams" are designated in the arrangement. A seeded team is one whose strength and past results are relatively strong; such teams should be reasonably separated. Seeds can be determined by negotiation or according to rankings from previous competitions. We assume that, based on known results and rankings from previous matches, the seeded teams are divided evenly among the groups to avoid strong teams meeting early, as sketched below.
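As an illustration of this seeding assumption, the following minimal Python sketch places one seeded team in each group and fills the remaining slots by random lot. The team names, ranking order, and group sizes are hypothetical, chosen to match the proposed 48-team, 16-group format; the paper does not specify a draw procedure.

```python
import random

def seeded_draw(teams_by_rank, num_groups, group_size):
    """Distribute teams into groups: the top `num_groups` teams are
    seeded one per group; the rest are assigned by random lot."""
    seeds = teams_by_rank[:num_groups]           # one seed per group
    rest = teams_by_rank[num_groups:]
    random.shuffle(rest)                         # random lots for non-seeds
    groups = [[seed] for seed in seeds]
    for i, team in enumerate(rest):              # fill remaining slots evenly
        groups[i % num_groups].append(team)
    assert all(len(g) == group_size for g in groups)
    return groups

# Hypothetical example: 48 teams ranked 1..48, 16 groups of 3.
teams = [f"Team{r:02d}" for r in range(1, 49)]
for group in seeded_draw(teams, num_groups=16, group_size=3):
    print(group)
```

Because each group receives exactly one seed before the random fill, no two of the top 16 teams can meet in the group stage, which is the property the seeding rule is meant to guarantee.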
To determine the best scheme for arranging the order of play in advance, we must first make clear what the best competition system is, and then consider the specific arrangement of match times and the relative strength of the teams. To identify the best arrangement, we need to know what the candidate arrangements are and define criteria to judge them quantitatively. As suggested by FIFA, the first round is a group round in which the top two of each group of three advance, leaving 32 teams for the second round. This paper therefore proposes three schemes based on the respective advantages of round-robin and knockout play, explores the optimal arrangement of each, sets quality indicators for a competition system through the analytic hierarchy process, and selects the best system and arrangement by comparative analysis.
Proposal of the scheme
To meet the needs of World Cup event arrangement, we propose the following three schemes.

Scheme I: With each group of three advancing two, 32 teams remain after the first (group) round. The scheme then proceeds by single knockout; to avoid early encounters between strong teams, the first-place team of each group plays the second-place team of another group. This ensures both that strong teams have a high probability of advancing and that weak teams have a small probability of reaching the next round, while pairing teams of comparable strength to guarantee competitive matches. After this knockout round, 16 teams remain and are randomly paired in a second knockout round to determine the top eight; another round determines the top four. The top four play one more round: the two winners become the finalists, and the two losers play again to decide third and fourth place, with the last match naturally being the final. After the group stage and these 32 elimination games, the tournament ends with a total of 80 games.

Scheme II: After the first round of group points play, we have a rough ranking. We propose that the two teams that performed particularly well in the first-round groups skip the next round and advance directly. This round is divided into 10 groups: apart from the two promoted teams, the top 10 teams by first-round points are separated, one into each group. The two remaining slots in each group are filled by random lot. Each group of three plays another round-robin points race, and this time only one of the three teams advances from each group. After this round, together with the two teams promoted earlier, 12 teams remain. The 12 teams are then randomly divided into four groups of three, repeating the previous round's arrangement; one team advances from each group, so the four groups yield the top four. This arrangement lets teams show their full strength and avoids a team losing its chance through a single aberrant game, but it leads to many matches. The top four then play a knockout round: the two winners become the finalists, and the two losers play one more match to decide third and fourth place. The last match is again the final, so after three rounds of cycle points and the final two knockout rounds, a total of 98 games are played.

Scheme III: After the first group round, 32 teams advance. To keep the schedule tense, the next round uses a single knockout, leaving 16 teams, which are divided into two groups of eight. Within each group, the eight teams are paired to produce four winners and four losers; the four losers play again to yield two winners and two losers, and those two losers are eliminated. In this round, four teams are eliminated across the two groups, leaving 12 teams in total. This multi-round knockout gives strong teams more chances to fight for the title while letting weaker teams exit reasonably and without regret. To keep the schedule tense once again, the remaining 12 teams play a single-round knockout, producing the top six. The top six play each other again to reach the top three. The last three play a round of cycle points; to ensure the last match is the grand final, the loser of the first match plays the third team, and the winner of that match meets the winner of the first match in the grand final, thereby deciding the champion, runner-up, and third place. In this way the three kinds of formats are integrated, combining their advantages and avoiding the drawbacks of any one used alone, for a total of 88 games.
Establish a hierarchical comparison model
The target layer for this question is the quality index of the competition system. The criterion layer consists of four parts: appropriateness of the number of matches, excitement of the schedule, simplicity of the arrangement, and reasonableness of the ranking. The scheme layer consists of the three schemes developed above. (The hierarchy diagram from the original is omitted.)
Constructing a pairwise comparison matrix
The judgment scale is defined as follows:

Scale — meaning
1: compared pairwise, the two factors are equally important
3: the former is slightly more important than the latter
5: the former is obviously more important than the latter
7: the former is strongly more important than the latter
9: the former is extremely more important than the latter
2, 4, 6, 8: intermediate values between the adjacent judgments above

The judgment matrix of the criterion layer and the judgment matrices of the scheme layer are given in the corresponding tables (not recovered here).
Consistency test
A positive reciprocal matrix $A=(a_{ij})_{n\times n}$ that satisfies $a_{ij}\,a_{jk}=a_{ik}$ for all $i,j,k=1,2,\dots,n$ is called a consistency matrix. The consistency index is $CI=(\lambda_{\max}-n)/(n-1)$, and the consistency ratio is $CR=CI/RI$. The value of the random index RI is obtained by constructing 500 sample matrices: entries are drawn at random from 1-9 and their reciprocals to form positive reciprocal matrices, and the largest eigenvalues are averaged. When $CR<0.10$, the consistency of the judgment matrix is considered acceptable; otherwise the judgment matrix should be appropriately modified. The results of the total ranking of the schemes are given in the corresponding table; as can be seen from the total weights, the optimal scheme is Scheme 3.
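To make this AHP computation concrete, here is a minimal Python/NumPy sketch that derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks the consistency ratio. The example criterion-layer matrix is hypothetical, since the paper's judgment tables were lost in extraction; the RI values are the standard AHP random indices.

```python
import numpy as np

# Standard AHP random index RI for matrix orders 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Return (priority weights, consistency ratio CR) for a
    positive reciprocal judgment matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue index
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalize to priority weights
    CI = (lam_max - n) / (n - 1) if n > 2 else 0.0
    CR = CI / RI[n] if RI[n] > 0 else 0.0
    return w, CR

# Hypothetical 4x4 criterion matrix (match count, excitement,
# simplicity, ranking reasonableness) on the 1-9 scale.
A = [[1,   3,   5,   3],
     [1/3, 1,   3,   1],
     [1/5, 1/3, 1,   1/3],
     [1/3, 1,   3,   1]]
w, CR = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(CR, 3))
```

With CR below 0.10, the weights can be combined with the scheme-layer matrices to obtain the total ranking of the three schemes.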
FIFA rankings
By consulting the relevant information, FIFA's ranking is based on a points system, and the points for one match are $P = M \times I \times T \times C$, where: 1) M is the match result: 3 points for a win, 1 for a draw, and 0 for a loss; a team that wins on penalties receives 2 points and the loser 1 point. 2) I is the competition coefficient: 1 for a friendly match; 2.5 for World Cup and continental championship qualifiers; 3 for continental championship finals and the Confederations Cup; 4 for World Cup finals matches. 3) T is the opponent coefficient: (200 - opponent ranking)/100; if the opponent is ranked below 150, 0.5 is used directly as the coefficient. 4) C is the regional coefficient, based on the confederations of the two teams. Since a team's final result accumulates over its previous wins and losses, and each victory adds points, the higher a team finishes, the more points it accumulates and the more its FIFA ranking is likely to rise. The following model quantifies the cumulative point increments and the ranking impact when the participating teams achieve the corresponding finishes under the previously established system. (The expression for the accumulated increment was not recovered.) Because teams with different finishes participate in different stages, the final accumulated points differ across teams.
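A minimal Python sketch of this points formula and the paper's stochastic-simulation idea follows. The opponent rankings, coefficients, and the single win probability are illustrative assumptions, not the paper's actual data; the paper's own simulation conditions are not reported in the recovered text.

```python
import random

def match_points(result, comp_coeff, opp_rank, regional_coeff):
    """FIFA points for one match: P = M x I x T x C.
    result: 'win', 'draw', 'loss', 'pen_win', or 'pen_loss'."""
    M = {"win": 3, "draw": 1, "loss": 0, "pen_win": 2, "pen_loss": 1}[result]
    T = 0.5 if opp_rank > 150 else (200 - opp_rank) / 100
    return M * comp_coeff * T * regional_coeff

def expected_tournament_points(win_prob, opp_ranks, n_trials=10_000):
    """Monte Carlo estimate of points accumulated over a list of matches,
    assuming each match is won independently with probability win_prob."""
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for rank in opp_ranks:
            result = "win" if random.random() < win_prob else "loss"
            total += match_points(result, comp_coeff=4.0,  # World Cup finals
                                  opp_rank=rank, regional_coeff=1.0)
        totals.append(total)
    return sum(totals) / n_trials

# Illustrative: a team with a 50% win chance facing opponents
# ranked 10, 40, and 75.
print(round(expected_tournament_points(0.5, [10, 40, 75]), 1))
```

Running such a simulation for teams starting in different ranking intervals gives the approximate upward ranking trend the paper describes.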
Conclusion

Through the above analysis and demonstration, of the three schemes proposed, the combination of single-round and multiple-round knockouts in Scheme 3 has the best effect. Under Scheme 3, the points model computes the result easily and quickly.
The Economic Burden of Bipolar Disorder in the United States: A Systematic Literature Review
Abstract Bipolar disorder (BD) is a mood disorder with subtypes characterized by episodes of mania, hypomania, and/or depression. BD is associated with substantial economic burden, and the bipolar I disorder (BD-I) subtype is associated with high medical costs. This review further evaluated the economic burden of BD and BD-I in the United States (US), describing health-care resource utilization (HCRU) and sources of direct medical and indirect costs. Data were obtained from systematic searches of MEDLINE®, EMBASE®, and National Health Service Economic Evaluation Database. Citations were screened to identify primary research studies (published 2008–2018) on the economic burden of BD/BD-I or its treatment in real-world settings. Reported costs were converted to 2018 US dollars. Of identified abstracts (N=4111), 56 studies were included. The estimated total annual national economic burden of BD/BD-I was more than $195 billion, with approximately 25% attributed to direct medical costs. Individuals with BD/BD-I used health-care services more frequently and had higher direct medical costs than matched individuals without the disease. Drivers of higher direct costs included frequent psychiatric interventions, presence of comorbid medical/psychiatric conditions, and both suboptimal medication adherence and clinical management. Indirect costs (eg, unemployment, lost work productivity for patients/caregivers) accounted for 72–80% of the national economic burden of BD/BD-I. Different definitions for study populations and cost categories limited comparisons of economic outcomes. This review builds on existing literature describing the economic burden of BD and confirmed cost drivers of BD/BD-I. Improved clinical management of BD/BD-I and associated comorbidities, together with better medication adherence, may reduce health-care costs and improve patient outcomes.
Introduction
Bipolar disorder (BD) is a severe and complex mental health disorder composed of different subtypes that present variably. The disorder is characterized by shifts in mood (ie, alternating periods of elation, irritability, and depression), energy, and behavior. 1,2 The lifetime prevalence of BD is estimated to be 4.4% in the United States (US), with most cases emerging during adolescence or early adulthood. 3 BD is a leading cause of disability among young people, 2,4 and is associated with impairments that negatively impact personal, social, and occupational functioning, and reduce quality of life. 1,2,[5][6][7][8] Prior reviews of cost of illness studies have found a substantial economic burden associated with BD, and that cost estimates for the disorder vary considerably across studies. For example, one analysis estimated the per-person total lifetime costs of BD in the US ranged from $11,720 for a single manic episode to $624,785 for a disease course marked by nonresponsive/chronic episodes (1998 US dollars [USD]). 9 Sources of direct health-care costs for individuals with BD include medical expenses associated with psychiatric care (both inpatient and outpatient), treatment (pharmacological and non-pharmacological), and emergency room (ER) visits. 6,[9][10][11] Persons with BD tend to have higher rates of comorbid medical (eg, metabolic syndrome, hypertension) and psychiatric (eg, substance use disorder, anxiety) conditions, which contribute to higher utilization of general medical services compared to the general population. 1,[12][13][14][15] Fewer studies have examined indirect costs (eg, expenditures associated with reduced work productivity, use of caregivers) for those with BD; 6,10,16 yet, their impact is sizable with losses in work productivity previously estimated to represent 20% to 94% of the total societal cost of BD. 6 Bipolar I disorder (BD-I) is a subtype of BD in which individuals experience one or more manic episodes, and accounts for approximately one-quarter of all cases of BD. 3,17 The disease course for BD-I is typically chronic and is associated with significant functional disability and premature mortality. 3,5,[18][19][20] Some evidence suggests that BD-I may also be associated with higher direct medical costs compared to other subtypes of BD; however, the reasons for this are poorly understood. 6 Few studies have elucidated the different drivers that may contribute to greater cost burden for those living with BD, in general, and those with BD-I, specifically.
The objective of this systematic review is to provide an updated report of the economic burden of BD in the US, including a broader spectrum of cost and/or health-care resource use (HCRU) estimates compared with previous reviews. 6,10 Direct and indirect costs of the disorder are summarized, and drivers of these costs are identified. Where specific data existed for BD-I, these estimates are reported separately from those for BD overall. While BD-I has been associated with higher direct medical costs compared with other BD subtypes, 6 this review examines broader cost outcomes and drivers of these costs specifically for patients with BD-I.
Materials and Methods
MEDLINE ® , MEDLINE ® in-process, EMBASE ® , and National Health Service Economic Evaluation Database (NHS EED) databases were searched for primary research studies published between 1 January 2008 and 9 July 2018 on the economic burden of BD and BD-I in the US. Search strategies combined terms related to disease and outcomes and were limited to English-language publications only. The full search strategies and search terms are available in the electronic supplementary materials Tables 1 and 2.
The start year (2008) was selected because it captured a decade of published literature at the time the review was conducted. From 2008 to 2018, several new medications became available for the treatment of BD/BD-I 21 and multiple international guidelines, including a major North American clinical guideline, were revised. [22][23][24][25][26][27][28] In addition, two federal laws were passed in 2008 and 2010 (Mental Health Parity and Addictions Equity Act [MHPAEA] and Affordable Care Act [ACA], respectively) that substantially changed the insurance landscape and availability of mental health benefits in the US. 29 Given these collective events, conducting an updated review to understand the contemporary economic burden of BD/BD-I in the US was warranted.
Publications were included if the population of interest was adults with BD (generally or not otherwise specified) or BD-I, and the economic burden of the disorder or its treatment in the US was reported or could be derived. Economic burden was defined broadly; studies that discussed patterns of HCRU without cost estimates and papers that described other economic impacts associated with BD or BD-I (eg, workplace productivity, disability) were included in this review. Inclusion was restricted to studies conducted in a real-world setting (ie, not randomized controlled trials) and studies that included cohorts of at least 100 patients. Studies that focused on bipolar depression only, or on subtypes other than BD-I, were excluded, as were case reports and cost-effectiveness analyses and similar economic evaluations of specific medications. Reviews were not included but their bibliographies were screened for relevant studies.
Citations from all database searches were combined; duplicates and excluded publication types (eg, randomized controlled trials, case reports) were flagged electronically and removed. Titles and abstracts for the remaining articles were screened by one reviewer, with independent review by a second reviewer if inclusion/exclusion was unclear. Inclusion was confirmed by review of full-text publications by one reviewer, with queries resolved by discussion with a second reviewer. Data from included studies were extracted into a structured spreadsheet by two reviewers, and disagreements were resolved by consensus. Data specific to BD-I were extracted separately wherever possible. The extraction spreadsheet was organized to capture discrete categories of economic outcomes to facilitate descriptive summary of the findings for this review. The methodological characteristics of included cost of illness studies were assessed with the checklist utilized by Kleine-Budde et al, 10 and these results are included in the electronic supplementary materials Tables 4 and 5.
Costs were converted to a common year currency (2018 USD) using the Consumer Price Index (CPI) for Medical Care. 30 If cost-year was not reported in a study, it was assumed to be the last year of the observation period mentioned in the source publication.
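As a sketch of this cost-inflation step, the following Python snippet adjusts a historical cost to 2018 USD using the ratio of CPI values. The CPI figures shown are placeholders, not the actual Medical Care CPI series, which comes from the US Bureau of Labor Statistics.

```python
# Hypothetical CPI (Medical Care) index values; real values would be
# taken from the US Bureau of Labor Statistics series.
CPI_MEDICAL = {2008: 364.1, 2012: 414.9, 2018: 484.7}  # placeholder numbers

def to_2018_usd(cost, cost_year):
    """Inflate a cost from cost_year to 2018 USD via the CPI ratio."""
    return cost * CPI_MEDICAL[2018] / CPI_MEDICAL[cost_year]

# Example: a $10,000 cost reported in 2008 dollars.
print(round(to_2018_usd(10_000, 2008), 2))
```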
Literature Search Results
A total of 4111 abstracts were identified. Following screening, 99 articles were selected for full-text review. After inclusion and exclusion criteria were applied, 56 studies were included; 13 of these (23.2%) reported data specific to BD-I, whereas the other 43 studies (76.8%) reported data on BD (generally or not otherwise specified). The study selection process is shown in Figure 1.
Of the 56 included studies, 30 studies (54%) reported cost data. The assessment of methodological characteristics of these studies found that most reported their data sources and analysis perspective; however, only 15 studies (50%) reported the monetary value of all HCRU and 13 studies (23%) provided separate information about the number of services (eg, health-care) and costs for the cost categories described. Inclusion of sensitivity analyses in these cost studies was uncommon. Of the papers not reporting cost data, 4 studies (7%) reported on HCRU, and 22 studies (39%) described other topics associated with economic burden (eg, work productivity, caregiver burden).
The electronic supplementary materials Tables 3-5 provide a list of all studies included in this review and the assessment of methodological characteristics for the cost studies identified.
Total National Economic Burden
Two studies estimated national costs using prevalence data for BD-I and bipolar II disorder (BD-II) in the US population. Cloutier et al estimated the total annual costs of BD-I in the US at $219.1 billion, corresponding to an average of $88,443 per person with BD-I per year. This figure included $50.9 billion in direct health-care costs (ie, medical and pharmacy); $9.7 billion in direct non-healthcare costs (eg, BD-related substance use disorder, criminal justice involvement for those who commit or are victims of crime, prevention/research costs); and $158.5 billion in indirect costs (eg, loss of work productivity or premature mortality). The excess costs of BD-I (the difference between costs incurred by individuals with and without BD-I) were reported to be $129.9 billion annually, an average of $52,413 per person with BD-I per year. Total costs for individuals with BD-I were 2.46 times greater than for controls without BD-I. 31 A second study estimated the total annual cost burden of BD-I and BD-II at $194.8 billion, including direct costs of $39.6 billion and indirect costs of $155.2 billion. However, the author explicitly acknowledged that these cost estimates were likely to be substantially underestimated, due to certain assumptions on which the analysis was based (eg, prevalence figures that did not include all subtypes of BD; direct and indirect cost estimates sourced from a 20-year-old cost analysis; the assumption that BD-I and BD-II are equally costly disorders; pharmacy costs that only included lithium). 32
Direct Health-Care Costs
Seventeen studies (two for BD-I, 15 for BD) reported on direct all-cause and/or mental health-related costs ( Table 1). Some cost estimates for cohorts with BD were higher and spanned a wider range than those reported for patients with BD-I. Several factors may contribute to this variation, including methodological differences between studies, consideration of different cost components (eg, inclusion of emergency room or other costs), and differences in clinical management and available treatments during the periods studied (2004 to 2007 for BD-I and 1998 to 2014 for BD, respectively).
All-Cause Health-Care Costs
Fourteen studies (two for BD-I, 12 for BD) provided estimates of annual all-cause direct health-care costs (Table 1). Among cohorts with a BD-I diagnosis, annual all-cause direct health-care costs varied from $11,239 to $19,446 per-person-per-year (PPPY). 33,34 Estimates of annual all-cause direct costs reported for patients with BD spanned a wider range, from $11,051 to $46,971 PPPY. 35,36 A retrospective study of commercial health-care claims reported that PPPY all-cause health-care costs (ie, inpatient, outpatient, prescription medications) for individuals with BD were about four times higher than for matched individuals with no mental health disorders and no psychotropic medication use ($19,131 [BD] vs $4706 [no mental health disorders]). 12 A separate study reported that individuals in an employer-based health plan diagnosed with BD had higher all-cause mean per-member-per-month health-care costs than those with diabetes, depression, asthma, or coronary artery disease. This was largely due to higher costs for medications and psychiatric care (inpatient and outpatient) among those with BD. Only individuals diagnosed with both diabetes and coronary artery disease had higher all-cause health-care costs than those with BD. Sixty-four percent of total costs for the BD group were incurred by a small subgroup (20%) of patients, who were more likely to be female, have frequent hospital stays, and have a higher number of comorbidities. 37 In a cohort of community-dwelling dual-eligible Medicare/Medicaid beneficiaries with a mental health disorder in 2005, individuals with a diagnosis of BD had 34% higher medical care costs, and 59% higher prescription drug expenditures than those without a diagnosis of BD.
Among members of this group who used Medicaid-paid community-based long-term care services (eg, in-home services), a diagnosis of BD was associated with 5% higher medical care costs, 15% higher long-term care costs, and 55% higher prescription drug expenditures than those without a diagnosis of BD. The authors reported a similar pattern for individuals who resided in Medicaid-paid institutional (eg, nursing home) long-term care facilities, noting the increased medication costs relative to those with other mental health diagnoses were expected due to this population's greater reliance on pharmacotherapy and having more comorbid conditions. 38
Costs Related to Mental Health Care and Psychiatric Hospitalization
Eleven studies (two for BD-I, nine for BD) evaluated mental health-related costs (Table 1). These studies estimated that annual mental health-related costs totaled between $4521 and $9132 PPPY for individuals with BD-I 33,34 and between $6374 and $21,523 PPPY for individuals with BD. 35,39 Two studies examined the cost of psychiatric hospitalization. The first estimated the cost of a psychiatric hospitalization in patients with BD-I to be $9544. 40 This figure is within the range reported by Stensland et al, who reported that the average cost of community hospital-based inpatient psychiatric care for patients with BD was $1159 to $1262 per day, depending on payer, with an average length of stay between 5.5 days (uninsured) and 9.4 days (Medicare). 41

Health-Care Resource Utilization

HCRU was reported in four studies (Table 2); however, no study reported data specific to patients with BD-I. A diagnosis of BD was associated with high use of outpatient, inpatient, emergency, pharmaceutical, medical, and mental health services (eg, psychotherapy, BD-related acute care).
One study found that having a BD diagnosis increased the odds of being a "high-use consumer" of health care by 70% relative to a diagnosis of depression ("high-use" was defined as using inpatient, mental health ER services, or crisis residential visits three or more times in a fiscal year). 42 Another study found individuals with a diagnosis of BD had greater HCRU. 12 Two other studies provided descriptive data for annual HCRU among commercially insured patients with BD 43,44 and are summarized in Table 2.
Drivers of Direct Health-Care Costs
Eleven studies reported factors that were associated with either increased or decreased direct health-care costs in individuals with BD-I or BD. Factors associated with increased direct health-care costs included having frequent psychiatric interventions (ie, hospitalization, ER visit), the presence of comorbid medical and psychiatric conditions, nonadherence to BD-related medication, approach to pharmacotherapy (eg, use of certain combination treatments), and suboptimal clinical management due to a misdiagnosis of unipolar depression following a BD diagnosis. 12,33,34,37,39,[45][46][47][48][49][50]
Frequent Psychiatric Intervention
Three studies (two for BD-I, one for BD) examined patients who had "frequent psychiatric intervention" (FPI) over a 12-month period (Year 1), evaluating their health-care costs over the subsequent 12-month period (Year 2), relative to patients without FPI. Two studies defined FPI as ≥2 ER visits or hospitalizations with a principal diagnosis of BD, addition of a new medication to the first observed treatment regimen, or ≥50% increase in BD medication dose, within a 12-month period. 33,34 The third study utilized a similar definition for FPI but specified a frequency of ≥4 such events within a 12-month period. 39 FPI was common, with a prevalence of 40% to 53% in BD-I cohorts 33,34 and 52.5% in a group with BD-I or BD-II. 39 Compared to those without FPI, individuals who had FPI incurred greater mental health-related and all-cause medical costs in the year following the FPI (Table 3). They also had a 3.7-fold higher risk of subsequent mental health hospitalization and 3.1-fold higher risk of subsequent ER visits in the year following the FPI. 34

Comorbidities

Seven studies (two for BD-I, five for BD) reported on the economic burden of comorbidities among individuals with the disorder (Table 4). Across studies, cardiometabolic comorbidities (eg, hyperglycemia or diabetes, cardiovascular disease, dyslipidemia, hypertension, and obesity) and psychiatric comorbidities (eg, substance/alcohol abuse, anxiety disorder) were associated with higher medical care costs and/or increased HCRU in patients with the disorder. In one inpatient sample (Table 4), each additional cardiometabolic comorbidity was associated with direct costs that were higher by 12.3% (medical), 26.6% (pharmacy), and 13.4% (total; all p<0.0001), along with a significantly higher in-hospital mortality rate and longer hospital stays. Patients with FPI incurred significantly higher medical care costs and had significantly greater comorbidity burden compared to those without FPI. 33,34,39 Comorbidities with significantly higher prevalence rates in patients with FPI included anxiety disorder, substance use disorder, and depressive disorder. Additionally, in the two studies of BD-I populations, patients with FPI had significantly higher comorbidity scores and significantly greater rates of hypertension and dyslipidemia. 33,34 In a cost analysis by Durden et al, these three factors were associated with the higher total annual adjusted all-cause medical costs for patients with FPI relative to those without FPI. 34 Studies of BD in general also reported that comorbidities contribute significantly to the economic burden of BD. Guo et al estimated that 33% of PPPY direct health-care costs were related to the treatment of BD, while the remaining 67% were attributable to treatment of psychiatric (eg, substance/alcohol use disorders, personality disorder) and medical (eg, obesity, diabetes) comorbidities. 46 Similarly, analyses of health-care claims from an employer-based health plan found that health-care costs associated with BD were driven, in part, by patients' comorbidity burden. 37 Another two studies reported associations between cardiometabolic comorbidity burden and increased acute care HCRU and costs for BD patients. In an evaluation of administrative hospital data for 30 days post-discharge, 60.5% of patients with an inpatient diagnosis of BD had at least one cardiometabolic comorbidity, and 33.4% had two or more. Those with one or more cardiometabolic conditions (vs none) had an increased likelihood of hospital readmission in the 30 days post-discharge, higher costs, longer lengths of stay, and higher in-hospital mortality. 50 Centorrino et al reported that individuals with BD had significantly more metabolic comorbidities than matched individuals from the general population (prevalence 37% vs 30%, p<0.0001). This was reflected in significantly higher medical service costs, particularly for inpatient admissions, ER visits, and prescriptions for these conditions, in the BD cohort. 12
Adherence to BD-Related Medication
Six studies (one for BD-I, five for BD) reported on economic aspects of adherence to BD-related medications, using various definitions of adherence. 40,45,[51][52][53][54] Suboptimal adherence to BD medication was common. In a claims database study, only 35.3% of individuals with BD were adherent as measured by medication possession ratio (MPR) ≥0.80 over 12 months. 53 Individuals with lower antipsychotic adherence, as measured by MPR, had higher direct health-care costs in the form of inpatient and outpatient mental health-related HCRU and expenditures. 40,45,51,52 For example, one retrospective study reported that improved adherence to SGA therapy (ie, a 1-unit increase in MPR) was associated with lower quarterly mental health-related medical costs of $192 to $686 per patient. 45 Additionally, suboptimal adherence to BD-related medications resulted in higher indirect costs in the form of reduced workplace productivity ($427 - $1156 PPPY) 53 and reduced functional status, 53,54 compared to those who maintained higher levels of adherence.
Approach to Pharmacotherapy
Seven studies evaluated associations between medication regimens and health-care service use and/or costs among individuals with BD, predominantly with antipsychotics. 46,47,[55][56][57][58][59] One of them, a longitudinal cohort study, found that more than 8% of patients with BD receiving a second-generation antipsychotic (SGA) received combination treatment with more than one SGA concurrently; analyses found no association between disease severity and use of combination SGA treatment. Patients receiving a combination SGA regimen had greater rates of adverse events (eg, dry mouth, tremor, and sedation), nearly two to three times greater HCRU for medical and psychiatric services, respectively, and this regimen was associated with slightly worse global functioning relative to those treated with SGA monotherapy. 55 A second study of Medicaid claims for patients initiated on SGA therapy found only 58% were prescribed a clinically recommended dose of their index SGA. In this subset, there were no significant differences in annual medical and mental health-related costs across individual SGA treatment groups. 57 Other studies evaluated use of SGAs as a class compared to use of traditional mood stabilizers or in combination with traditional mood stabilizers 46,59 or examined cost differences for BD patients treated with different SGA agents alone or in combination with a mood stabilizer. 47,56,58 In general, the significant cost differences reported between treatment groups were driven by the risk or use of hospital services during the study period. 46,47,56
BD Patients with Subsequent Diagnoses of Unipolar Depression
Two studies estimated the occurrence of "incongruent diagnoses" among patients previously diagnosed with BD, defined as receipt of a diagnosis of unipolar depression 12 months following an initial BD diagnosis (17.5% to 27.5% of BD patients). 48,49 Unipolar depression diagnoses were considered "incongruent" as depressive episodes among BD patients would be expected to be treated as BD. The BD patients who received a subsequent diagnosis of unipolar depression had significantly higher average annual health-care costs (+$2676 PPPY), three times more psychiatric hospitalizations, and twice as many psychiatric ER visits than BD patients without a subsequent unipolar depression diagnosis. 48 A chart review for a subset of patients with incongruent diagnoses found different providers were documented for the initial BD diagnosis vs the subsequent unipolar depression diagnosis 76% of the time, suggesting gaps in continuity of care may have contributed to these patterns of "incongruent diagnoses". 49

Direct Non-Health-Care Costs
Criminal Justice System
There were two studies (both in BD-I populations) that reported incremental costs of BD to the criminal justice system, such as costs of incarceration, policing, and legal costs. 60,61 In a patient survey, employees with BD-I were more likely to report having been involved in a crime than co-workers without a diagnosis of BD-I. 60 The second study examined public expenditures related to criminal justice, medical, mental health, and social welfare services for persons arrested in a large Florida county who had serious mental illnesses (SMIs). This analysis found psychiatric diagnosis influenced total expenditures; individuals with BD-I had the second-highest total quarterly costs ($2525) behind those with a psychotic disorder ($4209). 61
Indirect Costs

National Burden
The total annual indirect costs of BD-I in the US was estimated at $158.5 billion, constituting 72.3% of the total economic burden of the disorder. 31 About half (50.3%) of indirect costs were related to unemployment; the rest were attributed to caregiving productivity loss (34.1%); all-cause premature mortality among individuals with BD-I (8.6%); productivity loss among individuals with BD-I (6.4%); and direct health-care costs for caregivers (0.6%). The annual all-cause mortality rate for the BD-I population was found to be 3.4-to 11.4-times higher than for the US general population, depending on age group. Suicide was 10.3-to 16.2-times more common than for the general US population, and was responsible for an estimated 19% of the costs (measured as productivity loss) associated with premature deaths among those with BD-I. These findings were similar to another study that estimated total annual indirect costs for BD-I and BD-II of $155.2 billion (79.7% of the total cost burden). 32
Workplace Productivity
Seven studies (three for BD-I, four for BD) evaluated effects on workplace productivity or employment, and all found that the disorder had a negative economic impact for employed individuals and their employers (Table 5). Individuals with BD or BD-I were more likely to be unemployed, miss work, have reduced work hours due to medical or mental health-related reasons, receive disability payments, or have been fired or laid off compared with those with no mood disorders. 60,62 Studies of employed persons with BD reported increased indirect costs due to work absence and disability, 53 as well as functional deficits that adversely affected work quality, work attendance, and ability to maintain employment. [63][64][65] Moreover, an increased number of lifetime mood episodes was associated with higher likelihood of permanent disability and unemployment. 64
Caregivers and Families
The economic burden of BD often extends to families and caregivers of these patients. In an analysis of the national burden of BD-I in the US, it was estimated that caregivers' productivity loss and direct health-care costs accounted for more than a third of the total annual indirect costs of the disorder. These estimates were based on assumptions that caregivers devoted an average of almost 29 hours per week to caring for an individual with BD-I and that more than half (57.6%) of individuals with BD-I resided with family members. 31 A second study reported that total annual health-care costs were 239% higher for families containing a member with BD compared to matched families without a diagnosis of SMI. Specifically, families including a member with BD made more outpatient visits, had more inpatient hospital stays, and filled more prescription medications than the matched families. Notably, most of the total HCRU and costs related to conditions other than BD. The authors suggested that this may be related to the psychological stress of living with and/or caring for a family member with BD. Another possibility for the greater HCRU costs observed is that BD families may have more frequent interactions with the health-care system (on behalf of the member with BD), providing them with additional opportunities to discuss and/or pursue help with their own health concerns compared to families that do not include a member with a SMI. 66
Discussion
This literature synthesis presents a comprehensive review of contemporary literature describing the direct and indirect costs associated with BD and BD-I in real-world settings in the US, and the drivers of those costs. It builds on the findings of prior reviews describing the disease burden of BD, which were more focused on methodologic differences among published studies that specifically described cost data related to the burden of BD. 6,10 This review included a broader collection of research than prior reviews, such as papers describing resource use or changes in work productivity without associated cost estimates, for additional perspective on the economic burden of BD/BD-I. Because BD encompasses multiple disease subtypes, this review reflected economic drivers that are applicable to both BD as a whole as well as the subset of patients who live with BD-I. While previous research identified differences in direct medical costs between BD-I and other BD subtypes, 6 this review allowed for identification of broader cost outcomes and drivers of these costs among patients with BD-I specifically.
National burden estimates for BD and BD-I in this review show the costs associated with this disorder are substantial. Two studies estimated total annual costs of $195 billion (BD-I and BD-II) and $219 billion for BD-I (both 2018 USD), in analyses that assumed a lifetime prevalence of 2.1% (BD-I and BD-II) and 1.0% (BD-I) of the adult population, respectively. 31,32 In contrast, the total annual cost of diagnosed diabetes was estimated at $333 billion (2018 USD) for a disease that affects 9.7% of the adult population. 67 For both BD and BD-I, the majority of total economic burden (72% to 80%) was attributed to indirect costs, such as losses in work productivity (eg, unemployment, absences associated with morbidity) and caregiving. These population-level findings were aligned with other studies in this review that reported reduced work attendance, functioning, and ability to maintain employment for individuals with BD or BD-I 53,60,62,65 as well as increased HCRU and worsening health status for family members of affected individuals. 31,66,68 Given the degree to which indirect costs impact the total costs of BD and BD-I, this topic should remain a research priority.
Thirteen studies in this review reported data specific to BD-I populations, summarizing considerable indirect and direct medical costs, with many of the cost drivers reported similar to those in studies of BD as a whole. Functional impairments among individuals with BD-I were associated with increased risk of unemployment or becoming disabled. 31,60,64 This risk was higher in persons who experienced recurrent mood episodes. 64 High direct medical costs, particularly for acute care, were reported for patients with BD-I specifically. 31,33,34,40 Among those with greater HCRU, nonadherence to pharmacotherapy and presence of comorbid conditions (eg, substance use disorder, hypertension) contributed to higher cost burden. 33,34,40 These data did not clarify whether the BD-I subtype is a more costly form of the disease; thus, additional research into real-world indirect and direct costs associated with the clinical management and treatment of BD-I is needed to help inform key stakeholders and public policy decisions for this population. Taken together, these observations underscore the need to improve patient outcomes and reduce overall economic burden by implementing strategies of disease and medication management. Treatment guidelines recommend patients receive long-term pharmacotherapy to reduce the recurrence of mood episodes and improve the stability of patients' psychiatric and general health, their general functioning, and quality of life; however, this is a population in which medication adherence is typically poor. 22,28,69 Most currently prescribed mood-stabilizing agents have undesirable side effects (eg, changes in cognitive function, tremor) that are poorly tolerated by patients 70,71 and also have the potential to induce or exacerbate comorbid conditions that may require intervention. 22,28,69 Choice of medication for BD is complex, balancing patient needs, symptoms, and treatment preferences with the risks of available therapies.

[Table note: Multivariate models showed that increases in the total number of lifetime mood episodes were associated with small but significantly higher likelihoods of permanent disability (b=0.01) and unemployment (b=0.01); recurrent mood episodes and repeated depressive episodes (vs mania) were consistent predictors of functional impairments in patients with BD-I; analyses by episode type found that repeated manic episodes were a significant predictor of the likelihood of unemployment whereas repeated depressive episodes were not.]
Interventions aimed at optimizing care delivery, such as integrated health-care programs combining primary care with specialists (eg, psychiatrists, pharmacists), have promise for improving the clinical management of BD and its comorbidities. [72][73][74][75] These teams work collaboratively to tailor treatment choices to patients' psychiatric and medical care needs and to efficiently intervene to address factors that may be barriers to medication adherence. 72,75 Collaborative care of this kind has the potential to reduce the need for acute, intensive, or emergency health-care interventions by providing better continuity of care, while simultaneously reducing the indirect costs of BD and improving patients' lives. 75

Our review should be considered in the context of its limitations. Only studies that were published between 2008 and 2018 were included. Importantly, many of the studies characterized costs and burden using definitions of BD that predate the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Because the DSM-5 criteria broaden the definition of BD, 76 cost estimates reported from studies using earlier DSM criteria may not be representative of contemporary clinical experience. Included studies did not shed light on cost differences between persons with BD-I relative to other BD subtypes; however, results from Cloutier et al's recent analysis of the national burden of BD-I in the US suggest that the cost burden of BD-I is similar to figures reported for BD generally when costs are adjusted to a common year. 31 The assessment of methodological characteristics of included cost studies found that most sufficiently reported the components in the quality checklist; however, only 4 studies provided results of sensitivity analyses, which may increase the level of uncertainty around some of these estimates. Categorization of nonmedical costs was inconsistent in the literature and limits comparability; for example, costs of criminal justice involvement were categorized differently across the studies that included them. In addition, aggregate cost estimates were bundled in ways that made it challenging to reliably separate component costs. Therefore, using the CPI for Medical Care as the standard for converting costs to common-year currency (2018) may not accurately reflect BD-related non-health-care costs.
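Because several of the limitations above hinge on the common-year cost conversion, a minimal sketch of the CPI adjustment may be useful; the index values below are placeholders, not actual CPI for Medical Care figures.

```python
def to_common_year(cost: float, cpi_source: float, cpi_target: float) -> float:
    """Inflate a cost from its source year to the target year using the
    ratio of consumer price index values (here, CPI for Medical Care)."""
    return cost * (cpi_target / cpi_source)

# Hypothetical example: a $10,000 cost reported in 2012, restated in
# 2018 dollars using placeholder index values.
print(f"${to_common_year(10_000, cpi_source=425.1, cpi_target=498.4):,.0f}")
```

As the text notes, applying a medical-care index to non-health-care costs (eg, criminal justice involvement) is a known source of imprecision.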
Other limitations inherent to the literature summarized included methodological variability in approaches to costing of data sources, selection of cost items and comorbidities, statistical methods used, and patient selection. Similar to prior reviews, 6,10 which discussed the methodological and quality issues in cost studies of BD in greater detail, this review found that greater transparency and specificity in study methods are needed to improve the comparability of results across studies. Many included studies relied on administrative claims data, thus limiting their analyses to those costs for which a payer is responsible. Other relevant costs, such as out-of-pocket payments or expenses carried by other payers (eg, rehabilitation paid from pension funds), were rarely included. Some studies focused on general cost categories (inpatient or outpatient care), while others also included the services of supporting departments such as ER, laboratory, and social work. Some studies matched their samples according to individuals' age and gender; others used statistical methods to adjust for sociodemographic characteristics. Most studies recruited from special populations (eg, privately insured individuals; recent hospital discharges; employed persons) in which reported costs may not be representative of the general BD population. Some cost studies focused on newly available treatments, which may have resulted in higher reported costs compared to studies that included a broader selection of treatments, due to increased pharmacy costs. Additionally, inconsistency in the way that "indirect" costs were defined or apportioned across publications led to heterogeneous definitions of indirect cost categories, resulting in widely differing estimates. This variability in costing methodologies and definitions limited comparisons across and between studies. However, this review summarized a broad range of studies to provide a comprehensive picture of the economic burden of BD and BD-I.
Conclusion
There is clear evidence from the published literature that BD (including BD-I specifically) and its comorbidities exert a large economic burden in the US on patients, caregivers, families, employers, and society. This burden encompasses direct health-care utilization and costs, loss of workplace productivity, caregiving, and other indirect costs. While estimates of indirect costs associated with BD and BD-I are substantial, they are infrequently quantified in the literature and warrant further study. Interventions that target better disease management and medication adherence may reduce the direct and indirect cost burden of BD and BD-I and improve patient outcomes.
Correlation of expression of multidrug resistance protein and messenger RNA with 99m Tc-methoxyisobutyl isonitrile (MIBI) imaging in patients with hepatocellular carcinoma
AIM: To explore whether P-glycoprotein (Pgp) and other pumps, multidrug resistance-associated protein (MRP) and lung resistance protein (LRP), could affect tumor accumulation and efflux of 99m Tc-MIBI in liver cancer. METHODS: Seventy-eight surgically treated liver cancer patients were included in this study. Before surgery, 99m Tc-MIBI SPECT was performed 15 min and 120 min after injection of 20 mCi 99m Tc-MIBI. Early uptake (L/Ne), delayed uptake (L/Nd), and washout rate (L/Nwr) of 99m Tc-MIBI were obtained. Expressions of Pgp, MRP and LRP were investigated with Western blotting and immunohistochemistry. Messenger RNA (mRNA) levels of Pgp, MRP and LRP were determined by RT-PCR. RESULTS: No 99m Tc-MIBI uptake was found on 99m Tc-MIBI SPECT in tumor lesions of 68 of 78 (87.2%) patients with hepatocellular carcinoma. P-gp expression was observed in tumor tissues of the patients with no uptake of 99m Tc-MIBI (P<0.017). No appreciable correlation was found between liver cancer 99m Tc-MIBI images and expression of MRP or LRP at the protein or mRNA level. CONCLUSION: 99m Tc-MIBI SPECT is noninvasive and useful in predicting the presence of MDR1 gene-encoded Pgp in patients with hepatocellular carcinoma.
INTRODUCTION
Multidrug resistance (MDR) is the main barrier to efficient chemotherapy of human malignancies. MDR has been closely associated with overexpression of multidrug resistance genes (MDR1) [1] and has been observed in hepatocellular carcinomas (HCC) [2] . The MDR phenotype has been defined on the basis of the cellular drug targets involved: Pgp, MRP, LRP and atypical MDR (mediated through altered expression of topoisomerase type II) [3][4][5][6] . Pgp, encoded by the MDR1 gene, is a 170-ku transmembrane glycoprotein and acts as an adenosine triphosphate (ATP)-driven drug efflux pump to reduce drug accumulation [7] . MRP is a 190-ku membrane-bound glycoprotein and can act as a glutathione S-conjugate efflux pump by transporting drugs that are conjugated or cotransported with glutathione [8,9] . Both Pgp and MRP are integral membrane proteins belonging to the ATP-binding cassette (ABC) superfamily of transporter proteins, which appear to confer resistance by decreasing intracellular drug accumulation [10] . In contrast, LRP is not an ABC transporter protein. LRP has recently been identified as a vault protein, a typical multisubunit structure involved in nucleocytoplasmic transport [11] . Determination of these MDR proteins at the time of diagnosis is imperative to the development of rational therapeutic strategies for preventing drug resistance. 99m Tc-MIBI is a cationic lipophilic agent, widely used for myocardial perfusion imaging and for detecting various tumors [12][13][14][15][16][17][18] . Recent evidence has shown that 99m Tc-MIBI is a suitable transport substrate for Pgp and may provide additional information about the Pgp status of tumor cells [19,20] . It has been reported that MIBI accumulates within the mitochondria and cytoplasm of cells on the basis of transmembrane electrical potentials. Malignant tumors show increased transmembrane potential as a result of increased metabolic requirements, which induces increased accumulation of MIBI in tumors [21] . The potential advantage of 99m Tc-MIBI imaging lies in its superiority in detecting the presence of Pgp overexpression in vivo noninvasively [22,23] . Recently, 99m Tc-MIBI efflux has been shown to be a substrate for MRP in vivo [24] . 99m Tc-MIBI imaging or SPECT has been performed in various cancers [25,26] , but no clinical studies in HCC have been found. The aim of this study was to determine whether Pgp and other pumps, MRP and LRP, could affect tumor accumulation and efflux of 99m Tc-MIBI in hepatocellular carcinoma.

99m Tc-MIBI SPECT imaging

99m Tc-MIBI SPECT was performed on all patients. For SPECT of the liver, 72 projections were obtained using a 64×64 matrix at 45 s per view. Image reconstruction was performed using filtered back projection with Butterworth and ramp filters. Transverse, coronal, and sagittal sections were reconstructed. Attenuation correction was not applied. SPECT images were compared with liver CT images, and accumulation in liver tumors was interpreted by nuclear medicine physicians. The findings on 99m Tc-MIBI liver images were measured semiquantitatively. Regions of interest (ROIs) were manually defined on the transaxial tomograms showing the lesion's highest uptake in the center of the tumor. ROIs placed on the lesions (L) encompassed all pixels that had uptake values of >90% of the maximum uptake in that slice, and the average rate in each ROI was calculated. Another ROI of the same size was then drawn over the normal lung (N) on the same transverse section. The early uptake (L/Ne) and the delayed uptake (L/Nd) were obtained.
The washout rate (L/Nwr) was calculated using the following formula: L/Nwr = [(L/Ne − L/Nd)/(L/Ne)] × 100.
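As a minimal illustration of this formula (the ratios below are invented for the example, not patient data):

```python
def washout_rate(l_ne: float, l_nd: float) -> float:
    """Percentage of the early lesion-to-normal uptake ratio (L/Ne)
    that has washed out by the delayed scan (L/Nd)."""
    return (l_ne - l_nd) / l_ne * 100

# Example: early ratio 2.8 falling to 1.25 at the delayed scan
print(f"L/Nwr = {washout_rate(2.8, 1.25):.1f}%")  # ~55.4%
```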
Immunohistochemical study
After resection of HCC, immunohistochemical study of the biopsy or resected tumor tissues and surrounding nontumorous liver parenchyma was performed. Four-micrometer-thick, formalin-fixed, paraffin-embedded tissue sections were cut from the specimens and mounted on poly-L-lysine-coated glass slides (Sigma Chemical Co., St. Louis, MO). The standard avidin-biotin-peroxidase complex (ABC) technique was used for immunostaining using a LSAB kit (Dako Co., Carpinteria, CA).
After deparaffinization and rehydration, the sections were treated with 1 mL/L methanolic hydrogen peroxide for 20 min to block endogenous peroxidase activity, incubated with normal horse serum for 30 min at 37°C, and then with the primary antibodies JSB-1 (1:20), MRP1 (1:10) and LRP-56 (1:10) overnight in a moist chamber at 4°C. The tissue sections were then incubated with the avidin-biotin-peroxidase complex. The final reaction product was revealed by exposure to 0.3 g/L diaminobenzidine, and the nuclei were counterstained with Mayer's hematoxylin.
A negative control was obtained by staining the sample with secondary antibody only, and a positive control by inclusion of normal liver tissue. The results of immunostaining were interpreted independently by two pathologists who were unaware of the imaging studies. Expressions of Pgp, MRP and LRP were scored as follows: -, negative; +, ≤10% positive tumor cells; ++, 10-30% positive tumor cells; +++, >30% positive tumor cells.
Quantitative RT-PCR
RT was performed with random primers using a complementary DNA (cDNA) synthesis kit (Promega, Madison, WI). RT reaction reagents were added as follows: 2 µL of MgCl2 (50 mmol/L), 2 µL of reverse transcription buffer (Tris-HCl [pH 8.3] 100 mmol/L, KCl 500 mmol/L and Triton X-100 10 g/L), 2 µL of deoxynucleotide mixture (10 mmol/L), 0.5 µL of RNase inhibitor (20 U), 2 µL (15 U) of avian myeloblastosis virus reverse transcriptase, 1 µL of random primers (500 µg/mL) and 5 µg of substrate RNA. The final reaction volume (20 µL) was completed with RNase-free water. First-strand cDNA synthesis was carried out at 42°C for 30 min in a DNA thermal cycler (PTC-100, MJ Research Inc., Watertown, MA). Afterwards, the tubes were incubated at 99°C for 5 min to stop the reaction, then kept at 4°C until PCR was performed. Expression of the target genes (MDR1, MRP and LRP) and the endogenous reference β-actin was quantified using the primers and standards. The primers were designed using the software Primer Express (Applied Biosystems) (Table 1).
RT -PCR
Expression of the target genes (MDR1, MRP and LRP) and the GAPDH gene was quantified using the primers and standards. The primers were designed using the software Primer Express (Applied Biosystems) (Table 2).
RT-PCR was performed according to the TaqMan two-step method using the ABI PRISM 7700 sequence detection system (Applied Biosystems). Nontemplate controls, standard dilutions, and samples were assayed. A 25-µL PCR reaction mixture was used, containing 200 ng of the sample cDNA, TaqMan buffer, 200 mmol/L each of deoxyadenosine triphosphate, deoxycytidine triphosphate and deoxyguanosine triphosphate, 400 mmol/L deoxyuridine triphosphate, 5.5 mmol/L magnesium chloride, 0.025 U/mL AmpliTaq Gold DNA polymerase (Applied Biosystems), 0.01 U/mL AmpErase uracil N-glycosylase (Applied Biosystems), 200 nmol/L forward and reverse primers, and 100 nmol/L probe. PCR cycling conditions included an initial phase at 50°C for 2 min, followed by 95°C for 10 min for AmpErase, then 40 cycles of 95°C for 15 s and 60°C for 1 min. Quantification of the PCR products was based on the TaqMan 5' nuclease assay using the ABI PRISM 7700 sequence detection system. The starting quantity of a specific mRNA in an unknown sample was determined by preparing a standard cDNA. The standard curve was generated on the basis of the linear relationship between the CT value (corresponding to the cycle number at which a significant increase in fluorescence signal was first detected) and the logarithm of the starting quantity. The unknown samples were quantified by the software of the ABI PRISM 7700 sequence detection system, which calculated the CT value for each sample and then determined the initial quantity of the target using the standard curve. The amount of expressed target gene was normalized to that of GAPDH.
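To make the standard-curve step concrete, the sketch below reproduces the underlying logic in Python: CT is fitted as a linear function of log10(starting quantity) over the standard dilutions, the fitted line is inverted for unknown samples, and the target is normalised to the reference gene. All numbers are hypothetical; in the study this calculation was performed by the ABI PRISM 7700 software.

```python
import numpy as np

# Hypothetical standard dilution series: known template amounts and
# the CT values measured for them.
std_quantity = np.array([1e6, 1e5, 1e4, 1e3, 1e2])   # template copies
std_ct = np.array([15.1, 18.4, 21.8, 25.2, 28.6])

# CT is linear in log10(quantity): CT = slope*log10(Q) + intercept
slope, intercept = np.polyfit(np.log10(std_quantity), std_ct, 1)

def starting_quantity(ct: float) -> float:
    """Invert the standard curve to estimate the initial template amount."""
    return 10 ** ((ct - intercept) / slope)

target_q = starting_quantity(23.0)   # unknown sample, target gene
gapdh_q = starting_quantity(19.5)    # same sample, reference gene
print(f"normalized expression: {target_q / gapdh_q:.3f}")
```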
Western blotting
Liver cancer samples were analyzed for the presence of Pgp, MRP and LRP proteins. Samples were washed in PBS and homogenized in a lysis buffer [27] . Protein supernatants were quantified using the Lowry assay, and equal amounts of protein from each sample were separated by SDS-PAGE and electroblotted onto nitrocellulose membranes. Membranes were probed with monoclonal antibodies recognizing Pgp, MRP and LRP (Sigma Co.), respectively. Enhanced chemiluminescence was used for protein detection.
Statistical analysis
The results of L/Ne, L/Nd, and L/Nwr were expressed as mean±SD. Differences in L/Ne, L/Nd, and L/Nwr between patients with (-), (+), and (++) Pgp, MRP, and LRP expression were determined using Student's t-test, as were differences between patients with high and low Pgp mRNA, MRP mRNA, and LRP mRNA expression. Differences were considered statistically significant at P<0.05.
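As a small sketch of the comparison described above, assuming scipy is available (the values are illustrative, not the study data):

```python
import numpy as np
from scipy import stats

# Hypothetical L/Nd values for Pgp mRNA low- vs high-expression groups
lnd_low_pgp = np.array([2.9, 2.2, 3.4, 2.6, 2.8])
lnd_high_pgp = np.array([1.1, 1.6, 0.9, 1.4, 1.3])

t_stat, p_value = stats.ttest_ind(lnd_low_pgp, lnd_high_pgp)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
print("significant" if p_value < 0.05 else "not significant")
```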
RESULTS
All 78 surgically obtained tissue samples were assessed to estimate the levels of Pgp, MRP, and LRP expression at the protein and mRNA levels. Table 3 summarizes the immunohistochemical results and RT-PCR data.
Correlation of 99m Tc-MIBI results with immunohistochemical results
Significant MIBI uptake on 99m Tc-MIBI SPECT was noted in tumor lesions of 10 (12.8%) patients with HCC, but not in tumor lesions of 68 (87.2%) patients with HCC. In patients with MIBI uptake, immunohistochemical analysis of tumor tissues showed no detectable P-glycoprotein-positive cells. But immunohistochemical analysis of tumor lesions in patients without MIBI uptake revealed uniformly distributed P-gp-positive cells. We noted a significant correlation between 99m Tc-MIBI SPECT findings and P-gp expression in tumors of patients with HCC.
MRP and LRP protein expression was found in tumor lesions of 7 and 4 patients, respectively, and no correlation was found with 99m Tc-MIBI.
Correlation of 99m Tc-MIBI with RT-PCR results
No correlation was found between L/Ne and the level of Pgp mRNA, MRP mRNA, or LRP mRNA. The mean L/Nd (2.78±0.64) of the Pgp mRNA low-expression group was significantly higher than that (1.25±0.43) of the Pgp mRNA high-expression group (P=0.0115, Figure 1). L/Nd was not related to the level of MRP mRNA or LRP mRNA. A significant difference in L/Nwr was also found between the Pgp mRNA high-expression and low-expression groups. L/Nwr was not related to the level of MRP mRNA or LRP mRNA. Grouping was according to Pgp immunohistochemical positivity: group I contained 4 MRP-positive and 3 LRP-positive cases, and group II contained 3 MRP-positive and 1 LRP-positive case.
DISCUSSION
The resistance of tumors to multiple drugs is a major problem in cancer chemotherapy. Pgp, a transmembrane ATP-dependent efflux pump encoded by the MDR1 gene, has a central role in multidrug resistance. Increased amounts of Pgp may confer multidrug resistance to cells by preventing intracellular accumulation of a variety of cytotoxic drugs. A unique feature of multidrug resistance is the apparent capacity of Pgp for recognizing and transporting a large group of cytotoxic compounds sharing little structural or functional similarity other than being relatively small, hydrophobic, cationic agents, including anthracyclines, Vinca alkaloids, and actinomycin D. Evidence has shown that Pgp, as a drug efflux pump, extrudes 99m Tc-MIBI and other drugs from cells, and that Pgp expression and enhanced efflux of 99m Tc-MIBI from these cells are closely connected [28,29] . In animal models, faster clearance of 99m Tc-MIBI was observed in tumors with Pgp expression than in those without [30,31] . Our study revealed that the mean L/Nd in Pgp (-) patients was significantly higher than that in Pgp (++) patients (P=0.035). Pgp (++) patients had a higher L/Nwr than Pgp (-) patients (P=0.027). No correlation was found between L/Ne and Pgp expression. The same results were obtained at the mRNA level. Moreover, no correlation was found between MRP and 99m Tc-MIBI SPECT; the same result was obtained for LRP at both the protein and mRNA levels. 99m Tc-MIBI uptake by tumors is associated with many factors, including direct mechanisms such as negative transmembrane potential and drug efflux pumps, and indirect mechanisms such as blood flow and capillary permeability. We considered L/Ne to be more affected by blood flow. In contrast, L/Nd and L/Nwr clearly reflected Pgp expression as an intrinsic property of the tumor.
To date, there have been some reports about the expression of Pgp in tumor tissues of patients with HCC [32,33] . Resistance to cancer chemotherapy in HCC has been attributed to Pgp expression. The specific localization of Pgp and the incidence of Pgp expression in each histological type of HCC have been observed. One analysis indicated that the incidence was lowest in the compact type of HCC, significantly lower than in the pseudoglandular and trabecular types [34] . In our study, however, Pgp mRNA and protein levels revealed no significant difference in the incidence of Pgp expression among histological types.
It has been established that MRP belongs to the superfamily of ABC transmembrane transporter proteins and can act as a glutathione S-conjugate efflux pump [35] . 99m Tc-MIBI was shown to be a substrate for MRP in vitro [36] . The abilities of Pgp and MRP transporters to wash out 99m Tc-MIBI have been reported to be similar in cell lines, in spite of different possible mechanisms of transport [37] . However, cardiac muscle showed a low L/Nwr of 99m Tc-MIBI and a low level of Pgp expression but a high level of MRP expression [38] . In our study, we did not observe any correlation between tumor accumulation or efflux of 99m Tc-MIBI and expression of MRP on protein level or mRNA level in liver cancer. The mechanisms are also unclear.
LRP has been identified as the vault protein involved in nucleocytoplasmic transport. Recently, subcellular accumulation of drugs was found to be localized in the cytoplasm, and only minimally in the nuclei, of LRP-overexpressing cells [39] . As an increased cytoplasmic concentration of a drug could intensify its contact with the membrane, we proposed that efflux of the drug might be enhanced in LRP-overexpressing cells. However, we did not find a correlation between tumor accumulation or efflux of 99m Tc-MIBI and expression of LRP. Subcellular accumulation of 99m Tc-MIBI within the mitochondria and cytoplasm of cells has been reported to be based on transmembrane electric potentials [40] . Therefore, efflux of 99m Tc-MIBI was rarely affected by expression of LRP.
To date, there have been few clinical studies on the relation between Pgp expression and 99m Tc-MIBI uptake in HCC. Because 99m Tc-MIBI is cleared through the liver, liver tumors are not easy to detect. To the best of our knowledge, our study was the first to show an inverse correlation between MDR1/Pgp expression and 99m Tc-MIBI uptake in HCC. However, 99m Tc-MIBI SPECT has some limitations, because it depends on optimal perfusion of tumor tissues. Poor MIBI penetration could be attributable to poor tumor perfusion in tumors larger than 2.5 cm, where tumor necrosis would be expected. Therefore, perfusion studies such as a Tl-201 scan could be used to eliminate the possibility of poor penetration.
In conclusion, our results suggest that L/Nd and L/Nwr of 99m Tc-MIBI are noninvasive and useful measures for detecting the expression of Pgp in patients with HCC.
A randomised, controlled, feasibility trial of an online, self-guided breathlessness supportive intervention (SELF-BREATHE) for individuals with chronic breathlessness due to advanced disease
Introduction SELF-BREATHE is a complex, transdiagnostic, supportive, digital breathlessness intervention co-developed with patients. SELF-BREATHE seeks to build capacity and resilience within health services by improving the lives of people with chronic breathlessness using nonpharmacological, self-management approaches. This study aimed to determine whether SELF-BREATHE is feasible to deliver and acceptable to patients living with chronic breathlessness. Methods A parallel, two-arm, single-blind, single-centre, randomised controlled, mixed-methods feasibility trial with participants allocated to 1) intervention group (SELF-BREATHE) or 2) control group (usual National Health Service (NHS) care). The setting was a large multisite NHS foundation trust in south-east London, UK. The participants were patients living with chronic breathlessness due to advanced malignant or nonmalignant disease(s). Participants were randomly allocated (1:1) to an online, self-guided, breathlessness supportive intervention (SELF-BREATHE) and usual care or usual care alone, over 6 weeks. The a priori progression criteria were ≥30% of eligible patients given an information sheet consented to participate; ≥60% of participants logged on and accessed SELF-BREATHE within 2 weeks; and ≥70% of patients reported the methodology and intervention as acceptable. Results Between January 2021 and January 2022, 52 (47%) out of 110 eligible patients consented and were randomised. Of those randomised to SELF-BREATHE, 19 (73%) out of 26 logged on and used SELF-BREATHE for a mean±sd (range) 9±8 (1–33) times over 6 weeks. 36 (70%) of the 52 randomised participants completed and returned the end-of-study postal questionnaires. SELF-BREATHE users reported it to be acceptable. Post-intervention qualitative interviews demonstrated that SELF-BREATHE was acceptable and valued by users, improving breathlessness during daily life and at points of breathlessness crisis. Conclusion These data support the feasibility of moving to a fully powered, randomised controlled efficacy trial with minor modifications to minimise missing data (i.e. multiple methods of data collection: face-to-face, telephone, video assessment and by post).
Introduction
Worldwide, >75 million people have breathlessness, including >90% of the 65 million people with severe lung disease [1], >50% of the 10 million with incurable cancer and 50% of the 23 million with heart failure [2,3]. More than two-thirds of those living with breathlessness have multimorbidities [4]. Breathlessness is a transdiagnostic problem, worsened by social, environmental and economic problems. The burden of breathlessness on individuals, family, society and health systems is increasing with population ageing and multimorbidity, amplified by the coronavirus disease 2019 (COVID-19) pandemic, with data suggesting that >40% of COVID-19 survivors have persistent (chronic) breathlessness [5,6]. Proactive approaches to management of breathlessness are required to build capacity and resilience within healthcare systems, especially given rising health and social care costs, and workforce challenges.
Clinical management of breathlessness is challenging; optimal pharmacological treatment of the underlying disease is the first step. Disease specific management alone does not guarantee symptom control. Breathlessness increases with disease progression, resulting in poor quality of life [7,8], increased disability and high health and social care costs [9]. This is often driven by repeated emergency department attendance and hospitalisations [10][11][12].
There is good evidence for breathlessness supportive services delivered face-to-face, which focus on education and nonpharmacological approaches to chronic breathlessness self-management [13,14]. Breathlessness supportive service models demonstrate cost effectiveness [15]. However, an implementation gap remains. Traditional face-to-face clinical consultations as standard are being re-examined peri-pandemic, and innovative healthcare solutions are sought. Online services may offer one possible solution. Internet connectivity is available to ⩾55% of the global population [16]. In the UK, 95% of the adult population are internet users, and this is expected to increase to 98% by 2025 [17]. Global data suggest that internet use, and in particular the use of video communication applications, have increased exponentially during the COVID-19 pandemic [16]. An increase in internet access and digital literacy in people with chronic respiratory disease has been observed in the UK during the COVID-19 pandemic [18]. Those living with chronic breathlessness due to advanced disease and who have internet access are willing to use online breathlessness self-management interventions, if available [19].
Disease-specific digital supportive online interventions are feasible and acceptable to patients with asthma [20] and COPD [21], demonstrating improved quality of life [20], inhaler technique and hospital admission rates [20,21]. However, pre-pandemic, others had reported challenges with recruiting, retaining and engaging patients [22]. To date, digital interventions have been respiratory disease specific, rather than symptom focused. To address the lack of face-to-face transdiagnostic breathlessness supportive services and online alternatives, SELF-BREATHE was developed.
SELF-BREATHE is a complex, transdiagnostic, supportive breathlessness digital intervention co-developed with patients following the Integrate, Design, Assess and Share (IDEAS) and Medical Research Council (MRC) frameworks [19,23], theoretically underpinned by Leventhal's Common-Sense Model of Self-Regulation [24][25][26]. SELF-BREATHE aims to build capacity and resilience within health services to improve the lives of people living with chronic breathlessness using nonpharmacological self-management approaches [19,23].
The aim of this study was primarily to determine whether a randomised controlled trial (RCT) of SELF-BREATHE would be feasible to deliver and acceptable to patients living with chronic breathlessness due to advanced disease.
Study objectives
To determine the feasibility of: 1) method of evaluation: via recruitment and consent rates, randomisation procedure, completeness of data collection; 2) SELF-BREATHE as an intervention: number of participants who logged in to SELF-BREATHE; log-in frequency; and acceptability of SELF-BREATHE.
Methodology
This study followed the MRC framework for developing and evaluating complex interventions [25], the MORECare statement on evaluating complex interventions in end-of-life care [27], the Consolidated Standards of Reporting Trials (CONSORT) statement (www.consort-statement.org) and the IDEAS (Integrate, Design, Assess and Share) framework for the development of digital behavioural change interventions [24].
Ethical approval
Ethical and local research and development approval was obtained prior to commencing this research (research ethics committee/Health Research Authority reference number 20/LO/1108). The study was registered at www.clinicaltrials.gov (identifier NCT04574050).
Study design
A single-blind (data checker/inputter), single-centre, parallel, two-arm RCT with participants allocated to either 1) the intervention group (SELF-BREATHE plus usual National Health Service (NHS) care) or 2) the control group (usual NHS care). The trial was evaluated using mixed methods (i.e. an RCT and qualitative interviews).
Setting
Patients were recruited from general and specialist clinics/services (virtual and face-to-face), at King's College Hospital NHS Foundation Trust (Denmark Hill and Princess Royal University Hospital sites) where there is high prevalence of chronic breathlessness, e.g. integrated respiratory teams, lung cancer, bronchiectasis and respiratory medicine clinical services/clinics.
Clinical staff checked the eligibility of patients during their routine hospital consultation (face-to-face or virtual). If eligible, clinical staff asked the patient for permission to pass their contact details to the research team, who provided them with a copy (paper or electronic) of the patient information sheet. The research team contacted the patient after a minimum of 24 h to discuss the study and answer any questions regarding the study and patient information sheet. If the patient was happy to take part in the study, a consent form was sent to them in the post. Potential participants received a telephone call ∼3 days later. During this call, the research team explained the content of the consent form and participant information sheet. They then answered any questions participants had about the study. Finally, the researcher asked the participant to consent verbally. Verbal consent was recorded.
Participants were sent a pre-paid return envelope to return the signed and dated copy of their consent form to the research team. Finally, a countersigned copy of the consent form was sent to participants in the post.
Population
Patients living with chronic breathlessness due to advanced malignant or nonmalignant disease.
Inclusion criteria

• Chronic breathlessness, defined as breathlessness that persists despite optimal pharmacological treatment of the underlying lung disease, including COPD, asthma, interstitial lung disease (ILD), chronic fibrotic lung disease following severe acute respiratory syndrome coronavirus 2 infection, bronchiectasis, cystic fibrosis and lung cancer.
• Modified (m)MRC dyspnoea score ⩾2 (short of breath when hurrying on the level or walking up a slight hill) [28].
• Access to a computer, tablet or smartphone with internet access.
• Able to provide informed consent.
Exclusion criteria
• Breathlessness of unknown cause.
• Primary diagnosis of chronic hyperventilation syndrome.
• Currently participating in a rehabilitation programme, e.g. pulmonary/cardiac rehabilitation.
Data collection
Research data were collected simultaneously in both groups: at baseline (prior to randomisation; T1) and at 6 weeks post-randomisation (T2), using self-completed postal questionnaires.
Patient demographic and characterisation data
At baseline, participants were asked to self-complete a demographic questionnaire which included age, sex, ethnicity, educational level, employment status, smoking status, MRC dyspnoea score, living status (living alone versus living with others) and self-reported confidence in using the internet measured on a 0-10 numerical rating scale (NRS) (0=no confidence, 10=extremely confident).
Feasibility outcomes

Primary outcome
The number of patients recruited into this study over a 12-month period. The recruitment target for this study was 40 patients.
Secondary outcomes
• Proportion of patients willing to be randomised.
• Proportion of patients remaining in the study at 6 weeks (primary end-point; T2).
• Proportion of, and reasons for, patients with missing data, e.g. research questionnaires.
• Frequency of SELF-BREATHE logins.
• Number of reported technical faults.
A priori progression criteria

Based on previous interventional studies in chronic breathlessness [11,12] and clinical services such as pulmonary rehabilitation, the following progression criteria were set for this study (a small checking sketch follows the list).
• ⩾30% of eligible patients given an information sheet consent to participation in the study.
• ⩾60% of the patients log on and access SELF-BREATHE within 2 weeks.
• ⩾70% of patients report the methodology and intervention as acceptable.
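A minimal sketch of checking these criteria against observed counts (the consent and log-on figures echo those later reported in the Results; the acceptability proportion is a placeholder):

```python
# Each entry: (observed proportion, a priori threshold)
criteria = {
    "consent rate": (52 / 110, 0.30),   # consented / eligible given info sheet
    "log-on rate": (19 / 26, 0.60),     # logged on within 2 weeks / allocated to arm
    "acceptability": (0.75, 0.70),      # proportion reporting acceptable (placeholder)
}

for name, (observed, threshold) in criteria.items():
    verdict = "met" if observed >= threshold else "not met"
    print(f"{name}: {observed:.0%} (threshold {threshold:.0%}) -> {verdict}")
```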
Patient-reported outcome measures
To quantify the affective and effective components of chronic breathlessness, the following validated and responsive patient-reported outcome measures were measured at both time points (T1 and T2).
• Breathlessness severity at rest, on exertion and worst over the past 24 h, assessed on a 0-10 NRS (0=no shortness of breath, 10=worst possible shortness of breath).
• Dyspnea-12, which quantifies breathlessness using 12 descriptors that tap into the physical and affective aspects of dyspnoea [29].
• The London Chest Activity of Daily Living scale, which measures the functional impact of breathlessness on activities of daily living, e.g. self-care [30].
• Confidence in breathlessness self-management, measured using the question "how confident are you that you can keep your shortness of breath from interfering with what you want to do?" scored on a 0-10-point scale (0=not at all confident, 10=totally confident) [31].
• Illness perception, measured using the Brief Illness Perception Questionnaire, a nine-item questionnaire designed to rapidly assess cognitive and emotional representations of illness [32].
• Acceptability of SELF-BREATHE, assessed via a Likert scale questionnaire (range 1-5).
Participants were asked to respond to specific questions reflecting the overall acceptability of SELF-BREATHE and its potential benefits [33].
Health service use

Self-reported health service questions captured general practitioner (GP; family doctor) contacts, planned and unplanned hospital/emergency department attendances, and hospitalisations, the main cost drivers associated with chronic breathlessness.
Explanatory qualitative interviews
Participants allocated to the intervention group (SELF-BREATHE) were invited to take part in semi-structured in-depth interviews to understand the perceived value of SELF-BREATHE; positive and negative experiences of using an internet-based intervention; and possible refinements or improvements. Interviews were audio-recorded, transcribed verbatim and analysed using conventional content analysis [33]. This approach commences with immersion in the data. After reading each transcript word by word, codes are derived to capture key thoughts and concepts and subsequently refined and sorted into meaningful categories and clusters. Analysis included deductive coding structured around the interview topic guide, and inductive analysis to extract any other pertinent findings specifically in relation to potential modifications and improvements to the intervention. Coding was led by the principal investigator (C.C. Reilly), a physiotherapist experienced in qualitative research, and supported by the qualitative lead for the project (K. Bristowe), a qualitative methodologist, who reviewed the analysis and conducted line-by-line coding on a sample of data extracts. The coding frame and summary findings were reviewed by the extended research team and subsequently refined.
Sample size
This study was designed to assess the feasibility of conducting a RCT of SELF-BREATHE to determine the optimum method of evaluation, and understand users' experiences and perceived value of SELF-BREATHE; therefore, a formal power calculation was not required. Sample sizes between 20 and 50 have been recommended for feasibility trials [26,27]. Using a pragmatic approach, a target sample size of 40 patients was set for this study, as it was deemed sufficient to assess feasibility parameters including recruitment rates, trial compliance and willingness to be randomised and to explore potential primary and secondary outcome measures with standard deviations.
We aimed to conduct qualitative interviews in a purposive sample of 10-12 patients, with recruitment continuing until sufficient information power was achieved to address the qualitative objectives [29]. This was to be determined by preliminary analysis of detailed reflective notes taken immediately after interviews, and constant comparison of new data with existing findings [17]. We anticipated that due to the depth of knowledge and information participants held about their experience of SELF-BREATHE and the trial itself, ∼10-12 participants would be required to provide adequate information power.
Randomisation and blinding
Data from the baseline interview were sent by secure email to the King's Clinical Trials Unit (CTU). The CTU online randomisation system allocated participants to study arms, independent of the research and clinical teams. Randomisation was done by minimisation [28] to balance three potential confounders between trial arms identified from published data [14]: cancer versus noncancer, breathlessness severity (NRS >3 or not) and presence (or not) of an informal caregiver.
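For readers unfamiliar with minimisation, the sketch below illustrates the general algorithm for the three factors used here. It is a simplified illustration, not the CTU's actual system (which the paper does not describe), and the probability parameter p_best is an assumed choice.

```python
import random

ARMS = ("intervention", "control")
# Minimisation factors from the trial: diagnosis (cancer vs noncancer),
# breathlessness severity (NRS >3 or not), informal caregiver (yes/no).
FACTORS = ("diagnosis", "severity_nrs_gt3", "has_caregiver")

# Running counts of allocated participants per arm, factor and level.
counts = {arm: {f: {} for f in FACTORS} for arm in ARMS}

def allocate(participant: dict, p_best: float = 0.8) -> str:
    """Assign the next participant to the arm that minimises covariate
    imbalance; the random element (p_best < 1) keeps allocation unpredictable."""
    imbalance = {
        arm: sum(counts[arm][f].get(participant[f], 0) for f in FACTORS)
        for arm in ARMS
    }
    ordered = sorted(ARMS, key=lambda a: imbalance[a])
    if imbalance[ordered[0]] == imbalance[ordered[1]]:
        arm = random.choice(ARMS)      # arms equally balanced: randomise
    else:
        arm = ordered[0] if random.random() < p_best else ordered[1]
    for f in FACTORS:                  # update running totals for the chosen arm
        counts[arm][f][participant[f]] = counts[arm][f].get(participant[f], 0) + 1
    return arm

# Hypothetical usage for one participant:
print(allocate({"diagnosis": "noncancer", "severity_nrs_gt3": True,
                "has_caregiver": False}))
```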
Following randomisation, the CTU team informed the SELF-BREATHE administrator of each patient's study arm via secure email. The administrator contacted participants to inform them of their allocated study arm. For participants allocated to SELF-BREATHE they were contacted by phone, email and letter providing them with their website username, temporary password, user guide and "go live" date. This was followed-up with a telephone call by the administrator, to ensure that the participant had been able to access SELF-BREATHE. The research assistant entering the data to the database was blind to trial arm allocation.
Participants allocated to the intervention group (SELF-BREATHE) continued to receive their usual NHS care, but they were also given a username and password, which provided unlimited access to SELF-BREATHE throughout the study duration.
SELF-BREATHE has seven core components, delivered via multimodal media (i.e. animations, written text, audio files, pictures and instructional videos).
1) Patient education about chronic breathlessness and self-management.
2) Patient self-monitoring of their breathlessness: breathlessness severity, distress due to breathlessness and impact of breathlessness on daily life, with real-time algorithm-based automated feedback.
3) Breathing exercises and techniques: methods to improve breathlessness self-management, e.g. breathing control exercises, pursed-lip breathing and body positions to relieve breathlessness.
4) Breathlessness self-management planning: patients can formulate a personalised breathlessness crisis plan, which includes the breathlessness management techniques to be used at points of breathlessness crisis, e.g. breathing control.
5) Improving physical activity: advice on how to increase daily activity levels, and a self-directed, self-monitored home exercise programme of bed-, chair- and standing-based exercises.
6) Personalised goal setting: self-guided support for patients to set personalised goals and track achievement and success.
7) Ask the expert: an inbuilt messaging service where patients can ask a question or get advice about any specific aspect of SELF-BREATHE (responses were provided by C.C. Reilly, consultant physiotherapist, within 48 h).
Behaviour-change techniques were identified from the development phase of SELF-BREATHE, which was conducted with patients [23]. The techniques identified include 1) information about health consequences; 2) self-monitoring; 3) demonstration and instruction of breathing techniques and home exercise programmes; 4) breathing technique practice and rehearsal sessions; 5) goal setting; and 6) action planning [30].
Participants were advised to log in to SELF-BREATHE within 72 h of receiving their login details, and over the 6-week period work through the seven component sections in a stepwise fashion, personalising and implementing suggested interventions, e.g. breathing control exercises, home exercise, etc. Establishment of these self-management techniques within participants' day-to-day lives was supported through optional interactive components of SELF-BREATHE, e.g. self-monitoring of their progress including self-reporting of their breathlessness severity, goal setting and attainment.
Participants were provided with a telephone number and email address where they could access help and support with any technical problems. Once participants had received their login details, they did not have any planned contact with the research team or health professionals until the 6-week follow-up time point, although they could use SELF-BREATHE's "ask the expert" function at any time.
SELF-BREATHE was hosted by UKFast, a tier III data centre with ISO 27001 certification, Information Governance toolkit level 2, in compliance with NHS data governance policy.
Control arm: usual NHS care
Patients randomised to the control group continued with their usual NHS care, as was available to them prior to entry into the trial. There are no widely used, NHS-commissioned breathlessness support services; therefore, the comparator was usual care.
All patients were registered with an NHS GP and had access to them as needed. All patients were under the care of a consultant respiratory physician, reviewing patients at regular intervals, usually every 6-12 months. All patients had access to NHS accident and emergency departments, where patients could attend by calling an emergency ambulance or by visiting the emergency department using their own transport. Emergency and planned hospital admission was available to all.
Patient and public involvement
Patient and public involvement (PPI) was embedded within the initial project proposal and throughout the study. Six PPI representatives from the Cicely Saunders Institute PPI group, King's College London (www.csipublicinvolvement.co.uk) participated in different aspects of SELF-BREATHE development and trial processes, including providing feedback on SELF-BREATHE prototypes, SELF-BREATHE content development, development of study-related materials such as participant information sheets, and attending trial steering group and management meetings.
Analysis
Simple descriptive statistics were used to summarise the number of patients referred, approached, consented and randomised (total and split by primary diagnosis), and summarised in line with the CONSORT statement. Proportions of participants who 1) logged in and used SELF-BREATHE and 2) remained in the study at 6 weeks (T2) were reported. In keeping with the feasibility design, baseline characteristics and clinical outcome data have been summarised descriptively with no formal statistical testing for superiority of SELF-BREATHE compared to usual care. Data were analysed and summarised in line with the a priori progression criteria.
Results
Between 18 January 2021 and 12 January 2022, 110 eligible patients were referred into the study and provided with a participant information sheet. 52 (47%) out of 110 consented and were randomised into the trial (figure 1), exceeding our recruitment target.
Participants had severe chronic breathlessness due to advanced respiratory disease. Participants were confident internet users, with the majority living in areas of high deprivation, and reported low self-confidence in their ability to manage their breathlessness (table 1). The mean±SD age was 63±13 years, with 31% of the sample aged >71 years; the mean±SD MRC dyspnoea score was 2.4±1, with 40.5% scoring MRC >4, demonstrating that patients across a wide range of ages and disease severities were recruited.
Of participants randomised to SELF-BREATHE, 19 (73%) out of the 26 logged in and used SELF-BREATHE. Individuals logged into SELF-BREATHE a mean±SD (range) 9±8 (1-33) times over 6 weeks. 36 (70%) of the 52 randomised participants completed and returned the end-of-study postal questionnaires at week 6 (figure 1). Missing data were greatest in the intervention arm (SELF-BREATHE) (figure 1). Those who did not complete the end-of-study postal questionnaires tended to be older and had more severe breathlessness-related disability (higher MRC scores) (table 1).
Reasons for missing data: two participants completed end-of-study postal questionnaires, but these were not received by the research team; two participants withdrew from the study after randomisation to the intervention arm; and study questionnaires for 12 participants were not returned (reasons unknown).
End-of-study patient-reported outcomes measured at 6 weeks are summarised in table 2. Pre-randomisation, all participants reported that the study design was acceptable. Two (7.7%) out of the 26 participants allocated to the control arm reported that they were "disappointed" to have been allocated to this arm, but were happy to continue their participation in the trial. SELF-BREATHE users reported it to be acceptable (table 3). SELF-BREATHE users reported that it improved both their understanding of chronic breathlessness and breathlessness self-management (table 3). Post-intervention qualitative interviews demonstrated that SELF-BREATHE was acceptable and valued by users, and provided interventions that they perceived to improve their breathlessness.
"My main goal [as part of SELF-BREATHE] was to go walking because I really enjoyed walking. Since I'd had COVID, that all came to a stop. I was battling [with breathlessness] to get to the front door. So, I've managed to get out. Obviously, at the beginning somebody had to be with me. But now, I've actually ventured out on my own with the dog." Female, asthma, 61-70 years "SELF-BREATHE is very directed at self-motivation, so I did it every other day or every day sometimes. One thing that I found very, very useful was the idea of using the fan when you're breathless, that really worked for me, so I do that constantly all the time now. The exercises were Data are presented as n, mean±SD or n (%). NRS: numerical rating scale (0−10); MRC: Medical Research Council; ILD: interstitial lung disease; COVID: coronavirus disease; LCADL: London Chest Activities of Daily Living questionnaire. # : higher score better; ¶ : higher score worse; + : functional disability due to breathlessness; § : self-reported. "I was sceptical I have to be honest, but after a week or so I started to see the benefits of how to control my breathing when moving around and walking, it [SELF-BREATHE] also encouraged me to set my own goals, one being to walk more steps in a day, I'm now above 5000 steps a day. It takes commitment from you to take part but is so very worth it. I know I can't be cured but it has certainly helped me in controlling my breathing and also my mental health." Male, ILD, 51-60 years Furthermore, SELF-BREATHE was found to be helpful at point of breathlessness crisis.
"It was good [SELF-BREATHE] because obviously when you have a breathing attack you automatically just clam up and panic. But it was nice to be able to have that information to hand [SELF-BREATHE]." "What did you find useful when you had these breathlessness attacks?" "The [breathing] techniques and everything, especially with the pursed lips, the relaxation. The bending over and breathing from the diaphragm that helped." Female, COPD, 41-50 years One participant struggled to use SELF-BREATHE due to macular degeneration, highlighting that digital/ online interventions may need additional consideration to increase accessibility.
Discussion
Key findings

This is the first feasibility RCT of an online, transdiagnostic, self-management, breathlessness supportive intervention (SELF-BREATHE) for individuals living with chronic breathlessness due to advanced disease. In line with our research objectives and a priori progression criteria, we found that an efficacy RCT of SELF-BREATHE using our methodology and procedures is likely to be feasible and acceptable to participants.
The feasibility of an efficacy RCT of SELF-BREATHE is supported by the completion of trial procedures: all patients who completed baseline measures were randomised (n=52). Of the 52 participants randomised, 36 (70%) completed postal questionnaires that were received by the research team. A systematic review and meta-analysis of palliative care trials (n=119) found an overall attrition rate of 29% (95% CI 28-30%); in 50.8±26.5% of cases, attrition was at random, and the most predominant reason was the patient being no longer contactable [34], which was in keeping with our findings.
Patient-reported outcomes may suggest benefit with regard to breathlessness severity, impact of breathlessness on activities of daily living and healthcare utilisation, in this underpowered study. These data provide testable hypotheses and evidence to support conducting a fully powered randomised controlled trial of SELF-BREATHE.
SELF-BREATHE was acceptable and valued by users, who reported observed benefits of using SELF-BREATHE during daily life and at the point of breathlessness crisis. This was despite the complexity and challenges of conducting this RCT during the COVID-19 pandemic. In addition, we propose minor modifications (i.e. multiple methods for data collection: face-to-face, telephone, online and via post), to minimise missing data.
Relevance of findings
High healthcare costs are associated with chronic breathlessness, influenced by frequent GP and emergency department attendances due to breathlessness crises [9,11]. Therefore, it is imperative to find new evidence-based cost-effective approaches. SELF-BREATHE could potentially improve patient-reported outcomes, in particular reducing breathlessness severity while preventing the need for emergency hospital attendance. However, a full-scale RCT would be needed to test this. This study provides both testable hypotheses and evidence to support an efficacy RCT of SELF-BREATHE.
The COVID-19 pandemic has increased the acceptability, use, normalisation and value of the internet for many patients living with chronic breathlessness due to advanced respiratory disease [23]. The changes in clinical service provision because of the COVID-19 pandemic has increased patients' willingness to use online self-management interventions such as SELF-BREATHE, a key influencing factor in the success of this study. SELF-BREATHE was valued by users as it provided them with interventions to improve their breathlessness during daily life and at the point of breathlessness crisis. SELF-BREATHE was co-developed with patients [19,23], underpinning its acceptability.
A reflection from conducting a trial in patients with chronic breathlessness and advanced disease during a pandemic is the importance of selecting primary and secondary outcome measures that are easily modifiable and valid to collect via different modalities. Having the option for face-to-face, telephone, virtual and postal completion of measures would be very useful in times of crisis or when research support or resources are low.
Strengths and limitations
There are some limitations to this feasibility study. The participants were not blind to group allocation and would have known that they were allocated to the intervention group rather than usual care, which is common in complex behavioural interventions [14,20]. The researcher entering the research data to the database was blind to group allocation.
Both males and females were well represented in our trial. However, our sample was predominantly White. Under-representation of minority ethnic groups in medical research is an ongoing issue in the United Kingdom and beyond [20,31]. Ensuring equity, inclusion and diversity must be a key priority going forward in planning subsequent trials of SELF-BREATHE. Widening participation and geographical reach of PPI members supporting the onward development of SELF-BREATHE may help engage those from minority ethnic groups. In addition, it is important that a future RCT of SELF-BREATHE is multicentred and inclusive of varied geographical and diverse socioeconomic backgrounds, including translation and dubbing of materials as appropriate.
We endeavoured to recruit participants across a broad demographic range; however, the reach of our research and of SELF-BREATHE can be improved. A consequence of the COVID-19 pandemic is increased digital literacy nationally and internationally [18]. Care must be taken to ensure that the digital transformation of services does not amplify healthcare inequality by creating a digital divide that fails to provide adequate health and social care to those who do not have the skills to benefit [20]. Our data highlighted that, for some individuals, complex multimorbidity and disability can make engaging with digital healthcare challenging. Therefore, it is important to consider SELF-BREATHE as a potential treatment option for those who are willing and able to engage with self-management and digital innovation.
It is a strength that this study could be conducted successfully during the COVID-19 pandemic, but the pandemic did increase missing data. Some missing data can be directly attributed to extrinsic factors: for example, two questionnaires completed in the intervention arm were posted but never received by the research team. This is both a limitation of the study design and reflective of the impact of COVID-19 on infrastructure, including postal services. This study highlights important methodological considerations for conducting an RCT during a pandemic (i.e. the importance of a multiple-methods approach to data capture to minimise missing data).
For 12 of the 16 participants from whom we did not receive end-of-study questionnaires, the reason for the missing data is unknown. Those lost to follow-up tended to be older, with higher breathlessness-related disability. One could hypothesise that for these older individuals with more severe disease, having to physically return the end-of-study postal questionnaires may have been too challenging. Support networks were reduced or became nonexistent during the pandemic, due to government-enforced restrictions and COVID-19 infection. Thus, for our participants, returning a postal questionnaire may have been impractical, or a low priority.
Another factor influencing the level of missing data was the lack of research support resources available during the COVID-19 pandemic. Indeed, the principal investigator (C.C. Reilly) and research nurses were redeployed to support the acute COVID-19 wards. In addition, high sickness rates across the clinical-academic workforce resulted in a lack of resources to consistently follow up on unreturned questionnaires. The pre-COVID study protocol was to conduct all baseline and follow-up research questionnaires within the participant's own home. This approach has been shown to be advantageous in minimising missing data in patients with advanced disease, and in helping to engage those who are housebound and unable to attend hospital research visits [14,35]. Our data provide new and valuable insights into the methodological challenges of conducting a clinical trial during a global pandemic. In comparison to face-to-face home visits, postal questionnaires can be cost- and resource-efficient. In hindsight, collecting follow-up data over the telephone or via online video call may have helped minimise missing data.
Conclusion
Conducting an RCT of SELF-BREATHE was feasible. SELF-BREATHE was acceptable to individuals living with chronic breathlessness due to advanced disease. These data support the feasibility and acceptability of an efficacy RCT of SELF-BREATHE, with modifications to minimise missing data (i.e. multiple methods for data collection: face-to-face, telephone, video and via post).
Computer-Based On-Line Assessment of Sterilizing Value and Heat Distribution in Retort for Canning Process
The global food industry has the largest number of demanding and knowledgeable consumers: the world population of seven billion inhabitants, since every person eats! This population requires food products that fulfill the high quality standards established by food industry organizations. Food shortages threaten human health and are aggravated by disastrous, extreme climatic events, such as the floods, droughts, fires and storms connected to climate change, global warming and greenhouse gas emissions, which modify the environment and, consequently, the production of food in the agriculture and husbandry sectors. This collection of articles is a timely contribution to issues relating to the food industry. They were selected for use as a primer, an investigation guide and documentation based on modern scientific and technical references. This volume is therefore appropriate for use by university researchers and practicing food developers and producers. The control of food processing and production is not only discussed in scientific terms; engineering, economic and financial aspects are also considered for the benefit of food industry managers.
Introduction
Heat processing of food is important to consumers because it is one of the principal food preservation techniques, allowing food to be stored and remain edible for a long period of time. One such technique requiring heat treatment is sterilization. Thermal sterilization of prepackaged canned foods in retorts was the most widely used preservation method during the twentieth century. Typically this method consists of heating food containers in pressurized retorts at specified temperatures for prescribed lengths of time (Teixeira and Tucker, 1997). The process time for a canned food is specified so that sufficient bacterial inactivation is achieved in each container in order to comply with public health standards for food safety; in addition, it minimizes the probability of food spoilage. Traditional methods for thermal process calculation or validation, such as the Ball and Stumbo methods, were developed and have been widely used ever since. However, they required the off-line input of tables and, consequently, a series of calculation steps that could result in too long or too short a heating process. At present there is much commercial software available for either on-line or off-line analysis of the sufficiency of heat treatment or process lethality (Fo), such as CAN-CALC and CALSoft™. Balaban (1996, cited by Teixeira et al., 1999) described that the CAN-CALC software needed fh (heating rate factor) and jh (heating lag factor) from a heat penetration test before it could predict internal center product temperatures in response to any dynamic boundary temperature for products of any shape and size, as shown in figures 1 and 2. Therefore, if the selected can was assumed to be at the slowest heating point of the retort, the simulated system Fo for food products heated by any combination of conduction or convection heat transfer could also be obtained. However, the software's performance emphasized its capability to deal with process deviations such as steam shutting off and back on. The CALSoft™ software (Anonymous, 2011) was designed specifically for conducting heat penetration and temperature distribution testing, evaluating the collected data, calculating a thermal process or vent schedule/come-up time, and evaluating process deviations. It was designed to be used with the CALPlex™ data logger and is claimed to be the most widely used commercial thermal processing software.
When compared to all other methods of calculating Fo, the general method is accepted as the most accurate. Its traditional disadvantage was its clumsiness, since the lethal rate has to be obtained at every time step: the smaller the time step, the more accurate the result. But when a computer is used to perform all of these calculations, Fo determination becomes rapid and simple.
Fig. 1. Parameters input in CAN-CALC software before simulating for system Fo (Balaban, 2004).
Fig. 2. Graphical display of calculated (predicted) and experimental temperature at the coldest point in CAN-CALC software (Balaban, 2004).
Many researchers (Lappo and Povey, 1986; Ryniecki and Jayas, 1993) have employed the accumulated process lethality to design system process control for batch steam retorts. A number of thermocouples were connected to the cans, and the mean temperature at the center of those cans was used for calculating process lethality in real time. Datta et al. (1986) used the numerical solution of two-dimensional heat transfer in a finite cylinder as part of the decision-making software in a computer-based retort control system. The actual retort temperature was read directly from sensors located in the retort and was continually updated with each iteration of the numerical solution. Heating was continued until the accumulated lethality reached a designated target value, so the process would always end with the desired level of sterilization. However, their solution of the model has some limitations, since only purely conduction-heated canned food was simulated. Later, many research works (Bichier et al., 1995; Teixeira, 1992) were carried out without these limitations. A Visual Basic computer simulation package for thermal process calculation was developed by Chen and Ramaswamy (2007). This graphical user interface (GUI) program was designed for training and testing of artificial neural network models and for studies of process design or other research purposes. It is applicable to different retort thermal processes with different types of food, such as solids, liquids and liquids containing particles, in containers of different shapes and sizes. The temperature in the container was solved using finite differences, and a numerical integration method was used for calculating process lethality and quality retention. There have been several attempts to develop control approaches for thermal process operation in food canning. Traditionally, control consists of maintaining specified operating conditions that have been predetermined from product or process heat penetration tests. The first control strategy was to employ real-time heat penetration data acquisition for intelligent on-line control of thermally processed foods; it was the most effective way to handle process deviations. Before starting the thermal operation, a number of product containers are instrumented with temperature probes, then filled and seamed. These containers are connected to a data logger through lead wires. The computer thus accesses the data from the data logger in real time and calculates the accomplished sterilizing value at the coldest spot of the container. The calculated accomplished sterilizing value is continually compared with the target value required at the end of heating. This strategy provides a very accurate calculation of process lethality and is able to handle process deviations without operator intervention and without any unnecessary degree of over-processing.
The most valuable feature of this control strategy is that it is nearly foolproof, since anything that might have gone wrong earlier in the product preparation is revealed and accounted for. However, the obvious disadvantage of this type of control strategy is its prohibitive cost (Simpson et al., 2007).
Another retort control strategy that many researchers have worked on is the on-line correction of process deviations, which integrates real-time data acquisition of the retort temperature, an on-line correction factor, and a mathematical heat-transfer model of the can temperature (Teixeira and Manson, 1982; Datta et al., 1986; Teixeira and Tucker, 1997). However, the strategy that is likely to be the future trend is a microcontroller-based retort control system, or simply on-line temperature measurement of the retort from a laptop computer: when the calculated accomplished lethality reaches the specified target value, the computer automatically shuts off or turns on the valves (Simpson et al., 2007). Awuah et al. (2007) discussed that the Can-Calc process simulation software was also tested for its performance and further integrated into a computer-based on-line control system by Noronha et al. (1995) and Teixeira et al. (1999). As a whole, the purpose of the software design and hardware control was based on the fact that foods should not be overheated, since overheating is detrimental to food quality and wastes energy and water. Thus heat should be applied minimally, or only as much as necessary. To achieve such a process, it is essential to have a proper machine or devices, together with an analysis method, to assess the efficacy of the heat treatment for any canned product and the heat distribution in the sterilizing device. However, in Thailand most of the hardware and software available now is imported, and is designed basically either for post-assessment (after foods are completely heated) or for assessment during heating. Up to now, most efforts have been directed at developing intelligent on-line retort control systems capable of rapid evaluation, on-line correction and printed documentation; the development of local devices or software for such purposes is still rarely found in Thailand. Thus the objectives of this research were to develop Visual Basic computer software that integrates on-line data acquisition, and to assess the sterilizing value or process lethality (Fo), as well as the heat distribution in the retort, while heating. The software can also be used as an educational tool for the study of thermal processing.
Materials and method
On-line data acquisition and sterilizing value (Fo) assessment
A Quick Basic program was designed and developed to obtain the interfacing data from a PCL-812PG card (multifunction data acquisition card) together with PCLD-889 boards (amplifier/multiplexer boards with signal conditioning and cold-junction sensing circuits), as in figure 3. Up to eight thermocouples could be instrumented to the loaded cans and hard-wired through the retort (figure 4). They were used to sense the analog temperature inputs from different locations in the retort, which were then transformed into digital temperature data via the PCL-812PG A/D interface card. Time-temperature history data from the tested cans, and from probes measuring the temperature in the retort, were thus recorded every 4.5 seconds and displayed graphically in the developed GUI software, coded in Visual Basic 6.0. Prior to the test, all temperature-reading probes were calibrated over the range from ambient temperature to 140 °C by comparison with the reading of a reliable portable digital thermometer measuring hot oil. The designed computer program is able to access the recorded Quick Basic data file, which provides the real-time time-temperature history in the cans and the retort, and to calculate the lethal rate every 4.5 seconds, using Simpson's rule of numerical integration to obtain the accomplished Fo dynamically during sterilizing. To evaluate the accuracy of this program, the time-temperature history data were also tested with F-ADDING, a computer program for calculating Fo coded by Rouweler (2000). The minimum accomplished Fo among those from all probes is taken as the system Fo, and it is compared continuously with the target Fo needed to end the process in the minimum process time. The flowchart of the algorithm for on-line Fo assessment is shown in figure 5. This approach has been accepted as unquestionably the most effective and safest basis for on-line correction when a process deviation occurs (Teixeira and Tucker, 1997; Simpson et al., 2007), since thermocouples were used to measure temperature on-line not only in the retort but also in the cans.
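As an illustration of the general-method calculation described above, the following sketch integrates the lethal rate over a uniformly sampled temperature history. Python is used here for clarity (the original software was written in Quick Basic and Visual Basic 6.0), and the conventional reference temperature of 121.1 °C and z-value of 10 °C are assumed:

```python
T_REF = 121.1   # reference temperature, degC
Z_VALUE = 10.0  # z-value, degC

def lethal_rate(temp_c):
    """Lethal rate L = 10**((T - Tref) / z): minutes at Tref equivalent
    to one minute at the current temperature."""
    return 10.0 ** ((temp_c - T_REF) / Z_VALUE)

def accomplished_f0(temps_c, dt_s=4.5):
    """Integrate the lethal rate over a uniformly sampled time-temperature
    history (composite Simpson's rule when the interval count is even,
    trapezoidal rule otherwise). Returns F0 in minutes."""
    n = len(temps_c) - 1                       # number of intervals
    L = [lethal_rate(t) for t in temps_c]
    if n >= 2 and n % 2 == 0:                  # Simpson needs an even n
        s = L[0] + L[-1] + 4 * sum(L[1:-1:2]) + 2 * sum(L[2:-1:2])
        integral_s = s * dt_s / 3.0
    else:                                      # trapezoidal fallback
        integral_s = dt_s * (sum(L) - 0.5 * (L[0] + L[-1]))
    return integral_s / 60.0                   # seconds -> minutes

# The system F0 is the minimum accomplished F0 over all instrumented cans:
# system_f0 = min(accomplished_f0(h) for h in can_histories)
```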
Heat distribution performance in retort
A small vertical retort, 38.8 cm in diameter, and an electric boiler were constructed for the test, as shown in figure 4. The interfacing devices (interface cards, thermocouples, connectors, computer and peripheral equipment) were assembled with the vertical retort and electric boiler. One thermocouple probe (probe #1) was connected to the end tip of the mercury thermometer in the retort, and one (probe #8) to the center of a can that had been hot-filled with distilled water and then seamed. The remaining six probes were distributed appropriately inside the retort, as shown in figure 6. The temperatures from the eight thermocouple probes were displayed graphically on-line while heating. In addition, the probe numbers giving the minimum and maximum temperatures, as well as the maximum temperature difference, were indicated throughout the heating process. Sterilization temperatures of 110 and 121 °C were chosen to investigate the heat distribution in the retort.
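A minimal sketch of the min/max bookkeeping just described, for a single probe scan; the function and variable names are hypothetical, and Python is again used in place of the original VB6:

```python
def heat_distribution_summary(scan_c):
    """For one scan of probe temperatures (probe #1..#8), report which
    probes read the minimum and maximum and the spread between them."""
    t_min, t_max = min(scan_c), max(scan_c)
    return {
        "min_probe": scan_c.index(t_min) + 1,  # probes numbered from 1
        "max_probe": scan_c.index(t_max) + 1,
        "max_difference_c": round(t_max - t_min, 2),
    }

# Example scan during holding at a nominal 110 degC:
print(heat_distribution_summary([109.2, 110.1, 108.3, 109.8,
                                 109.0, 108.5, 110.0, 109.4]))
# -> {'min_probe': 3, 'max_probe': 2, 'max_difference_c': 1.8}
```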
Process design and minimum heat accumulation in canned products
Heat accumulation in canned products during sterilizing can be investigated either from their heat penetration profiles or from the accomplished Fo values. Three thermocouple probes were therefore connected to cans placed in the basket at the three locations that tend to be the cold points of the system: probes #3, #4 and #5 were attached to the cans located at positions 1, 3 and 5 in the basket, respectively (figure 7), while probe #6 was exposed directly to the temperature of the heating medium in the retort. The cans were hot-filled with concentrated pineapple juice, seamed, and put into the basket at the specific locations in the retort mentioned above; the retort was then fully loaded with the rest of the cans. A specified target sterilizing value was chosen according to the product characteristics (table 1) in the GUI window; this target value could be added to the file by pressing the update button. Sterilization was then commenced by removing the air in the retort and replacing it with steam. The start button in the Fo determination GUI window was pressed to begin recording time-temperature data via the interfacing devices until the minimum accomplished Fo (system Fo) reached the specified target Fo. The process schedule was thus recorded automatically and displayed graphically. Since concentrated pineapple juice is an acid food (pH < 4.6), mild sterilizing is usually sufficient and could therefore be applied; in this case the specified target was chosen as F(z = 10 °C, Tref = 121.1 °C) = 0.6-0.8 minutes (Rouweler, 2000). However, in order to demonstrate process design with this educational tool, the experiments were carried out with the holding temperature during sterilization set at 110 °C and 120 °C.
Table 1. The characteristics of the tested product.
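The recording loop described above (accumulate Fo sample by sample and stop steaming once the system Fo reaches the target) can be sketched as follows. The names are hypothetical, and for a multi-can run the same accumulation would be kept per probe, with the minimum compared against the target:

```python
T_REF, Z = 121.1, 10.0  # degC; conventional F0 reference values

def monitor_until_target(readings_c, dt_s=4.5, target_f0_min=0.7):
    """Accumulate F0 incrementally (trapezoidal rule) as temperature
    samples arrive; return the elapsed time (s) at which the target is
    first reached, or None if it never is."""
    f0_min, prev_lethal = 0.0, None
    for i, temp_c in enumerate(readings_c):
        lethal = 10.0 ** ((temp_c - T_REF) / Z)
        if prev_lethal is not None:
            f0_min += 0.5 * (prev_lethal + lethal) * dt_s / 60.0
        prev_lethal = lethal
        if f0_min >= target_f0_min:
            return i * dt_s  # signal the operator to stop steaming
    return None
```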
Coldest spot in container
To validate the capability of the program when used to find the coldest spot in a container, the interfacing devices were assembled as before. One of the cans was hard-wired with two thermocouple probes: probe #4 at one quarter of the central axis from the bottom of the can, and probe #5 at the mid-point of the central axis, as shown in figure 8, while probe #6 measured the temperature of the medium in the retort. This can was filled with baby corn in saline solution, a solids-in-liquid type of canned food, and then seamed. The can was placed at the slowest heating point of the equipment. After the retort was fully loaded with cans, the sterilization process was carried out at 121 °C for a certain period of time, as before, from which the minimum accomplished Fo could be obtained.
Portable educational tools for computer-based off-line assessment of sterilizing unit
The objective of this part was to design a computer program for assessing process lethality from interfacing data acquired via a USB A/D board. The driver and interfacing program for the data logger, a National Instruments USB-9211A (4-channel, 24-bit; figure 9), were installed on a notebook computer. The commercial interfacing software sensed the voltage signals from one to four type-T thermocouples and transformed them into digital data stored on the notebook computer (figure 10). Thermocouple probes were connected to three cans located in the basket at the center of the bottom layer, the most probable slowest heating point, while one thermocouple (probe #1) was exposed in the autoclave to measure the medium temperature during sterilizing. A temperature of 121 °C for 15 min was chosen as the demonstration sterilizing condition. Temperatures from the four channels were recorded and stored in a text file (*.txt) every 2 seconds from the start of the autoclave run until the end of the cooling process. QuickCalFo (Chamchong et al., 2008) was software designed to perform off-line process lethality assessment, written in Visual Basic 6.0. Input data of temperature and time during sterilization were retrieved from the stored text file (figure 12), while the target Fo for each product with a specific can size was pre-entered and saved into the program, or selected from the list of available data, before starting the analysis for the system Fo. The results could be displayed as a temperature-time record in a spreadsheet, as well as heat penetration curves and lethal rate profiles. Fo values from each temperature-time profile were calculated by the Simpson's rule general method, and the minimum value was shown in the nearby combo box as the system (accomplished) Fo. The accuracy of the Fo calculation from this software was validated as before, by comparing it with that obtained from the F-ADDING program coded by Rouweler (2000).
Computer-based on-line assessment of sterilizing value
The software package for process design was divided into three parts: (1) the main window of the GUI, which receives the input parameter, i.e., the target sterilizing value. The user can choose this value from a pull-down combo box, or add/delete and update entries to have more choices for later use (figure 12).
(2) A graphical window of temperature and time profiles, with eight corresponding text boxes to display the accomplished sterilizing values from up to eight probes (figure 13). One text box at the bottom displays the system sterilizing value, which is the minimum among the accomplished sterilizing values from the individual probes. The system sterilizing value increases while the heating and cooling process is under way and ultimately reaches the designated target sterilizing value; the program then displays a text message at the bottom of the GUI telling the operator to stop steaming, and the total heating process time is shown in the text box in the upper right corner. The temperature and time record can be used for process design or as documentation in a quality assurance system. (3) A spreadsheet of the temperature-time record from the eight thermocouple probes, which displays the minimum and maximum temperatures, as well as the maximum temperature difference (max - min), at each time interval throughout the heating process (figure 14).
Heat distribution in retort
In practice, a heat distribution test in a retort should be carried out before assessing the sterilizing value of the process, in order to validate the slowest heating point. Heat distribution in the retort (part 3 mentioned above) was therefore observed from the on-line temperature records obtained at different locations in the equipment. For sterilizing at 110 °C in the small retort unit, the distributed heat could be indicated by the temperature values at positions 1-8 in the retort, corresponding to probes #1-8 (figure 6). In addition, the minimum heating reading, from the thermocouple probe connected to the can located at the slowest heating point, could be quantified as the accomplished Fo for the system and then used as the indicator for stopping steaming. The display was therefore able to assure the minimum heat treatment occurring while heating. The coldest point of the system was expected to come from the can instrumented with a thermocouple and located at position 8, i.e., in the upper layer of cans at the center of the basket. To guarantee sufficient heat treatment of the products, a heat distribution test must be carried out whenever the machine is installed or the process/product is modified. For the same retort, it was found that heat distribution was more uniform when the holding temperature of sterilization was moderate, at 110 °C, than at 121 °C. The temperature difference between the maximum and minimum at any holding time (the temperature deviation) was between 1.8 and 3.1 °C (1.6-2.8%) at a sterilizing temperature of 110 °C, but between 5 and 14 °C (4.1-11.6%) at 121 °C. This is plausible, since heating at 121 °C requires a higher heating rate, but more stagnant points or dead legs appear. According to the steam flow pattern in this retort, the probes located at positions 3 (on top of the can at the center of the basket) and 6 (in the upper layer, between cans) were found to be at stagnation points, showing minimum convective heat transfer in each run at the higher sterilizing temperature. At the lower sterilizing temperature, however, the minimum heating point changed: it was at position 5 or 2 (top of the retort) at the early stage of the holding period, and then moved to position 3 for the rest of the holding time. This is plausible because the larger amount of steam used during heating at 121 °C narrows the stagnation area. In addition, with the smaller amount of steam used at the 110 °C sterilizing temperature, the probes at positions 5 and 2 in the top layer of cans initially came into contact with steam more slowly than at any other location; after heating at this temperature for a while, the heat reached the top of the retort and the temperatures at positions 5 and 2 were no longer the minimum. However, the exit point of the incoming steam was at the bottom, so whenever the steam valve was not fully opened, the thermocouples in the lower layer were affected, or heated, first. Therefore, the accomplished Fo obtained from the can at the center of the upper layer of the basket was suitable as the system Fo, because the nearby probe (probe 3) had shown the minimum heat received.
Process design or schedule and minimum heat accumulation in canned products
The process design, or schedule, for an acid food like concentrated pineapple juice is shown in table 2. The process times obtained for sterilizing the acid food at 110 and 120 °C were 10 and 4.5 minutes, respectively, excluding the cooling period. The process time at the higher sterilizing temperature (120 °C) was shorter than at the lower temperature (110 °C), since both were calculated on the basis of the same specified target Fo (0.7 minutes), i.e., the same area under the heat penetration curve before stopping steaming. Although the specified target sterilizing value for this product was chosen to be 0.7 minutes, the system Fo obtained was 1.29 and 0.94 minutes for sterilizing at 110 and 120 °C, respectively. This was because the calculation included the come-up time, the holding period and the cooling period; slight over-processing could occur in each run because heat was removed slowly during cooling. An improved, properly designed cooling system in the retort would give better product quality in terms of organoleptic properties. Heat accumulation in the canned products can be observed from the accomplished Fo values obtained at different locations in the retort. For sterilizing at 110 °C, figure 13(a), the accumulated heat, indicated by the accomplished Fo at positions 1, 3 and 5, was 1.29, 1.91 and 2.29 minutes, respectively, while that outside the can was 3.53 minutes. This confirms that the coldest point was at position 1, in the bottom layer of cans at the center of the basket. To guarantee sufficient heat treatment of the products, a heat distribution test must be carried out whenever the machine is installed or the process/product is modified. According to the steam flow pattern of this retort, shown in figure 15, the cans located at positions 1 and 5 were expected to be at the stagnation points and to have the minimum convective heat accumulation in each run. However, the minimum accumulated heating point was found to change from position 1, at the bottom center of the basket, to position 5, at the upper center of the basket, while sterilizing at the higher temperature. This is plausible because the larger amount of steam used during heating at 120 °C narrows the stagnation area. In addition, during cooling, the can at position 5 (in the top layer of cans) came into contact with the air blown into the retort to balance the pressure immediately after steaming stopped. Therefore the system Fo was obtained from the can at the center of the upper layer of the basket, due to its smaller heat accumulation; however, cold water could significantly enhance heat removal once it rose to that position in the retort.
Coldest spot in a container
The sterilizing values at two different points could be used as an indicator to validate the coldest spot in a container undergoing the sterilizing process. As shown in figure 8, the can was instrumented with two thermocouple probes and hard-wired through the retort: probe #4 at one quarter of the central axis from the bottom of the can, and probe #5 at the mid-point of the central axis, while probe #6 measured the temperature in the retort. It was found that the temperature rise and fall recorded by probes #4 and #5 were almost identical and hard to distinguish.
Table 3. Validation of the coldest spot in the can by sterilizing value; the probe exposed to the heating medium in the retort gave Fo = 9.54 min.
Portable educational tools for computer-based off-line assessment of sterilizing unit
The on-line temperature and time logging was displayed as in figure 16, and retrieved off-line as a text file, as shown in figure 17. Fo was then calculated from these data, which came from the thermocouple probes connected to three cans and one exposed to the heating medium in the autoclave. As shown by QuickCalFo in figure 18, the slowest heating point in the retort corresponded to thermocouple probe #3, which was attached to the can located at the center of the bottom layer of the basket in the autoclave. Thus the accomplished sterilizing value, or system Fo, was taken as the minimum, 12.81 minutes, among the values from the four thermocouple probes. In the Fo analysis frame box at the bottom right corner, a message after the analysis indicated whether the product had received sufficient heat treatment. To display temperature and time in the spreadsheet on the left side of the figure, the data time interval was first selected at the bottom (every 1, 2 or 5 minutes), and the temperature and time at that interval were then shown in the spreadsheet of the GUI. Heat penetration and lethal rate profiles from all four probes were also displayed graphically, with the x and y ranges of the two graphs adjusted automatically according to the process temperature and the time span used. In addition, this Visual Basic form could be printed out for food safety documentation.
Conclusion
A computer program for on-line data acquisition and accomplished Fo assessment was developed in the MS Visual Basic 6.0 language. This computer-based on-line device was able to evaluate the coldest point of the cans in the retort and to calculate the process lethality, or system Fo, dynamically while sterilizing. Excessive over- or under-processing can be avoided in process design or scheduling by integrating such a device for on-line accomplished Fo determination during preprocessing. The hardware and software setup for computer-based on-line assessment of a sterilizing unit would be needed whenever new products, processes or equipment are introduced. Non-uniform heat distribution always exists in a retort; the designed program was able to evaluate the heat distribution by recording and displaying the maximum/minimum temperature deviation at different locations in the retort while holding the sterilization temperature. The lower sterilization temperature of 110 °C gave a lower temperature deviation (1.6-2.8%) in the retort during the holding period, compared to 4.1-11.6% at 121 °C; thus a lower temperature tends to lower the deviation.
A Corpus-based Comparative Study on the Consistency of Writer's Style and Translator's Style
On the basis of self-built corpora, and combining quantitative methods with qualitative analysis, this paper makes a comparative study of the original English works written by Lin Yutang and their Chinese versions translated by Zhang Zhenyu, a well-known translator of Lin Yutang's English works. Based on statistics mainly at the linguistic level, including the content word ratio, STTR, h-point, entropy and activity, this study concludes: first, the original writer, as well as the translator, maintains a consistent style across his writing or translating; second, in spite of nuances in certain parameters, the translator's style is highly consistent with that of the original writer; third, the quantitative approach, especially quantitative stylistics, is applicable to and offers higher objectivity for translation studies.
Introduction
Style is taken as a genre, a variant of the language, or a text type, and Baker was the first to see translator's style as a kind of "thumbprint", which can be explored through the translator's regular linguistic and non-linguistic features [1]. From an empirical perspective, the style of a proper and popular target text (TT) is in line with that of the source text (ST), and the translator is expected to try his or her best to "copy" the style of the ST; but to what extent the translator realizes that goal remains a difficult field to explore, not to mention that the widespread conclusions of translation criticism are mostly reached through qualitative analysis. According to Saldanha [2] and Huang [3], translator's style can be studied through two models, the source-text type (S-type) and the target-text type (T-type); the T-type, which puts emphasis on the subconscious choices of a translator, is taken as the main model in this study.
Quantitative stylistics and its related research
Quantitative stylistics, as a branch of stylistics, focuses on quantitative statistics of the expressive features of the works of a certain writer or period, in which corpora and quantitative analysis both play important roles. Leech and Short suggested that stylistic study often entails the use of quantification to back up judgements which might otherwise appear subjective rather than objective [4]. Corpora, as standard representative samples of varieties or languages, provide a particularly valuable basis for in-depth study of the features of any text; Baker also called for the use of large computerized corpora in translation studies [5]. Quantitative analysis enables researchers to distinguish genuine reflections of the behaviour of a language from chance occurrences, thus getting a glimpse of the normality or abnormality of the text or the language through relatively limited statistics, which are much more reliable and generalizable [6].
In China, quantitative stylistics research can be divided into two fields: the analysis and comparison of writing styles or features, and translator identification. In the first field, Huang et al. successfully applied the achievements of quantitative linguistics to text clustering and language style comparison, laying a theoretical foundation for the quantitative study of Chinese works [7][8][9]. Jiang et al. analysed the quantitative features of, and differences between, human and machine translations of English passive sentences, identifying translation universals and salient features of machine translation [9]. Zhan et al. [10][11][12] made writing style comparisons from different quantitative aspects based on corpora of different works at home and abroad. In the second field, Zhan et al. used corpus-based quantitative research to improve translator identification via the quantitative features of language structure, thus promoting the objectivity and interpretability of translation studies [13][14].
Over the years, Chinese translation works have been given less attention, and more effort should be devoted to research on Lin Yutang's original works and their Chinese translations. Due to the difficulty of corpus building, among other factors, quantitative research on translational novels in Chinese still plays a small part.
Research questions
On the basis of previous research, parameters that reveal the linguistic features of texts were chosen to compare the original English works by Lin Yutang and their Chinese translations by Zhang Zhenyu. The questions to be answered are: whether there is consistency in the linguistic features and style within the original literary works and within the translations, respectively; and whether there is consistency in the linguistic features and style between the original works and the Chinese translations.
Corpora
As Table 1 shows, this paper is based on a corpus of three original English novels by Lin (hereinafter referred to as Corpus A) and a corpus of the corresponding Chinese translations by Zhang (hereinafter referred to as Corpus B). The STs are broadly comparable in text length, so the influence of text length on the research is minimized.
Content word ratio and STTR
In this study, content words include verbs, nouns, adjectives and adverbs in both the source language and the target language, and the content word ratio is a proper indicator of word richness and variety. The standardized type/token ratio (STTR) is also an important indicator of word richness regardless of text length, so it is more reliable when texts of different lengths are compared.
H-point, TC and entropy
The h-point is a critical point, or boundary, in the rank-frequency distribution of words in a text: most of the words before the h-point are function words, while most of the words after it are content words, which can be used to measure linguistic type and stylistic features. Thematic concentration (TC) reflects the extent to which the text focuses on a particular topic [15]. Entropy, calculated here from the probability of occurrence of each word in the text, can also be used to show the richness of the words in the text.
Activity
Liu describes activity as "the ratio of the total number of verbs in the text to the sum of the total number of verbs and adjectives" [16]. Zörnig et al. expressed the interaction between these two parts of speech in the following formula [17]:

Q = V / (V + A)

where Q is the activity, V is the frequency of verbs and A is the frequency of adjectives. If Q > 0.5, the text can be considered active; if Q < 0.5, it can be considered descriptive; and if Q = 1, the text is regarded as extremely active [18].
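A compact sketch of how these four measures can be computed from a tokenised text. Python is used for illustration; the chunk size, the simplified h-point (no interpolation at ties) and the assumption that verb/adjective counts come from an external POS tagger are all choices of this sketch, since the paper does not describe its exact tooling:

```python
from collections import Counter
import math

def sttr(tokens, chunk=1000):
    """Standardised type/token ratio: mean TTR over fixed-size chunks
    (a common convention; the paper does not state its chunk size)."""
    chunks = [tokens[i:i + chunk]
              for i in range(0, len(tokens) - chunk + 1, chunk)]
    if not chunks:                      # text shorter than one chunk
        return len(set(tokens)) / len(tokens)
    return sum(len(set(c)) / len(c) for c in chunks) / len(chunks)

def h_point(tokens):
    """First rank r in the descending rank-frequency list with r >= f(r);
    a simplified h-point (interpolation at ties omitted)."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    for rank, f in enumerate(freqs, start=1):
        if rank >= f:
            return rank
    return len(freqs)

def entropy(tokens):
    """Shannon entropy (bits) of the word-frequency distribution."""
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(tokens).values())

def activity(n_verbs, n_adjectives):
    """Q = V / (V + A): Q > 0.5 suggests an 'active' text, Q < 0.5 a
    'descriptive' one. Counts must come from a language-appropriate
    POS tagger, so they are taken as inputs here."""
    return n_verbs / (n_verbs + n_adjectives)
```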
Content words ratio and STTR
As Figure 1 shows, the content word ratio and the STTR of Corpus A are generally lower than those of Corpus B, indicating that the translator tends to use richer and more varied words in the translations.
The dispersion of the content word ratio and the STTR of Corpus A is 1.46 and 1.38, respectively, showing a high degree of consistency in the original writer's habits, while that of Corpus B is 0.37 and 1.04, which indicates an even higher degree of consistency in the translator's choices during translation.
H-point, TC and entropy
The values of the h-point, TC and entropy in Corpora A and B show low dispersion (Corpus A: 5.1, 0.156, 0.002; Corpus B: 5.3, 0.224, 0.006), and both the original writer and the translator maintain a highly consistent writing or translating style. The h-point of Corpus A is higher than that of Corpus B to varying degrees, which means that the original writer uses relatively more function words in his works, while the translator chooses more content words in his translations. The TC and entropy of Corpus B are accordingly higher than those of Corpus A, and these three values also illustrate that the translator prefers a richer vocabulary and a more explanatory translation for the readability of target readers.
Activity
The dispersion of the activity value in Corpus A is 0.03, and in Corpus B 0.01, showing little variation in either the original writer's or the translator's choices in the narrative features of the texts. Corpus B has a higher degree of activity, which, as Professor Lian summarized, results from the preference for verbs in Chinese [19]; based on previous quantitative research by Xu (2021), the activity of translational Chinese is 0.82, which is close to the result in this study (average activity 0.824).
The original writer and the translator each maintain consistency in their writing or translating style, and despite nuances in certain parameters resulting from the translator's initiative, the consistency of the linguistic features and style of the original writer and the translator is confirmed by an SPSS nonparametric test (p = 0.317 > 0.05), which shows no significant differences between the statistics obtained. Therefore, the linguistic features and style of the original writer and the translator share great similarities, and the translator strives to "copy" the original writing style of the ST, achieving a strong reader response, as seen in Chinese literary circles and in such comments on Zhang's translations as "fluent and natural" [20].
Conclusion
Taking Lin Yutang's English works and Zhang Zhenyu's Chinese translations as an example, this paper uses quantitative analysis to compare the styles of the original writer and the translator; on the basis of the statistics, both the writer and the translator maintain a highly consistent style across their works and translations. This study serves as convincing evidence that the writing style of the original writer, Lin Yutang, is highly consistent with that of the translator, Zhang Zhenyu, and testifies to the excellence of Zhang Zhenyu, a translator who dedicated his whole life to translation. However, this study mainly focuses on the lexical level, and more statistics on linguistic features are needed to depict a complete picture of the style of a given writer or translator, leaving room for further study in this field.
Figure 1: Comparison of Quantitative Data between Corpus A and Corpus B
On the Development of Autonomous Vehicle Safety Distance by an RSS Model Based on a Variable Focus Function Camera
Today, a lot of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. The autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar and radar sensors, including camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and to clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable focus function camera, which can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor, is considered. The variables of the RSS model suitable for the variable focus function camera were defined, their values were determined, and the safe distances for each velocity were derived by applying the determined values. In addition, after considering the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
Introduction
Today, many studies on autonomous driving are being conducted, and vehicles with autonomous driving functions are rapidly becoming common [1]. According to the WHO (World Health Organization), traffic accidents killed more than one million people in 2013 [2]. Therefore, the safety of autonomous vehicles is becoming more important, and efforts to improve reliability and prevent traffic accidents are essential [3]. ACC (adaptive cruise control), an automotive control algorithm that ensures vehicle safety by maintaining distance from the vehicle ahead, is the most widely used of the ADASs (advanced driver assistance systems) designed to assist the driver while driving [4,5].
Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model to prevent accidents involving autonomous vehicles [6]. The RSS model is a mathematical model for determining whether an autonomous vehicle is at fault in an accident and for ensuring safety. The RSS model defines a safe distance for as many driving situations as possible and defines what constitutes a dangerous situation; moreover, it suggests an appropriate response to avoid each defined risk situation. Table 1 shows the configuration of the Mobileye RSS model. The RSS model covers about 99% of the accident scenarios presented by the NHTSA (National Highway Traffic Safety Administration); tests were conducted on 37 accidents, and the test results were confirmed to be suitable [7].
Related Work
De Iaco, R. et al. calculated the safest distance to avoid collisions between vehicles when overtaking a stopped preceding vehicle, or when turning to change lanes, based on the RSS framework. Using the RSS model, the authors demonstrated that the vehicle behaves reasonably while remaining safe [8,9].
Zhu, M. et al. identified a car-following model suitable for use in Shanghai by calibrating car-following models against SH-NDS (Shanghai Naturalistic Driving Study) data. The authors found that the IDM (intelligent driver model) showed the lowest errors and the best overall performance; this study confirmed its suitability for microscopic traffic simulation [10]. Xu, X. et al. extracted the safety-critical car-following events from the SH-NDS data and corrected the RSS model using the NSGA-II algorithm. As a result, it was confirmed that safety performance increased compared to the pre-correction model or a human driver [11]. Li, L. et al. presented a new collision avoidance strategy for the car-following method to maintain traffic safety and efficiency [12].
Liu, S. et al. confirmed that RSS, as a safety assurance model, can be applied to ensure the safety performance of various autonomous driving algorithms. The influence of the RSS model on a vehicle cut-in situation was evaluated based on a cut-in scenario with a time-to-collision (TTC) of less than three seconds; the RSS model was confirmed to be superior to a human driver and to ACC alone [13].
Zhao, C. et al. confirmed that vehicle-to-vehicle communication to improve the lane change performance of RSS is efficient and reasonable, as it increases the utilization of limited road resources [14]. Khayatian, M. et al. introduced a new definition of RSS rules applicable to all scenarios and proposed a CAV (connected autonomous vehicle) driving algorithm [15]. However, Zhao, C. et al. [14] rely on vehicle-to-vehicle communication, and Khayatian, M. et al. [15] likewise presuppose that vehicle-to-vehicle (V2V) communication is available to CAVs. The applicability of these approaches is therefore limited where V2V communication is not yet deployed.
Orzechowski, P.F. et al. presented a safety verification technique for situations where roads merge or intersect, ensuring safety for the leading vehicle and an appropriate gap and timing for the following vehicle [16].
Chai, C. et al. evaluated the safety of the RSS model from the perspective of a human driver using a human-in-the-loop driving simulation. It was confirmed that the RSS model is much safer than the human driver or ACC model [17].
Problem Definition
When analyzing previous studies, the variables used in the RSS model were determined using SH-NDS data. The SH-NDS data have some limitations for generalizing across driving environments, road conditions, and driver habits, because the number of drivers surveyed is relatively small and only results obtained from a specific area are used [8]. In this paper, a specific autonomous vehicle is fixed to overcome this limitation: by fixing the vehicle, the vehicle-related variables in the RSS model are also fixed. On this basis, the safe distance of the RSS model is measured, and the effectiveness of the RSS model is verified through a comparative analysis with the safe distance [18] obtained through conventional ACC.
The purpose of this paper is to determine the parameters of the RSS model so that it can be applied to a variable focus function camera, and to confirm the suitability of the determined variables by applying them to the model. This study is expected to contribute to improving the efficiency and reliability of a variable focus function camera to which the RSS model is applied. Figure 1 shows the research method and procedure. The composition of this paper is as follows: Section 2 discusses the necessity of an RSS-model-based variable focus function camera. Section 3 describes how to build a model for the variable focus function application, and Section 4 discusses how to verify the suitability of the RSS model application. Finally, Section 5 presents the conclusion of this paper.
Limitations of the ACC System as an ADAS
People are positive about ADAS features like ACC [19]. The ACC system serves collision detection and collision mitigation [20]. Heinzler et al. noted that the number of vehicles equipped with ADASs that use various sensors, such as lidar, camera and radar, to assist the driver is gradually increasing; they selected the lidar sensor as the subject of their study, analyzed the effect of the weather environment on the lidar sensor, and presented classification results [21]. ACC, one of the ADAS functions, recognizes obstacles ahead, or the current driving situation, and warns the driver of a dangerous situation or brakes by itself to avoid a collision [22,23]. The AEB system automatically applies emergency braking to avoid a collision with the vehicle in front while ACC is in operation [24], and various sensors are used to operate it [25]. Abou-Jaoude, R. showed that an ACC system using a radar sensor controls the speed based on the presence of a vehicle in front, as well as the distance and time interval to it [26]. Pananurak, W. et al. proposed an ACC system with a fuzzy control algorithm applied to intelligent vehicles, confirming that the vehicle could be controlled to move at a desired velocity and that the gap to the leading vehicle could be controlled [27]. Figure 2 shows the principle of ACC operation: if the relative longitudinal distance between vehicles is larger than a safe distance, the rear car closes the gap (Figure 2, top); however, if the relative longitudinal distance is shorter than a safe distance, the rear car has to decelerate (Figure 2, bottom). Ploeg, J. et al. confirmed that safety was maintained through the implementation of CACC (cooperative adaptive cruise control), based on a wireless communication link between the ACC sensor and the vehicle, while a short time interval between vehicles was maintained; as a result, they argued that traffic throughput can be increased and that fuel consumption and exhaust emissions can be expected to decrease [28]. However, since the ACC system only judges the situation ahead, it does not operate during reckless cut-ins or on sharp curves [29]. Moreover, according to Ploeg, J. et al., implementing CACC presupposes that a V2V system is in place.
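A minimal sketch of the gap-keeping decision illustrated in Figure 2. The constant-time-gap policy, the 1.8 s time gap and the 10% hysteresis band are illustrative assumptions, not parameters from any cited ACC implementation:

```python
def acc_command(gap_m, ego_speed_mps, time_gap_s=1.8, standstill_m=5.0):
    """Compare the measured longitudinal gap with a desired safe gap and
    return a coarse speed command for the following vehicle."""
    safe_gap = standstill_m + time_gap_s * ego_speed_mps
    if gap_m > 1.1 * safe_gap:      # gap larger than needed: close it
        return "accelerate"
    if gap_m < safe_gap:            # gap shorter than safe: open it
        return "decelerate"
    return "hold speed"

# At 25 m/s with a 40 m gap, safe_gap = 5 + 1.8 * 25 = 50 m:
print(acc_command(gap_m=40.0, ego_speed_mps=25.0))  # -> "decelerate"
```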
Limitations of Distance Measurement Using Sensors
To detect vehicles or obstacles ahead, not only camera sensors but also other perception sensors, such as radar and lidar, are utilized [30]. The limitations of a single sensor can be compensated for by fusing multiple sensors for recognition, and various studies have examined how to fuse the data from multiple sensors [31]. To facilitate the detection and tracking of moving objects, radar, lidar, and three vision sensors have been combined [32]. A system that fuses the information of a lidar and a single camera sensor to detect pedestrians in urban environments has also been presented; fusing multisensor information makes an object detection system more robust and safer in practical applications because it does not depend on a single sensor [33]. However, there are also disadvantages to using multiple sensors. Radar sensors have limitations in identifying pedestrians, and detection is difficult when a pedestrian, or various objects close to the vehicle, overlap [34]. In addition, lidar sensors perform poorly in adverse weather, such as snow and rain, and because they are expensive, it is difficult to apply them to current production vehicles [35,36].
Importance of Applying Variable Focus Function Camera RSS Model
To overcome the limitations of using heterogeneous sensors in autonomous vehicles, the need for a variable focus function camera has emerged. The variable focus function camera can change its angle of view and can therefore cover the ranges currently handled by radar and lidar. By using a single camera with a variable angle of view as the perception sensor, the limitations of existing radars and lidars can be overcome. The RSS model is an interpretable, white-box mathematical model for ensuring the safety of autonomous vehicles, proposed by Mobileye [3]; it represents the minimum requirements that all autonomous vehicles must meet. By applying the RSS model to the variable focus function camera sensor, the safety of autonomous vehicles can be ensured.
Features of RSS Model and Variable Focus Function Camera
Recently, Mobileye, an Israeli subsidiary of Intel that develops autonomous vehicles and ADASs (advanced driver assistance systems), proposed the RSS model, a mathematical model for judging whether an autonomous vehicle was negligent in the event of an accident [37]. The RSS model is constructed from five rules. Following Shalev-Shwartz, S. et al., Equation (1) gives the RSS longitudinal safety distance and Equation (2) the lateral safety distance [6].
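The equation bodies did not survive text extraction; for reference, the standard RSS safe-distance formulas from [6] are reconstructed below using the symbol definitions that follow (a reconstruction, not the paper's own typesetting; a^lat_min,brake denotes the minimum lateral braking deceleration and μ the lateral fluctuation margin, both defined in [6]):

$$d^{\mathrm{long}}_{\min} = \left[\, v_r\,\rho + \tfrac{1}{2}\,a_{\max,\mathrm{accel}}\,\rho^2 + \frac{(v_r + \rho\, a_{\max,\mathrm{accel}})^2}{2\,a_{\min,\mathrm{brake}}} - \frac{v_f^2}{2\,a_{\max,\mathrm{brake}}} \,\right]_+ \qquad (1)$$

$$d^{\mathrm{lat}}_{\min} = \mu + \left[\, \frac{v_1 + v_{1,\rho}}{2}\,\rho + \frac{v_{1,\rho}^2}{2\,a^{\mathrm{lat}}_{\min,\mathrm{brake}}} - \left( \frac{v_2 + v_{2,\rho}}{2}\,\rho - \frac{v_{2,\rho}^2}{2\,a^{\mathrm{lat}}_{\min,\mathrm{brake}}} \right) \right]_+ \qquad (2)$$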
Here, [x]_+ := max{x, 0}; v_f and v_r are the velocities of the front and rear cars, respectively; ρ is the response time of the rear car; a_max,brake is the braking deceleration of the front car; and a_max,accel and a_min,brake are the maximum acceleration and minimum braking deceleration of the rear car, respectively. Moreover, v_{1,ρ} = v_1 + ρ·a^lat_max,accel and v_{2,ρ} = v_2 − ρ·a^lat_max,accel. Therefore, the safe distance between two vehicles suggested by Mobileye is determined by the velocities and the acceleration/deceleration of the two vehicles, together with the response time of the rear car. As shown in Figure 3, d^long_min represents the safety distance in the longitudinal direction when two vehicles are traveling in the same direction and the following vehicle is an autonomous vehicle. As shown in Figure 4, with the autonomous vehicle on the left, d^lat_min represents the safe distance between the right side of the autonomous vehicle and the left side of the other vehicle.
If the distances in the longitudinal and lateral directions simultaneously satisfy d^lat < d^lat_min and d^long < d^long_min, the two vehicles are in a dangerous state because the minimum safety distance is violated [38].
The variable focus function camera changes its angle of view to cover the ranges perceived by existing radars and lidars. Moreover, using a single camera saves space compared to using three cameras, one per perception distance. Even if the field of view is partially obscured by raindrops or mud, the image can be recovered through an artificial intelligence algorithm. Figure 5 shows a schematic diagram of the concept of a variable focus function camera.
Conventional autonomous vehicles use different types of sensors, such as lidar and radar, as well as cameras, according to the recognition distance [33]. However, the use of various sensors increases the complexity of the system and the possibility of errors. The purpose of the variable focus function camera is to recognize objects at various distances with one camera, taking over the functions of the various sensors currently used for recognition.
Identification of RSS Model Criteria for Variable Focus Function Application
By specifying the vehicle to which the variable focus function camera is applied, the acceleration/deceleration terms in the RSS safety distance formula can be fixed. Moreover, the speed takes a set value depending on the driving environment. If these values are substituted into the RSS formula, the RSS safety distance is determined by the response time. In this study, the GENESIS GV80 was selected as the target vehicle. The GENESIS GV80 is offered in three models: 2.5 T gasoline, 3.5 T gasoline, and 3.0 diesel. Table 2 shows the time it takes each model to reach 100 km/h and the maximum acceleration derived from it, calculated as a = ∆v/∆t.
Derive RSS Models and Identify Safe Distances by Speed
By substituting the maximum acceleration results for each model in Table 2 into the RSS safety distance Equation (1) presented by Mobileye, an RSS safety distance calculation equation suitable for the variable focus function camera was derived. The maximum acceleration and minimum deceleration values are assumed to be the same because both are determined by the following vehicle with the autonomous driving function. The maximum deceleration of the leading vehicle and the response time of the autonomous vehicle were cited from [39]. Equations (3)-(5) represent the derived RSS safety distance formulas for the 2.5 T gasoline, 3.5 T gasoline, and 3.0 diesel models, respectively. Table 3 shows the safe distance calculated for each velocity of the leading and following vehicles using Equation (4), derived for the 3.5 T gasoline model; the rows give the velocity of the leading vehicle and the columns the velocity of the following vehicle.
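To make the calculation concrete, the sketch below (Python; illustrative, not the authors' code) evaluates the RSS longitudinal safe distance of Equation (1) with the 3.5 T gasoline parameters assumed later in this paper (a_max,accel = a_min,brake = 5.05 m/s², a_max,brake = 8 m/s², ρ = 1 s); all function and variable names are our own.

```python
# Sketch: RSS longitudinal safe distance (Equation (1)) with the GV80 3.5 T
# gasoline parameters assumed in this paper. Speeds in km/h, distance in m.
def rss_long_min(v_front_kmh, v_rear_kmh, rho=1.0,
                 a_accel=5.05, a_brake_min=5.05, a_brake_max=8.0):
    v_f, v_r = v_front_kmh / 3.6, v_rear_kmh / 3.6   # convert to m/s
    d = (v_r * rho
         + 0.5 * a_accel * rho ** 2
         + (v_r + rho * a_accel) ** 2 / (2 * a_brake_min)
         - v_f ** 2 / (2 * a_brake_max))
    return max(d, 0.0)                                # the [x]+ clamp

# Equal-speed safe distances over the HDA operating range (cf. Tables 3 and 4)
for v in range(30, 140, 10):
    print(f"{v:3d} km/h -> {rss_long_min(v, v):5.1f} m")
```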
Scenario Setup for RSS Model Validation
The target is recognized by fusing the far-, middle-, and close-distance images, and the appropriate command is issued. When a target is recognized, the relative distance and speed of the leading vehicle are measured. The RSS safety distance is then compared with the relative distance to the leading vehicle: if the RSS safety distance is larger than the relative distance between the two vehicles, the vehicle decelerates; if it is smaller, the vehicle accelerates and narrows the gap to the vehicle in front.
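A minimal sketch of this decision rule (illustrative only; rss_long_min() is the helper defined in the sketch above):

```python
# Illustrative longitudinal decision rule for the validation scenario:
# compare the measured gap with the RSS safe distance for the current speeds.
def longitudinal_command(gap_m, v_lead_kmh, v_ego_kmh):
    d_safe = rss_long_min(v_lead_kmh, v_ego_kmh)
    return "decelerate" if gap_m <= d_safe else "accelerate"

print(longitudinal_command(120.0, 100.0, 100.0))  # gap above ~89 m safe distance -> accelerate
```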
An HDA (highway driving assist) situation was assumed as the scenario for verifying the RSS model. HDA is a driver assistance system used when driving at velocities of 30-130 km/h, while the ACC and the LKAS (lane keeping assist system) are in operation. The driving environment was assumed to be clear and sunny, with sufficient visibility. In addition, driving on a straight highway road was assumed, and situations in which a vehicle suddenly cuts in were excluded. The velocities of the leading vehicle and the autonomous vehicle were assumed to be equal, with a_max,brake = 8 m/s², a_max,accel = a_min,brake = 5.05 m/s², and ρ = 1 s. Table 4 shows the safety distance for each velocity.
Identification of Response Time Using RSS Safety Distance
The relationship between driving speed and safety distance is shown in Table 5 [40]. Assuming HDA, when the speed of the autonomous vehicle exceeds 100 km/h, the safe distance according to Table 5 is greater than 100 m. Applying this safety distance to the RSS model and calculating inversely yields a response time ρ of about 1 s. The actual response time, calculated as the sum of the recognition, judgment, and control times, must be kept below this value.
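The inverse calculation can be reproduced directly: with equal speeds, Equation (1) is quadratic in ρ, so the response time follows in closed form (a sketch under the parameter values assumed above; names are illustrative):

```python
# Sketch: invert the RSS longitudinal formula for the response time rho,
# given a required safe distance. With v_f = v_r = v the formula reduces to
# A*rho^2 + B*rho + C = d_req, solved here in closed form.
import math

def rho_from_distance(d_req_m, v_kmh, a=5.05, b=5.05, c=8.0):
    v = v_kmh / 3.6
    A = 0.5 * a + a ** 2 / (2 * b)            # rho^2 coefficient
    B = v + a * v / b                          # rho coefficient
    C = v ** 2 / (2 * b) - v ** 2 / (2 * c)    # constant term
    disc = B ** 2 - 4 * A * (C - d_req_m)
    return (-B + math.sqrt(disc)) / (2 * A)

print(f"{rho_from_distance(100.0, 100.0):.2f} s")  # ~1.2 s, i.e. about 1 s
```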
To identify the response time in light of the camera's output cycle, the TRW S-CAM3 camera, equipped with the Mobileye solution, was selected. The TRW S-CAM3 is composed of three lenses with viewing angles of 25° (far), 52° (middle), and 150° (close). The output period of the TRW S-CAM3 camera sensor data is about 83 ms. The response time required for recognition, judgment, and control can then be obtained by back-calculation, taking the output cycle of the camera into account.
Validation of Response Time Using Safety Distance of Variable Focus Function Fitted RSS Model
It is assumed that the vehicle in front stops while the HDA is in operation. As shown in Table 5, when driving at 100 km/h on the highway, the safe distance is about 100 m. When the autonomous vehicle detects the leading vehicle, it measures the relative distance and velocity. If the relative distance between the two vehicles falls to the RSS safety distance, the autonomous vehicle issues a deceleration command until it stops, and it changes the camera sensor's field of view from far to near. Depending on when the vehicle in front is recognized within the output cycle, the data acquisition time varies from 83 ms to 166 ms. In addition, changing the angle of view of the variable focus function camera with a stepping motor takes 8 ms. At 100 km/h, the overall response time available for a safety distance of 100 m is about 1 s. The worst-case perception time is 174 ms, the sum of the data output time, 166 ms, and the 8 ms needed to change the camera's angle of view. This is a valid result because it lies within 0.2 s, a general cognitive response time. Figure 6 shows a timeline analysis of the response time of each component for a specific situation while the HDA is in operation.
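A back-of-envelope check of this timing budget (our reading of the numbers in the text; the values are assumptions taken from the TRW S-CAM3 output cycle and the stepping-motor actuation time):

```python
# Timing budget check for the HDA stop scenario.
output_cycle_ms = 83                    # TRW S-CAM3 data output period
acquisition_ms = 2 * output_cycle_ms    # worst case (166 ms), depends on detection timing
view_switch_ms = 8                      # stepping-motor angle-of-view change
perception_ms = acquisition_ms + view_switch_ms
print(perception_ms, perception_ms <= 200)  # 174 True: within the 0.2 s perception budget
```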
Results
As the number of vehicles with self-driving functions increases, the safety of autonomous vehicles is becoming an issue. Recently, Mobileye proposed a white-box mathematical model, called RSS, to secure the safety of autonomous vehicles and clarify responsibility in the event of an accident. ACC, a widely used autonomous driving function, is an excellent system, but it has several problems; for example, when there is a sharp curve or a vehicle suddenly cuts in, the ACC system does not operate. The RSS model is therefore useful for compensating for these limitations of ACC. Autonomous vehicles use multiple sensors, such as radar, lidar, and cameras, for perception; using multiple sensors increases the complexity of the system and the chance of errors. To address this problem, we identified the model variables for applying the RSS model to a variable focus function camera that performs the role of multiple sensors with a single camera sensor. Through this study, we derived the safe distance for each velocity, and, after accounting for the data acquisition time and the camera angle change time relative to the object recognition timing, we confirmed that the results were valid.
The Reproducibility of 31-Phosphorus MRS Measures of Muscle Energetics at 3 Tesla in Trained Men
Objective Magnetic resonance spectroscopy (MRS) provides an exceptional opportunity for the study of in vivo metabolism. MRS is widely used to measure phosphorus metabolites in trained muscle, although there are no published data regarding its reproducibility in this specialized cohort. Thus, the aim of this study was to assess the reproducibility of 31P-MRS in trained skeletal muscle. Methods We recruited fifteen trained men (VO2peak = 4.7±0.8 L min−1/58±8 mL kg−1 min−1) and performed duplicate MR experiments during plantar flexion exercise, three weeks apart. Results Measures of resting phosphorus metabolites were reproducible, with 1.7 mM the smallest detectable difference in phosphocreatine (PCr). Measures of metabolites during exercise were less reliable: exercising PCr had a coefficient of variation (CV) of 27%, compared with 8% at rest. Estimates of mitochondrial function were variable, but experimentally useful. The CV of PCr t1/2 was 40%, yet much of this variance was inter-subject, such that differences of <20% were detectable with n = 15, given a significance threshold of p<0.05. Conclusions 31-phosphorus MRS provides reproducible and experimentally useful measures of phosphorus metabolites and mitochondrial function in trained human skeletal muscle.
Introduction
Magnetic resonance spectroscopy (MRS) is unmatched in its ability to measure tissue biochemistry in intact humans without the need for invasive procedures or the administration of potentially harmful radioactive isotopic tracers. In particular, it has been used extensively to monitor 31-phosphorus (31P) metabolites in both cardiac [1] and skeletal muscle [2]. Due to the large volume and easy accessibility of the skeletal muscles of the human leg, 31P-MR spectra can be acquired from a localized volume of leg muscle with excellent temporal (>1/s) resolution. Thus 31P-MRS can be used to measure steady-state concentrations of high-energy phosphorus metabolites in resting skeletal muscle, and phosphorus metabolite kinetics during exercise and recovery, in a single experiment. It has long been known that the kinetic constants during work transitions provide an insight into the energy metabolism of the exercising (and recovering) muscle (cf. [3]). Therefore resting phosphorus metabolites, and their kinetics during transitions from exercise to rest, have been widely used to assess muscle energetic status and energy metabolism, both in healthy subjects [4,5,6,7,8] and in patients with a wide range of diseases [9,10,11,12,13]. Indeed, in many cases MRS may well provide the only accurate in vivo measure of metabolites with rapid turnover in humans and experimental animals.
There have been two recent reports on the reproducibility of 31P-MRS measurements in healthy untrained human skeletal muscle [14,15]. These recent papers added to an existing body of work using a range of experimental approaches, summarized in Table 1. Results from these diverse approaches have been quite consistent in showing that 31P-MRS is generally very reproducible, although one of the more comprehensive studies [14] suggested that estimates of mitochondrial function (made using kinetic data) are less so, at least compared with measurements of resting phosphocreatine concentration. In addition, the reproducibility studies that have been conducted using repeated testing in a single subject [16,17,18], although helpful in uncovering measurement or intra-individual variability, are unable to detect either systematic bias or population-dependent (inter-individual) variability.
Investigators in other fields have found that there are differences (both improvements and decrements) in the reproducibility of experimental methods when applied to exercise-trained subjects as opposed to untrained controls [19,20]. As with sedentary or moderately active subjects, 31P-MRS is widely used to measure phosphorus metabolites and kinetics in the muscles of trained subjects, yet Table 1 shows that there are no published data reporting directly on the reproducibility of the method in this specialized cohort. However, the data that do exist suggest that both the inter- and within-subject variability of 31P-MRS indices of mitochondrial function may differ markedly in athletes; for example, recently published data suggest that the coefficients of variation of several estimates of mitochondrial oxidative rate differ more than sevenfold between sedentary and endurance-trained subjects [21]. Thus, the aim of this study was to assess the reproducibility of MRS measures of 31-phosphorus metabolism in trained human skeletal muscle. We hypothesized that, despite differences in oxidative capacity between a trained and an untrained cohort, 31P-MRS would continue to provide reliable, repeatable, and useful measures of muscle biochemistry in vivo.
Ethics Statement
The Central Oxfordshire Research Ethics Committee approved this study and fully-informed written consent was obtained from all subjects. All protocols were conducted in accordance with the Declaration of Helsinki.
These data were acquired as part of a larger study. We recruited fifteen trained men from the Oxford rowing crews. We chose rowers for our study based on their participation in an aerobic sport that requires significant recruitment of the plantar flexion muscles of the lower leg [22]. Standard MR contraindications were excluded by history and physical examination. Peak aerobic capacity (VO2peak) was measured as described in detail elsewhere [23,24]. Ventilatory threshold was calculated according to the V-slope method [25], using software supplied for use with the Metamax system (Metasoft 3, Cortex, Biophysik, Germany). Subsequent MR experiments, the details of which have been published elsewhere [23,24,26], were performed twice, three weeks apart. Subjects were instructed to maintain normal training patterns for the two weeks prior to each measurement. Each subject performed plantar flexion exercise in a Siemens Trio 3T clinical MR system (Siemens, Erlangen, Germany), with a 6 cm dual-tuned 31P and 1H surface coil placed under the widest part of the right gastrocnemius. A special wooden housing was constructed to ensure that coil positioning was consistent and repeatable. Positioning was further refined through the use of scout images. Prior to the acquisition of 31P MR time-series data, three baseline scans were acquired to allow calculation of correction factors for partial saturation due to the short repetition time (TR) in the main acquisition, and for nuclear Overhauser enhancement (NOE). The acquisition parameters for the 31P time-series were TR 500 ms, TE 0.35 ms, bandwidth 2000 Hz, 10 averages, 512 data points, excitation flip angle 25°, and 10 rectangular NOE pulses with pulse duration 10 ms, inter-pulse delay 10 ms, and excitation flip angle 180°. The MR exercise protocol was: 5 min rest, 5 min very light exercise (warm-up), 7 min recovery, 5 min at 5 W, 7 min recovery, 5 min at 6 W, 5 min recovery. Exercising values are the means of the last minutes of bouts 2 and 3. Figure 1 shows a typical set of spectra, acquired at 5-second intervals during the recovery phase.
Spectra were processed using jMRUI version 2.2 [27] and quantified using a non-linear least squares algorithm [28]. The resting ATP concentration was taken as 8.2 mM [2]. The chemical shift of the inorganic phosphate (Pi) peak, relative to phosphocreatine (PCr), was used to determine intracellular pH. Intracellular [ADP] was calculated making the standard assumption that the creatine kinase reaction was at equilibrium, and correcting for pH [29]. The half-time of PCr recovery after moderate exercise (PCr t1/2) was determined by fitting a monoexponential equation to the PCr recovery data. Figure 2 shows a typical fit to experimental data. The maximum rate of mitochondrial ATP synthesis (QMAX) was extrapolated from the end-exercise [ADP] and the corresponding rate of PCr resynthesis as in [30]. Technical issues caused a loss of data for calculation of QMAX in a single subject; thus n = 14 for this and associated measurements.
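As an illustration of two of the standard calculations described here, the sketch below (a minimal sketch, not the authors' jMRUI pipeline) computes intracellular pH from the Pi chemical shift and the PCr recovery half-time from a monoexponential fit; the pH calibration constants are a commonly used literature calibration, and the recovery data are synthetic, both assumptions on our part.

```python
import numpy as np
from scipy.optimize import curve_fit

def ph_from_shift(delta_ppm):
    """Intracellular pH from the Pi chemical shift (ppm, relative to PCr).
    Calibration constants are a commonly used literature choice, assumed here."""
    return 6.75 + np.log10((delta_ppm - 3.27) / (5.69 - delta_ppm))

def pcr_recovery(t, pcr_end, d_pcr, k):
    """Monoexponential recovery: PCr(t) = PCr_end + dPCr * (1 - exp(-k t))."""
    return pcr_end + d_pcr * (1.0 - np.exp(-k * t))

# Fit synthetic 5-s-resolution recovery data and report the half-time
t = np.arange(0.0, 300.0, 5.0)
pcr = pcr_recovery(t, 18.0, 12.0, 0.025) + np.random.normal(0.0, 0.5, t.size)
popt, _ = curve_fit(pcr_recovery, t, pcr, p0=[pcr[0], pcr[-1] - pcr[0], 0.02])
print(f"pH at 4.9 ppm: {ph_from_shift(4.9):.2f}; PCr t1/2: {np.log(2) / popt[2]:.0f} s")
```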
Table 1. Summary of published data regarding the reproducibility of 31P-magnetic resonance spectroscopy in skeletal muscle (in chronological order).

Statistical analyses were conducted using PASW 18.0 (SPSS Inc., Chicago, USA). Reproducibility was assessed using techniques drawn from [31] and [32]. Heteroscedasticity was treated as significant if the correlation between the means of the repeated measures and the absolute difference between them was positive and significant at p<0.05. In these cases, data were log transformed. A paired t-test was used to assess test-retest bias. The standard deviation of the differences was taken as an index of test-retest variability. In addition to these traditional methods, 95% confidence intervals of the differences between means were calculated. In the case of heteroscedastic data, 95% confidence intervals were calculated for the log-transformed data; when antilogged, these confidence limits are ratios, and are reported as such. In the main text, data are reported as means (SD).
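A sketch of this analysis workflow (illustrative; the authors' exact CV definition is not stated, so the within-subject CV below is one common choice), assuming paired arrays m1 and m2 holding the first and second measurements:

```python
import numpy as np
from scipy import stats

def reproducibility(m1, m2, alpha=0.05):
    d = m2 - m1
    pair_mean = (m1 + m2) / 2.0
    _, p_het = stats.pearsonr(pair_mean, np.abs(d))   # heteroscedasticity check
    _, p_bias = stats.ttest_rel(m2, m1)               # test-retest bias
    cv = 100 * np.std(d, ddof=1) / np.sqrt(2) / pair_mean.mean()
    ci = stats.t.interval(1 - alpha, len(d) - 1,
                          loc=d.mean(), scale=stats.sem(d))
    sdd = (abs(ci[0]) + abs(ci[1])) / 2               # smallest detectable difference
    return {"het_p": p_het, "bias_p": p_bias, "cv_%": cv, "ci": ci, "sdd": sdd}
```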
Results
The subjects (n = 15) were aged 22 (1) years and weighed 82 (9) kg (Table 2). They had a peak aerobic capacity of 4.7 (0.8) L min−1 (58 (8) mL min−1 kg−1) and a ventilatory threshold of 75 (12)% of peak power, confirming their trained status. Table 3 summarises the results of our analysis, giving the means and standard deviations of the first and second measures in each case, accompanied by the grand coefficient of variation (CV) where applicable. For example, muscle phosphocreatine content was measured as 30 (3) mM on the first visit and 29 (2) mM on the second; the CV for this measurement was 8%. Figure 3 shows the group means (and standard errors) for phosphocreatine concentration in recovery from dynamic exercise. Table 3 also shows the results of our tests of heteroscedasticity, as recommended by Nevill and Atkinson [32]. In two cases (exercising [Pi] and QMAX) there was convincing evidence of heteroscedasticity (i.e., a significant positive correlation between the absolute magnitude of the difference between two observations and their mean). These data were log-transformed and tested for heteroscedasticity again; in both cases the heteroscedasticity was resolved.
We looked for test-retest bias (for example, instrument drift or a learning effect) using a paired t-test comparing the first and second measurements. Table 3 shows that there was no significant test-retest bias in any of the measures taken. The standard deviation of the differences between the first and second measures ('Error (SD of diff.)' in Table 3) is an index of measurement variability (as described in detail by Bland and Altman [31]). We extended this approach by calculating the 95% confidence intervals for the differences between the first and second measures. These confidence intervals give the minimum limits for the detection of changes at a significance threshold of p<0.05. For example, in our trained cohort of fifteen, an increase in resting muscle [PCr] of >0.5 mM or a decrease of >2.9 mM would have been significant at the p<0.05 level. In the case of log-transformed data these confidence limits were antilogged to provide a 95% confidence 'ratio'. For example, in our cohort an increase in exercising [Pi] of >24% or a decrease of >6% would have been significant at p<0.05. In all cases the 95% confidence intervals were not symmetrical, owing to nonsignificant bias. If one assumes that bias was not present (as the data suggest), then the confidence intervals can be corrected. Thus a change in resting muscle [PCr] of ±1.7 mM ((0.5+2.9)/2) could reasonably be assumed to be detectable at p<0.05 using our methods and with n = 15. Likewise, the minimum detectable change in exercising [Pi] would be ±15%.

Table 3. Reproducibility of 31P-MRS in trained skeletal muscle (n = 15).
Discussion
We studied the reproducibility of 31P-MRS indices of muscle metabolism in a trained cohort for, to our knowledge, the first time. We found that measures of resting metabolites were the most repeatable, with CVs of 8% (PCr) and 17% (Pi). Exercising metabolites were more variable (27% (PCr) and 47% (Pi)). Finally, measures of mitochondrial function such as PCr t1/2, while highly variable (CV = 40%), were still experimentally useful, providing a relative detection threshold of <20% (n = 15, p<0.05).
Training (and recovery) stimulates adaptive physiological changes that vary widely in their timing. Thus it seems reasonable to suggest that the coefficients of variation of a range of physiological parameters measured in athletes may differ from those in sedentary subjects. This hypothesis has led researchers in other areas to specifically study the effect of exercise training on the reproducibility of various experimental methods [19,20]. Bingisser et al. [19] found significant differences in reproducibility between measures taken in trained vs. untrained subjects, with the trained subjects being more homogeneous and thus more reproducible in the measures that were studied. Likewise, Heitkamp and colleagues [20] studied the reproducibility of the lactate threshold in trained vs. untrained women; once again, measurements in the trained women were somewhat more reliable.
Among the many well-known adaptive changes that follow from high levels of physical activity, exercise training stimulates changes in muscle gene transcription [33]. This may explain why muscle oxidative enzyme activity can vary widely in trained or highly active humans compared with those who are sedentary [34], and why the coefficients of variation of 31P-MRS estimates of mitochondrial function can differ markedly in athletes compared to controls [21]. Furthermore, within trained subjects the peripheral training effect can vary dramatically even at the same relative VO2 [35]. Consistent with this, the coefficients of variation (CV) we observed in our trained cohort were larger than those reported in untrained subjects [14]. For example, the CV of resting [PCr] in our trained cohort was 8%, compared with 2.2% reported by Layec et al. [14] and ~5% by Roussel and co-workers [36]. Yet resting muscle pH, which one would not expect to vary with training status, had a very similar CV in our trained cohort vs. earlier studies in untrained subjects: the CV of resting muscle pH was 0.2% in our hands and was reported as 0.28% by Layec et al. [14], 0.4% by Roussel et al. [36], and 0.1% by Larson-Meyer and colleagues [37]. Given that the calculation of muscle pH from 31P-MRS data utilises two independent peaks in a single spectrum, this comparability reinforces that our data were of a similar quality to those earlier studies.
Yet despite the slightly greater variation, 31P-MRS in athletes had excellent reproducibility when measuring intramuscular phosphates. In the absence of significant bias, the smallest detectable difference for a given n can be estimated from the mean of the absolute values of the confidence intervals (as outlined in Results). Using this approach, we estimate that changes in [PCr] of ~2.1 mM (7%) could be detected in just 10 trained subjects.
Consistent with earlier studies, measures of mitochondrial function were more variable. Coefficients of variation in our trained subjects were >30% for both PCr t1/2 and QMAX, compared with coefficients of variation of <20% for PCr t1/2 [14,15] and 13-30% for QMAX [14] in other studies. However, the measurement of PCr t1/2 in athletes is unfairly described by these statistics. Although there was a high degree of inter-individual variation, analysis of the differences (measurement 2 − measurement 1) suggested that changes of <20% could be detected in 15 trained subjects, an eminently feasible number for practical research, particularly given that endurance-trained individuals have a QMAX that is close to double that of untrained individuals [38] and exercise training can induce increases in mitochondrial function of the order of up to 50% in the untrained elderly [39]. The reliability of measurements of metabolite concentration during exercise lay between those of the same measurements at rest and the indices of mitochondrial function (Table 3). The increased variation relative to resting measurements could be attributed to several sources: first, despite heavy strapping and careful experimental design, noise may have been generated by motion/contraction of the target muscles. In addition, variations in aerobic fitness/mitochondrial function and, possibly, the ATP economy of contraction were likely to have contributed to increased variance [40]. One could argue that the lack of tight control over our subjects' training schedules led to increased variability. However, our aim was to assess reproducibility in this cohort under 'normal' conditions (i.e., without strict training control). Nevertheless, the lack of any evidence for increased variability suggests that tight controls may be unnecessary during magnetic resonance studies of athletes.
There were three potential sources of variability in our data: variability in the instrument, physiological variation and processing variability (for example, slight differences in the selection of data used for curve fitting). Earlier studies have addressed these issues by i. Duplicate acquisitions from the same subject under identical conditions (i.e. in immediate succession, cf. [17]), ii. Repeated measurements on the same individual at different times (as in the present study) and iii. Duplicate processing of the same data by the same experimenter on different occasions (as in [14]). The existing work suggests that instrument variability and processing variability contribute rather little to the overall variability. Thus it seems reasonable to suggest that the bulk of the variability we observed was physiological in nature. However, these three sources of variability are difficult to separate entirely (for example, a given instrument may operate with greater variability across several days or months, but no living biological matrix is unchanging across these timescales). For the present study we chose not to separate these sources of variation as, in practice, they are all present; our aim was to produce benchmark data regarding the reliability of the method as a whole. One must consider that our study used athletes whose training was not being directly controlled by the experimenters. As such, variations in training load or the timing of experimental acquisition relative to training sessions may have introduced greater variability than in a cohort where training was rigorously controlled.
To conclude, we studied the reproducibility of 31 P-MRS measures of muscle phosphorus metabolism in a cohort of trained men. The coefficients of variation in this cohort appear to be slightly larger than in earlier, similar studies that used untrained subjects. However, these larger coefficients of variation appeared to be the result of larger inter-individual variation, while test-retest reliability remained good. Thus we found the method to be reproducible and reliable enough for studies to be conducted using relatively small numbers of trained participants, especially where paired statistical comparisons will be used.
New horizon for infection prevention technology and implantable device
There has been a significant increase in the number of patients receiving cardiovascular implantable electronic devices (CIED) over the last two decades. CIED infection represents a serious complication after CIED implantation and is associated with significant morbidity and mortality. Recently, newly advanced technologies have offered attractive and suitable therapeutic alternatives. Notably, the leadless pacemaker and anti-bacterial envelope decrease the potential risk of CIED infection and the resulting mortality, when it does occur. A completely subcutaneous implantable cardioverter defibrillator is also an alternative to the transvenous implantable cardioverter defibrillator (ICD), as it does not require implantation of any transvenous or epicardial leads. Among the patients who require ICD removal and subsequent antibiotics secondary to infection, the wearable cardioverter defibrillator represents an alternative approach to inpatient monitoring for the prevention of sudden cardiac death. In this review paper, we aimed to introduce the advanced technologies and devices for prevention of CIED infection.
Introduction
There has been a significant increase in the number of patients receiving cardiovascular implantable electronic devices (CIED) over the last two decades [1,2]. This is largely owing to the expanding indications of CIED based on technological improvements and new evidence demonstrating improved survival and quality of life among certain groups of patients having structural heart diseases [3,4]. However, the advantage of these devices is limited by associated adverse events and complications. CIED infection represents a serious complication of cardiac device therapy and is associated with significant morbidity and mortality. Despite appropriate care, in-hospital mortality among patients admitted because of CIED infection ranges from 4% to 10%, and one-year mortality from 15% to 20% [5][6][7][8][9][10][11].
The majority of patients with CIED infection have pocket and/or endovascular lesions (Fig. 1). If aggressive antibiotic therapy fails to control CIED infection, then complete removal of the device is recommended in many instances [2,6]. The timing of reimplantation is another critical issue. An early re-implantation should be performed in patients who are solely dependent on the CIED; however, at least one week is required to control local or systemic bacterial infections [12]. Currently, the advanced technologies may contribute to a decrease in infection risk and mortality and may bridge the critical period between device removal and re-implantation.
New technologies to reduce the risk of CIED infection
In the USA and Europe, some new alternatives to prevent CIED infection are available. The leadless pacemaker and antibacterial envelope represent attractive and suitable therapeutic options to minimize the risk of CIED infection.
Leadless pacemaker
To reduce the complications associated with the standard transvenous electrode lead of the pacemaker, a leadless pacemaker has been invented. The concept of a completely self-contained VVIR intracardiac pacemaker, first explored about 45 years ago by Spickler JW et al., has finally become a reality with the development of the Nanostim™ Leadless Pacemaker (St Jude Medical, Inc., St. Paul, MN, USA) and the Micra™ Transcatheter Pacing System (Medtronic plc) for use in humans [13][14][15][16]. Technological advances in electronics miniaturization and battery chemistries have enabled creation of a device small enough to be implanted within the heart via a percutaneous, transvenous approach, while still providing similar battery longevity without leads. The leadless pacemaker has been expected to reduce CIED infections, because this system has no physical connection between the endocardium and the subcutaneous pocket, which are the most likely source and channel of bacterial infection, respectively. Furthermore, the standalone leadless system never produces subclavian or superior vena cava occlusions. Both systems have received the CE Mark in Europe, but are not approved in the USA.
The Nanostim system is delivered to the implant site at the lower septum of the right ventricle (RV) via a transfemoral route and allows for bradycardia pacing via a miniature pulse generator with a built-in battery and electrodes that can be entirely and permanently implanted (Fig. 2). The first successful Nanostim implantation in humans took place in December 2012 in Prague, Czech Republic. Recently, a nonrandomized first-in-human study demonstrated this system to be safe and feasible over a 90-day period [15]. This preclinical study expanded on the previous study by demonstrating that the pacing and sensing properties remain adequate for up to 18 months. In addition, the histological analyses at the 90-day mark revealed a limited local response to the implanted device at the RV apex. Furthermore, there were no significant adhesions between the device and the RV walls. These pathological features may have important implications related to the long-term efficacy and safety of this system, as well as for designing approaches to extract the device.
The Micra system, similar to the Nanostim system, is an investigational device and is being assessed in a pivotal global clinical trial. The miniaturized device is only one-tenth the size of a conventional pacemaker (Fig. 3). The Micra system is also delivered directly into the heart through a catheter inserted in the femoral vein. Once positioned, the pacemaker is securely attached to the heart wall in the RV and can be repositioned or retrieved during implantation if needed (Fig. 4). The device does not require the use of leads and is attached via small tines securing it to the heart wall. The first successful in-human Micra implantation occurred in December 2013 in Linz, Austria. It is currently being evaluated in the Medtronic Micra Transcatheter Pacing System (TPS) Global Clinical Trial, which is a single-arm, multicenter study that will enroll up to 780 patients at approximately 50 centers [16].
Both systems allow for retrievability, if needed; however, there are significant differences in the designs that are worth noting: (i) the Micra device has an active fixation mechanism consisting of four electrically-inactive extendable and retractable tines to anchor it to the cardiac tissue, whereas the Nanostim device uses an electrically active fixed helix, (ii) the Micra device is wider (20 Fr) and shorter (25.9 mm) than the Nanostim pacemaker (18 Fr and 41.4 mm), (iii) the Micra pacemaker's communication between the device and programmer is established using a standard programming head, whereas the Nanostim pacemaker communicates with the St. Jude Medical Merlin™ Patient Care System using a programmer link and surface electrocardiographic electrodes, and (iv) the Micra device uses a three-axis accelerometer for rate response, whereas the Nanostim pacemaker utilizes a blood temperature sensor.
The most relevant limitation is that the current Nanostim and Micra devices are indicated for patients requiring a single-chamber pacemaker only, limiting their use to a relatively small percentage of patients. Current indications focus on patients with chronic atrial fibrillation and second- or third-degree atrioventricular block, patients in sinus rhythm with second- or third-degree atrioventricular block and a low level of physical activity or short expected lifespan, and patients with sinus bradycardia with infrequent pauses or unexplained syncope.
Anti-bacterial envelope
Previously published randomized controlled studies indicate that perioperative intravenous administration of a cephalosporin antibiotic can help to reduce CIED infections [2,18]. The European Society of Cardiology, American Heart Association, and Heart Rhythm Society recommendations for prophylaxis at the time of CIED placement consist of an antibiotic that has in-vitro activity against staphylococci. In recent large studies, the vast majority of patients received antimicrobial prophylaxis with CIED placement [19,20]. Despite widespread use of antimicrobial prophylaxis, CIED infection rates are increasing faster than implantation rates [21]. Effective antimicrobial prophylaxis could help reduce CIED infections and improve clinical outcomes.
The TYRX™ Antibacterial Envelope (Medtronic plc) consists of a Food and Drug Administration (FDA)-approved surgical mesh envelope that releases minocycline and rifampicin in the generator pocket after implantation with a CIED (Fig. 5) [17]. The biocompatible mesh is coated with antibiotics that elute (dissolve) within an approximately 7-day period. The TYRX™ is of two types, with substrate meshes that are 100% absorbable or non-absorbable. A recent report suggests that the use of the TYRX™ absorbable envelope was associated with a very low prevalence (0%) of CIEDrelated infections that was comparable to that seen with the nonabsorbable envelope. However, data from a randomized clinical trial are needed to support increased use of the antibacterial envelope [22]. Both TYRX™ Antibacterial Envelopes are sterile devices constructed of an open-pore weave, knitted filaments of a lightweight mesh, and an absorbable polymer coating impregnated with antimicrobial agents. The antibacterial envelope is indicated for holding CIEDs, thereby creating a stable environment surrounding the device and leads after surgical placement.
At least one-half to two-thirds of CIED infections are caused by Staphylococcus aureus (S. aureus) and coagulase-negative staphylococcus species (CoNS) [18,20,[23][24][25]. In vitro, methicillin-resistant strains of S. aureus and many strains of CoNS are susceptible to a combination of two antibiotics with distinct mechanisms of action: minocycline and rifampicin [26][27][28]. Rifampicin is bacteriostatic and inhibits DNA-dependent RNA polymerase. Minocycline is bacteriostatic and inhibits protein synthesis. Minocycline has an antimicrobial spectrum against a wide range of gram-positive and gram-negative organisms. Rifampin is a semi-synthetic compound derived from Amycolatopsis rifamycinica and has antimicrobial activity against select gram-positive and gram-negative organisms. The substrate mesh varies between the two types of envelopes and consists of either polypropylene or Glycoprene II. Polypropylene has been utilized in surgically implanted medical devices for decades; the most common use is in hernia repairs.
The absorbable tyrosine-based polymer coating is designed to degrade to well-characterized natural metabolites. It has been demonstrated to resorb benignly, in the same manner as absorbable surgical sutures, while eliciting a minimal inflammatory response. It also has a long history of use with other FDA-approved implantable medical devices. Randomized controlled trials demonstrated that coating or impregnating catheters with the combination of rifampin and minocycline significantly reduces device-associated infections of central venous, hemodialysis, and cerebrospinal fluid drain catheters, especially infections with S. aureus and CoNS [29][30][31][32][33][34][35].
Preclinical studies demonstrated that the antibacterial envelope helped reduce the risk for infection by several pathogens, including Staphylococcus epidermidis, within CIED implant pockets [36]. A previous large clinical study indicated that the envelope is associated with a high rate of successful CIED implantation and a low risk of infection in a population at significant risk for CIED infection. Furthermore, standard use of an antibacterial envelope was associated with a significantly lower rate of CIED infection and appeared to be economically viable [37].
New technology for prevention of fatal CIED infection
It is widely accepted that complication rates are higher with reimplantations, particularly if a lead implantation or revision is involved [38,39]. In addition, morbidity and mortality is particularly high in patients with an infected transvenous implantable cardioverter defibrillator (transvenous-ICD) system, especially when a systemic infection or endocarditis is present.
The risk of reinfection following system re-implantation is also a concern [40,41].
A completely subcutaneous ICD was developed as an alternative to the transvenous-ICD system, as it is implanted without any transvenous or epicardial leads. The rate of infections resulting in explantation or revision of this new device was not lower than that reported in previous ICD registries. However, it should be emphasized that none of the documented device infections were systemic [42].
Complete subcutaneous implantable cardioverter defibrillator
The completely subcutaneous ICD system (S-ICD™ System, Boston Scientific Corp., Marlborough, MA, USA) was developed to provide life-saving defibrillation therapy while leaving the heart and vasculature untouched [43]. The S-ICD system is preferred over transvenous-ICD for patients having no vascular access, a history of recurrent transvenous lead infections, or primary electrical disease with ventricular fibrillation as the major life-threatening rhythm. The first pilot-phase human studies of the S-ICD commenced in 2008, followed by subsequent regulatory and post-marketing studies. Approved by the FDA in September 2012, to provide defibrillation therapy for the treatment of ventricular tachyarrhythmias, the S-ICD system was developed after 10 years of defibrillation and sensing research, acute human feasibility studies, and long-term clinical studies [43][44][45][46][47][48][49]. This system demonstrated a very high shock efficacy for spontaneous ventricular arrhythmias and a decreased incidence of inappropriate shocks [48].
The S-ICD system is comprised of a pulse generator, subcutaneous electrode, electrode-insertion tool, and device programmer. The pulse generator has an estimated longevity of 5 years and is slightly larger, with a weight (145 g) approximately double that of a modern transvenous ICD generator [49]. It provides high-energy defibrillation shock (80 J) therapy through the use of a constant-tilt biphasic form. In addition, the new generation S-ICD System (EMBLEM™; Fig. 6), which is 20% thinner and is projected to last 40% longer than the previous S-ICD system, is available in a small number of centers in Europe and the USA. This system is also enabled for remote patient management for increased patient convenience. The generator is placed subcutaneously in a left lateral position over the 6th rib between the midaxillary and anterior axillary lines. Via two parasternal incisions, a 3 mm tripolar parasternal electrode (polycarbonate urethane) is positioned parallel to and 1 to 2 cm to the left of the sternal midline, with the distal sensing electrode localized adjacent to the manubriosternal junction and the proximal sensing electrode positioned adjacent to the xiphoid process (Fig. 7).
A population-based decrease in mortality with a new device is paramount, but can be negated if the implant is associated with a higher risk of removal due to pocket infection. Infection without any bacteremia remained the most common complication requiring invasive action in the early experience with the S-ICD [44][45][46]. Many steps were taken to mitigate this risk and prevent device removal, including better operative preparation training and techniques and aggressive management of skin infections [45]. Advances in implantation techniques were introduced in the literature by Knops et al. in an effort to reduce the incisional surface area and resulting infection risk [50]. Advances in operator experience, preparation, and implantation techniques appear to have positively affected the rates of infection, as use of the S-ICD system has expanded worldwide. In recent studies, the simplicity of implantation which avoids vascular access was reflected in the very low rate (2%) of acute major complications such as device system infection [48]. The S-ICD could be a new alternative to the conventional transvenous-ICD system to minimize device system infections.
The limitations of the current S-ICD include its inability to provide anti-tachycardia pacing for ventricular tachycardia, limited bradycardia pacing support, and absence of endovascular monitoring capabilities for collateral data gathering such as impedance monitoring for chronic heart failure. One estimate of potential candidates for the S-ICD includes every patient indicated for primary SCD prevention without a pacing indication. In addition, the use of a subcutaneous sensing electrode with the S-ICD may theoretically increase the risk of over-sensing noise or myopotential signals and under-sensing low-amplitude cardiac signals during ventricular fibrillation. The previous trial compared the arrhythmia detection of 3 commercially available transvenous ICD lead systems with the S-ICD electrode [51]. All devices excelled in detecting ventricular tachyarrhythmia (100%); however, the S-ICD demonstrated greater specificity in discriminating supraventricular from ventricular tachycardia (98% S-ICD vs. 76.7% single-chamber transvenous-ICD vs. 68% dual-chamber transvenous-ICD). Ideally, greater user programming experience and improvements in S-ICD technology may reduce the rate of inappropriate shocks.
4. New technology to reduce the risk of sudden cardiac death after the removal of ICD: a wearable cardioverter defibrillator

ICD therapy has been established as a cornerstone of cardiology practice for reducing the incidence of SCD [52–55]. Unfortunately, ICD system infection represents a complication that occurs even in experienced centers. Among patients who require ICD removal and subsequent antibiotic therapy, a wearable cardioverter defibrillator (WCD; LifeVest WCD4000, ZOLL, Pittsburgh, PA, USA) represents an alternative approach to the prevention of SCD. Removal of the ICD deprives the patient of protection against potentially life-threatening ventricular tachyarrhythmias, particularly in patients with ICD implantation for secondary prevention of SCD. The Heart Rhythm Society recommends the use of a WCD as a bridge to ICD reimplantation when ongoing infection is a concern [56]. The WCD was introduced into clinical practice in 2002, and indications for its use are currently expanding. It is in use worldwide, especially in the USA and Germany [57,58].
This device consists of an external defibrillator vest that automatically detects and treats ventricular tachyarrhythmias without bystander assistance [59,60]. A WCD is composed of a garment containing two defibrillation patch electrodes on the back, an elastic belt with a front defibrillation patch electrode, and four non-adhesive ECG electrodes connected to a monitoring and defibrillation unit (Fig. 8). Recent trials demonstrated the efficacy of the WCD in the detection and treatment of lethal ventricular arrhythmias [57,61]. WCD therapy can prevent SCD until ICD re-implantation is feasible in patients who have undergone device removal for device system infection [62].
The efficacy of the WCD in preventing arrhythmic SCD appears to be highly dependent on patient compliance. Proper instruction in the use of the WCD is also important to avoid inappropriate shocks. In previous studies, inappropriate shocks were rare events, occurring in 0–3% of patients using a WCD [63,64]. Shocks may be delivered inappropriately because of noise, device malfunction, or the rate criteria. The WCD is uniquely designed to avoid unnecessary shock therapy: if a persistent arrhythmia is detected, the device notifies the patient via a "responsiveness test", and a conscious patient can hold the "response buttons" during this test to prevent an unnecessary treatment. Therefore, much attention is being paid to providing medical education and information to patients in order to optimize their understanding and acceptance of WCD therapy [59].
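To make the withhold-or-shock protocol concrete, here is a minimal sketch of the decision flow just described; the rate threshold and the return labels are hypothetical illustrations, not the LifeVest's actual programmed parameters.

```python
def wcd_decision(heart_rate_bpm, response_button_held, vt_threshold=170):
    """One monitoring cycle of the withhold-or-shock logic described above.

    vt_threshold is a hypothetical detection rate, not a real device setting.
    """
    if heart_rate_bpm < vt_threshold:
        return "monitor"                       # no sustained arrhythmia
    if response_button_held:
        # A conscious patient holding the response buttons during the
        # "responsiveness test" withholds therapy.
        return "alert_patient_withhold_shock"
    return "deliver_shock"                     # presumed unconscious: treat

assert wcd_decision(80, False) == "monitor"
assert wcd_decision(200, True) == "alert_patient_withhold_shock"
assert wcd_decision(200, False) == "deliver_shock"
```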
The strategy for re-implantation after removal of an ICD must be individualized to each patient and clinical situation. For many patients, continuous inpatient/outpatient monitoring may be impossible or at least highly undesirable. The WCD is a cost-effective alternative for protecting patients against SCD following the removal of an infected ICD while awaiting ICD re-implantation, as compared with keeping patients in the hospital or discharging them home or to a skilled nursing facility [12].
Conclusion
CIED system infection represents a relevant complication after CIED implantation and is associated with a significant risk of morbidity and mortality. However, newly developed technologies and devices represent attractive and suitable therapeutic options to reduce the incidence of this increasing problem.
Simulations of the Sunyaev-Zel'dovich Power Spectrum with AGN Feedback
We explore how radiative cooling, supernova feedback, cosmic rays and a new model of the energetic feedback from active galactic nuclei (AGN) affect the thermal and kinetic Sunyaev-Zel'dovich (SZ) power spectra. To do this, we use a suite of hydrodynamical TreePM-SPH simulations of the cosmic web in large periodic boxes and tailored higher resolution simulations of individual galaxy clusters. Our AGN feedback simulations match the recent universal pressure profile and cluster mass scaling relations of the REXCESS X-ray cluster sample better than previous analytical or numerical approaches. For multipoles $\ell\lesssim 2000$, our power spectra with and without enhanced feedback are similar, suggesting theoretical uncertainties over that range are relatively small, although current analytic and semi-analytic approaches overestimate this SZ power. We find the power at the high multipoles ($\ell \sim 2000$-$10000$) which ACT and SPT probe is sensitive to the feedback prescription, hence can constrain the theory of intracluster gas, in particular at the highly uncertain redshifts $z>0.8$. The apparent tension between $\sigma_8$ from the primary cosmic microwave background power and from analytic SZ spectra inferred using ACT and SPT data is lessened with our AGN feedback spectra.
SZ POWER TEMPLATES AND THE OVERCOOLING PROBLEM
When CMB photons are Compton-scattered by hot electrons, they gain energy, giving a spectral decrement in thermodynamic temperature below ν ≈ 220 GHz, and an excess above (Sunyaev & Zeldovich 1970). The high electron pressures in the intracluster medium (ICM) result in cluster gas dominating the effect. The integrated signal is proportional to the cluster thermal energy, and the differential signal probes the pressure profile. The SZ sky is therefore an effective tool for constraining the internal physics of clusters and cosmic parameters associated with the growth of structure, in particular σ_8, the rms amplitude of the (linear) density power spectrum on cluster-mass scales (e.g., Birkinshaw 1999; Carlstrom et al. 2002). Identifying clusters through blind SZ surveys and measuring the SZ power spectrum have been long-term goals in CMB research, and are reaching fruition through the South Pole Telescope, SPT (Lueker et al. 2009), and Atacama Cosmology Telescope, ACT (Fowler et al. 2010), experiments. The ability to determine cosmological parameters from these SZ measurements is limited by the systematic uncertainty in theoretical modelling of the underlying cluster physics and hence of the SZ power spectrum. The power contribution due to the kinetic SZ (kSZ) effect, which arises from ionized gas motions with respect to the CMB rest frame, adds additional uncertainty.
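For concreteness, the frequency behaviour described above can be checked numerically with the standard non-relativistic tSZ spectral function f(x) = x coth(x/2) − 4, with x = hν/(k_B T_CMB); this is a textbook result rather than an equation taken from this paper. The sketch below locates the null near 217 GHz.

```python
import numpy as np
from scipy.optimize import brentq

h, k_B, T_CMB = 6.626e-34, 1.381e-23, 2.725   # SI units

def f_tsz(nu_ghz):
    """Non-relativistic tSZ spectral function x*coth(x/2) - 4."""
    x = h * nu_ghz * 1e9 / (k_B * T_CMB)
    return x / np.tanh(x / 2.0) - 4.0

print(f_tsz(150.0))                  # negative: decrement below the null
print(f_tsz(300.0))                  # positive: increment above the null
print(brentq(f_tsz, 100.0, 400.0))   # root near 217 GHz
```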
There are two main approaches to theoretical computations of the thermal SZ (tSZ) power spectrum: from hydrodynamical simulations of SZ sky maps, or from semi-analytical estimates (Bond et al. 2002, 2005). Large cosmological simulations providing a gastrophysical solution to the pressure distribution should include effects of non-virialized motions, accretion shocks, and deviations from spherical symmetry. Averaging over many realizations of synthetic SZ sky projections yields the power spectrum and its variance (e.g., B0205; da Silva et al. 2000; Springel et al. 2001; Seljak et al. 2001; Schäfer et al. 2006). In conjunction with primary anisotropy signals and extragalactic source models, the SZ power spectrum has been used as a template with variable amplitude A_SZ for extracting cosmological parameters by the Cosmic Background Imager (CBI) team (B0205; Sievers et al. 2009) and the ACBAR team (Goldstein et al. 2003; Reichardt et al. 2009). A_SZ was used to estimate a σ_8,SZ ∝ A_SZ^{1/7} as a way to encode tension between the SZ-determined value and the (lower) σ_8 obtained from the primary anisotropy signal. The CBI team also included an analytic model (Komatsu & Seljak 2002, KS), which was also the one adopted by the WMAP team (Spergel et al. 2007). The KS template yielded a value for σ_8,SZ lower than that obtained with the simulation template, by ∼10%. The KS model assumes a universal ICM pressure profile in hydrostatic equilibrium with a polytropic (constant Γ) equation of state. The power spectrum is then obtained using an analytic fit to 'halo model' abundances. So far SPT and ACT have only used the KS template and a related semi-analytic one (Ostriker et al. 2005; Bode et al. 2009). This model (Sehgal et al. 2010, S10) allows map generation by painting dark matter halos in N-body simulations with gas. It expands on KS by calculating the gravitational potential from the DM particles, includes an effective infall pressure, and adds simplified models for star formation, non-thermal pressure support, and energy feedback which are calibrated to observations. Using these templates, the SPT team derived a σ_8,SZ lower than the primary anisotropy σ_8 (e.g., WMAP7, Larson et al. 2010).
Current simulations with only radiative cooling and supernova feedback excessively over-cool cluster centers (e.g. Lewis et al. 2000), leading to too many stars in the core, an unphysical rearrangement of the thermal and hydrodynamic structure, and problems when compared to observations, in particular for the entropy and pressure profiles. The average ICM pressure profile found through X-ray observations of a sample of nearby galaxy clusters (Arnaud et al. 2009) is inconsistent with adaptive-mesh cluster simulations, as well as with the KS analytic model (Komatsu et al. 2010). Pre-heating (e.g. Bialek et al. 2001) and AGN feedback (e.g. Sijacki et al. 2007, 2008; Puchwein et al. 2008) help solve the over-cooling problem and improve agreement with observed cluster properties.
Previously, an analytical model by Roychowdhury et al. (2005) explored the effects of effervescent heating on the SZ power spectrum, and Holder et al. (2007) used a semi-analytical model to calculate how an entropy floor affects the SZ power spectrum. There have been several simulations on galaxy and group scales that have studied how 'quasar' feedback impacts the total SZ decrement (Thacker et al. 2006; Scannapieco et al. 2008; Bhattacharya et al. 2008; Chatterjee et al. 2008). In this work we explore whether AGN feedback incorporated into hydrodynamical simulations of structure formation can suppress the over-cooling problem and resolve the current inconsistency between theoretical predictions and observations of the SZ power spectrum and X-ray pressure profile.
Cosmological simulations
We pursue two complementary approaches using smoothed particle hydrodynamics (SPH) simulations: large-scale periodic boxes provide us with the necessary statistics and volume to measure the SZ power spectrum; individual cluster computations allow us to address over-cooling at higher resolution and to compare our AGN feedback prescription with previous models. We used a modified version of the GADGET-2 (Springel 2005) code. Our sequence of periodic boxes had sizes 100, 165, and 330 h⁻¹ Mpc. The latter two used N_DM = N_gas = 256³ and 512³, maintaining the same gas particle mass m_gas = 3.2 × 10⁹ h⁻¹ M_⊙, DM particle mass m_DM = 1.54 × 10¹⁰ h⁻¹ M_⊙, and a minimum gravitational smoothing length ε_s = 20 h⁻¹ kpc; our SPH densities were computed with 32 neighbours. For our standard calculations, we adopt a tilted ΛCDM cosmology, with total matter density (in units of the critical) Ω_m = Ω_DM + Ω_b = 0.25, baryon density Ω_b = 0.043, cosmological constant Ω_Λ = 0.75, Hubble parameter h = 0.72 in units of 100 km s⁻¹ Mpc⁻¹, spectral index of the primordial power spectrum n_s = 0.96, and σ_8 = 0.8. For the 'zoomed' cases (Katz & White 1993), we repeatedly simulated the cluster 'g676' (at high resolution, with m_gas = 1.7 × 10⁸ h⁻¹ M_⊙, m_DM = 1.13 × 10⁹ h⁻¹ M_⊙, and ε_s = 5 h⁻¹ kpc, using 48 neighbours to compute SPH densities, as in Pfrommer et al. 2007).
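As a quick consistency check on the quoted numbers, the particle masses follow from the box size, particle count, and cosmology; the sketch below assumes the standard critical density ρ_cr = 2.775 × 10¹¹ h² M_⊙ Mpc⁻³ (not stated in the paper) and reproduces both masses.

```python
RHO_CR = 2.775e11          # critical density, h^2 M_sun Mpc^-3 (assumed)
OMEGA_M, OMEGA_B = 0.25, 0.043
L, N = 165.0, 256**3       # box side (h^-1 Mpc), particles per species

m_gas = RHO_CR * OMEGA_B * L**3 / N
m_dm  = RHO_CR * (OMEGA_M - OMEGA_B) * L**3 / N
print(f"m_gas = {m_gas:.2e} h^-1 M_sun")   # ~3.2e9, matching the text
print(f"m_dm  = {m_dm:.2e} h^-1 M_sun")    # ~1.54e10, matching the text
```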
We show results for three variants of gas heating: (1) the classic non-radiative 'adiabatic' case with only formation shock heating; (2) an extended radiative cooling case with star formation, supernova (SN) feedback, and cosmic rays (CRs) from structure formation shocks; (3) AGN feedback in addition to radiative cooling, star formation, and SN feedback. Radiative cooling and heating were computed assuming an optically thin gas of pure hydrogen and helium primordial composition in a time-dependent, spatially uniform ultraviolet background. Star formation and supernova feedback were modelled using the hybrid multiphase model for the interstellar medium of Springel & Hernquist (2003a). The CR population is modelled as a relativistic population of protons described by an isotropic power-law distribution function in momentum space with a spectral index of α = 2.3, following Enßlin et al. (2007). With those parameters, the CR pressure modifies the SZ effect at most at the percent level and causes a reduction of the resulting integrated Compton-y parameter.
AGN feedback model
Current state-of-the-art cosmological simulations are still unable to span the large range of scales needed to resolve black hole accretion, hence a compromise treatment for AGN feedback is needed. For example, Sijacki et al. (2007) and Booth & Schaye (2009) adopted estimates of black hole accretion rates based on the Bondi-Hoyle-Lyttleton formula (Bondi & Hoyle 1944). Here we introduce a sub-grid AGN feedback prescription for clusters that allows for still lower resolution and hence can be applied to large-scale structure simulations. We couple the black hole accretion rate to the global star formation rate (SFR) of the cluster, as suggested by Thompson et al. (2005), using the following arguments. The typical black hole accretion rates and masses for the inner gravitationally stable AGN disks (of size 1 pc) are ∼1 M_⊙/yr and ∼10⁶ M_⊙. Since AGN lifetimes are much longer than 1 Myr, mass must be transferred from larger radii to the inner disk. However, at much larger radii this outer disk is gravitationally unstable and must be forming stars. Thus, in order to feed the AGN, stability arguments suggest that the rate of accretion must be greater than the SFR. For simplicity we assume that Ṁ_BH ∝ Ṁ_⋆. We inject energy into the ICM over a spherical region of size R_AGN about the AGN, according to

E_inj = ε_r Ṁ_⋆ c² ∆t,    (1)

where ∆t is the duty cycle over which the AGN outputs energy and ε_r is an 'efficiency parameter'. (As we describe below, the calculated efficiency for turning mass into energy is much smaller than ε_r.) We have explored a wide range of our two parameters, but the specific choices made for the figures are ∆t = 10⁸ yr and ε_r = 2 × 10⁻⁴. We require a minimum SFR of 5 M_⊙/yr to activate AGN heating in the halo it is housed in. Given the output AGN energy, we must prescribe how it is to be distributed. Our procedure is motivated by the way Sijacki & Springel (2006) did AGN heating via bubbles. Using an on-the-fly friends-of-friends (FOF) halo-finding algorithm in GADGET-2, we determine the mass and center of mass of each halo with M_halo > 1.2 × 10¹² h⁻¹ M_⊙. We calculate its global SFR within the AGN sphere of influence of radius R_AGN (eq. 2), whose scaling involves the resolution floor u_AGN = ε_s and E(z)² = Ω_m(1 + z)³ + Ω_Λ. Within the halos we partition E_inj onto those gas particles inside of R_AGN according to their mass. We have varied the prescription for R_AGN and its floor u_AGN (chosen here to be the gravitational softening ε_s); the specific numbers given in eq. 2 (and for ε_r) match previous successful models that suppress the over-cooling by means of AGN feedback (Sijacki et al. 2008, see Sect. 3.1). Defining R_∆ as the radius at which the mean interior density equals ∆ times the critical density ρ_cr(z) (e.g., for ∆ = 200 or 500), the ratio of R_AGN to R_200 is a constant ∼0.05.

Fig. 1: Shown are f_b (dashed lines) and f_star (solid lines) normalized to the universal value (f_b = 0.13) assumed in our simulations, for our cluster g676 with M_500 = 6.8 × 10¹³ h⁻¹ M_⊙. The blue lines are for the simulation with radiative cooling and star formation, while the red and orange lines are for our AGN feedback model (ε_r = 2 × 10⁻⁴, Ṁ_⋆ ≥ 5 M_⊙/yr) and that of Sijacki et al. (2008), respectively. The data points are observations by Gonzalez et al. (2007) and Afshordi et al. (2007). f_star(< R_500) from X-ray measurements also agrees well, but the errors are large. Our sub-grid model matches the results from Sijacki et al. (2008) in this high-resolution simulation well.
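A back-of-the-envelope sketch of the injection normalization, assuming the form E_inj = ε_r Ṁ_⋆ c² ∆t reconstructed above (the original equation was lost in extraction); the SFR value is illustrative, set at the 5 M_⊙/yr activation threshold.

```python
M_SUN_C2 = 1.787e54   # rest-mass energy of one solar mass, erg
eps_r, dt_yr, sfr = 2e-4, 1e8, 5.0   # efficiency, duty cycle (yr), SFR (M_sun/yr)

# E_inj = eps_r * SFR * c^2 * dt  (reconstructed form, see lead-in)
E_inj = eps_r * sfr * dt_yr * M_SUN_C2
print(f"E_inj per duty cycle = {E_inj:.2e} erg")   # ~1.8e59 erg
# The cumulative 9e61 erg quoted for g676 then corresponds to roughly
# five hundred such threshold-level injection events over the run.
```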
Although we have referred to our feedback mechanism as being caused by AGN outflows, radiation pressure from stellar luminosity acting on dust grains will serve much the same purpose, and could also deliver high efficiencies (e.g. Thompson et al. 2005). In the code, we have so far added E_inj as a pure heating component, but it should allow for a mechanical, momentum-driven wind component as well, which would not be as prone to catastrophic cooling and would likely decrease the ε_r needed for useful star formation suppression.
The relevant energy budget is not in fact defined by ε_r, but rather by a redshift-dependent effective feedback efficiency

ε_eff(< r) = Σ_i E_inj,i / (M_⋆(< r) c²),

where we sum over every energy injection event (labeled by i) and calculate the stellar mass M_⋆(< r) within a given radius. In all cases, ε_eff ≪ ε_r, because: (i) heating suppresses the stellar mass ∆M_⋆ created over ∆t, making it quite a bit less than the stellar mass Ṁ_⋆∆t that would have formed without any feedback; and (ii) E_inj is a stochastic variable, which we find to be zero about half of the time because the required SF threshold is not achieved. With our fixed ε_r − R_AGN prescription, our canonical g676 example has ε_eff ∼ 5 × 10⁻⁶ for the entire simulation; if all energy had been released within the final R_AGN, ε_eff would be 8 × 10⁻⁵, but feedback, especially at early times, is much more widely distributed. Of a total E_inj = 9 × 10⁶¹ erg for g676, we find 58% is delivered in the cluster formation phase at z > 2, another 23% is delivered in the redshift range 1 < z < 2 that can be probed with ACT and SPT resolution, and only 19% comes from the longer period below redshift 1. Feedback prescriptions with smaller E_inj which still give the desired star formation suppression need further exploration.
PRESSURE PROFILES
3.1. Testing AGN feedback as resolution varies

AGN feedback self-regulates the star formation and energetics of a cluster. In Fig. 1 we compare the fraction of baryons (f_b) and stars (f_star) as functions of cluster radius for the high-resolution 'g676' simulations. Our radiative simulation produces 1.5–2 times more stars than those with AGN feedback. Our sub-grid AGN model nicely reproduces the results of Sijacki et al. (2008). It should also produce reliable results in the cosmological box simulations, in which over-cooling is less severe because of the lower resolution. There is significant sensitivity to the value chosen for the feedback parameter ε_r: doubling it lowers f_b by a factor of 1.5, halving it increases f_star by a factor of 1.4. The 100 h⁻¹ Mpc simulations were used to study the resolution dependence of our feedback model by varying N_gas^{1/3} in steps from 64 to 256, with ε_s and hence u_AGN (eq. 2) decreased accordingly. As u_AGN decreased, f_star within R_500 increased almost linearly for radiative cooling, whereas for AGN feedback the increases were smaller. This can be traced to the hierarchical growth of structure: in low-resolution simulations the small star-forming systems are under-resolved; this decreases the SFR that mediates our AGN feedback; and this lowers the overall number of stars produced in the simulations. This behaviour is seen in other AGN feedback models (Sijacki et al. 2007) and has been extensively studied in non-AGN feedback simulations by Springel & Hernquist (2003b).
Stacked pressure profiles
For every halo identified by our FOF algorithm, we calculate the center of mass and R_∆, measure the mass M_∆ within R_∆, and compute the spherically-averaged pressure profile, normalized to the self-similar pressure amplitude P_∆ (Voit 2005) and with radii scaled by R_∆. We then form a weighted average of these profiles for the entire sample of clusters at a given redshift. For Fig. 2, we have weighted by the integrated y-parameter,

Y_∆ = (σ_T / m_e c²) ∫₀^{R_∆} P_e dV,

where σ_T is the Thomson cross-section, m_e is the electron mass, and P_e is the electron pressure. For a fully ionized medium the thermal pressure is P = P_e (5X_H + 3)/(2(X_H + 1)) = 1.932 P_e, where X_H = 0.76 is the primordial hydrogen mass fraction. Splitting the clusters into a number of mass bins gives similar results to this monolithic Y_∆ weight, as does weighting by Y_∆². We have found that a simple parametrized model with core scale x_c, amplitude A, and two power-law indices, α and γ, fits better than one with a fixed α. Sample values for our AGN feedback are A = 82, x_c = 0.37, α = 0.84, and γ = 4.6 at z = 0; generally the parameters depend upon cluster mass and redshift. At z ≳ 1, a more complex parameterization is needed.
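The electron-to-thermal pressure conversion, and a spherical-shell quadrature for the Y_∆ weighting reconstructed above, can be checked in a few lines; the radial profile in the example is a purely hypothetical toy.

```python
import numpy as np

X_H = 0.76
factor = (5 * X_H + 3) / (2 * (X_H + 1))
print(f"P / P_e = {factor:.3f}")            # 1.932, as in the text

SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2
ME_C2   = 8.187e-7    # electron rest energy m_e c^2, erg

def y_integrated(r_cm, P_e_cgs):
    """Y = (sigma_T / m_e c^2) * integral of 4 pi r^2 P_e dr, in cm^2."""
    integrand = 4.0 * np.pi * r_cm**2 * P_e_cgs
    # explicit trapezoidal rule, kept NumPy-version independent
    integral = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(r_cm))
    return SIGMA_T / ME_C2 * integral

# toy beta-model-like electron pressure out to ~1 Mpc (hypothetical numbers)
r = np.linspace(1e-3, 1.0, 500) * 3.086e24            # cm
P_e = 1e-11 / (1.0 + (r / 9e23) ** 2) ** 1.5          # erg cm^-3
print(f"Y = {y_integrated(r, P_e):.2e} cm^2")
```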
In Fig. 2, we show average pressure profiles multiplied by x³ to make them ∝ dE_th/d ln r, the thermal energy per logarithmic interval in radius, and hence to dY_∆/d ln r. All profiles of dE_th/d ln r from simulations and observations peak at or before R_200, but an integration to at least 4R_200 is required for the total thermal energy to converge. By contrast, the KS profile does not drop over this range, due to the constancy of Γ, and does not include the outer-cluster phenomena of asphericity, accretion shocks, etc. Throughout this paper, we have computed the KS model with an updated concentration parameter given by Duffy et al. (2008). We also show a scaled average S10 pressure profile for clusters with 10¹⁴ M_⊙ < M_500 < 5 × 10¹⁴ M_⊙ and redshift < 0.2. The S10 profile has been weighted by Y_∆ and agrees well within R_500, with a slight excess pressure beyond R_500. Fig. 2 shows that our feedback model traces the observed "universal" X-ray profile of Arnaud et al. (2009), shown as a dark-grey band, rather well within R_500. This fit came out naturally, with no further tuning of our feedback parameters beyond trying to agree with the Sijacki et al. (2008) simulation.

Fig. 2: Top panel: comparison of fits to normalized average pressure profiles from analytic calculations, simulations, and observations, scaled by (r/R_500)³. For a cluster of M_500 = 2 × 10¹⁴ h⁻¹ M_⊙, we show fits to our SPH simulations (red), compared with the analytic KS profile (green), the semi-analytic S10 average profile (light green), and a fit to AMR simulations (updated profile by Nagai et al. 2007, private communication; orange). Our feedback model matches a fit to X-ray observations (Arnaud et al. 2009, grey bands) within R_500 well; only the dark grey part is actually a fit to the data, the light grey being their extrapolation using older theory results unrelated to the data. We illustrate the 1 and 2σ contributions to Y_∆, centered on the median for the feedback simulation, by horizontal purple and pink error bars. 2nd panel: fits to our AGN model at redshift z = 0 (red solid) compared to all three of our models at redshift z = 1 (blue). Shown are the 1σ error bars of the cluster-by-cluster variance of the weighted averages in our AGN models, in corresponding lighter colors. 3rd panel: the effective adiabatic index Γ for our simulations, compared with KS (dash-dotted) and with a constant 1.2 (light green). Bottom: the distribution of kinetic-to-thermal energy in percentile decades is indicated by the dots for the feedback case, with the median shown for all three models; thus, there are significant additions to pressure support even in the cores of simulated clusters, and even more so in the SZ-significant outer parts.
Our models without AGN feedback have larger pressures inside R_500. For the light grey band beyond R_500, the universal X-ray profile did not use observations but was fit to an average profile of earlier simulations, so the deviation beyond R_500 does not represent a conflict of our profiles with the data, but rather with the earlier simulations. The band shown for the X-ray profile gives a crude correction for the bias in M_500 and R_500 resulting from the Arnaud et al. (2009) assumption of hydrostatic equilibrium. This yields mass values which are on average 25% too low, so the band represents a 0–25% uncertainty in M_500. This change only affects R_500 ∝ M_500^{1/3} and P_500 R_500³ ∝ M_500^{5/3}, but does not affect the shape of the profile. (However, as the bottom panel shows, such a correction from turbulence and un-virialized bulk motions (Kravtsov et al. 2006) will depend upon radius and the selection function of the X-ray clusters used to make the fit.) Another important issue is the relation between Y_∆ and cluster mass; we fit our results to a power-law scaling relation between the two. Sijacki et al. (2007) was also able to reconcile the cluster X-ray luminosity and temperature scaling relation (Puchwein et al. 2008). We find a large variation in the outer pressure profiles beyond R_vir, especially at redshift z ∼ 1, as shown in the second panel of Fig. 2. These regions may contain sub-halos and external but nearby groups on filaments, most of which will eventually be drawn into the clusters. In spite of the large variance of the scaled profiles, the fit to the profiles at z = 0 follows the average. At larger redshift, however, our fitting formula will require more degrees of freedom than in eq. 5 to reflect the range of behaviour of the highly dynamical outer regions.
4. SZ POWER SPECTRA FROM HYDRODYNAMICAL SIMULATIONS

4.1. Stacked SZ power spectra of translated-rotated cosmological boxes

We randomly rotate and translate our simulation snapshots at different redshifts (da Silva et al. 2000; Springel et al. 2001; B0205). To obtain thermal Compton-y maps, we perform a line-of-sight integration of the electron pressure within a given solid angle, i.e. y = ∫ σ_T n_e k T_e/(m_e c²) dl, where k is the Boltzmann constant and n_e and T_e are the number density and temperature, respectively. We construct 1.6° × 1.6° and 3.2° × 3.2° maps for the 256³ and 512³ simulations, respectively. Using this method there are large sample variances (White et al. 2002) associated with nearby cluster contamination. We have quantified their influence on the power spectrum for each of our three physics models by averaging over twelve translate-rotate viewing angles, each projected from our ten 256³ full hydrodynamical simulations, for each of the 33 redshift outputs back to redshift z = 5; the power spectra of these are then added up to yield the total spectrum. This method of computing the power spectrum has the advantage of taking care of the artificial correlations that occur because any individual simulation follows the time evolution of the same structure. For the shock heating case, we did ten more hydrodynamical simulations to show that our averaged template had converged (within ∼10%), but note that using only a few boxes can be misleading in terms of rare events.

Fig. 3 (caption fragment): ...Bond et al. (2005) (orange pluses), semi-analytical simulations by S10 (dark green), and analytical calculations by KS (light green). The 256³ power spectra (red symbols) are averages over 12 translate-rotate tSZ maps and 10 separate hydrodynamical simulations for each of the 33 redshift bins, the power spectra of which are then added up to yield the total spectrum; the error bars show the variance among the power in all maps. The full-width half-max values appropriate for Planck, ACT, and SPT show which part of the templates these experiments are sensitive to. At low ℓ, the discrepant higher power in the semi-analytical calculations can be traced to the enhanced pressure structures assumed beyond R_200 over what we find.
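A minimal sketch of the map-making step described above: the line-of-sight Compton-y integration on a uniform grid. The gridding of SPH particles onto cells, the cell size, and the toy (n_e, T_e) values are all assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

SIGMA_T, K_B, ME_C2 = 6.652e-25, 1.381e-16, 8.187e-7   # cgs units

def y_map(n_e, T_e, dl_cm):
    """Compton-y map: sum of sigma_T * n_e * k_B T_e / (m_e c^2) * dl along z."""
    return SIGMA_T * K_B / ME_C2 * np.sum(n_e * T_e, axis=2) * dl_cm

# toy 64^3 box with 100 kpc cells (values purely illustrative)
ne = np.full((64, 64, 64), 1e-4)      # electron density, cm^-3
Te = np.full((64, 64, 64), 1e7)       # electron temperature, K
print(y_map(ne, Te, 3.086e23).max())  # ~2e-6, a plausible cluster y
```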
The computationally more expensive 512³ SZ spectra have the equivalent statistics of eight 256³ boxes plus wider coverage, so the 512³ shock heating result shown gives a reasonable indication of what to expect. The other two physics cases, run in single boxes at 512³, are similar to the 256³ ensemble means. The analytical approach has the great advantage of including an accurate mean cluster density up to high halo masses, but to be usable for SZ power estimation, the scaled pressure profiles must also be accurate, a subject we turn to in future work. For now, we note that using such profiles from our simulations gives good agreement with the average SZ power shown at the low ℓ where sample variance is largest. In Fig. 3, our simulation templates and the KS template shown have excluded structures below z = 0.07 to decrease the large sample variance associated with whether a large-ish cluster enters the field of view. Such entities would typically be removed from CMB fields and considered separately.
The mean Compton y-parameter found in our AGN feedback simulations is one order of magnitude below the COBE FIRAS upper limit of 15 × 10⁻⁶ (Fixsen et al. 1996).
We compare the theoretical predictions for the tSZ power spectrum in Fig. 3. Our 512³ and 256³ shock heating simulations are in agreement with previous SPH simulation power spectra (Springel et al. 2001; B0205) scaled by C_ℓ ∝ (Ω_b h)² Ω_m σ_8⁷, with the factors determined from our simulations of differing cosmologies. The B0205 SZ power shown had a cut at z = 0.2, appropriate for CBI fields; using the same cut on a shock heating simulation with the same cosmology, we get superb agreement.
The KS and S10 semi-analytic SZ power spectrum templates differ substantially from our templates, in particular with higher power at low ℓ: as shown in Fig. 2, the KS pressure profile beyond R_500 overestimates the pressure relative to both simulations and observations, leading to the modified shape and larger Y_∆; this behaviour is also shown in Komatsu et al. (2010). The spectrum from S10 is very similar to KS, possibly because both assume hydrostatic equilibrium and a polytropic equation of state with a fixed adiabatic index, Γ ∼ 1.1–1.2. Inside R_200 these assumptions are approximately correct, but they start to fail beyond R_200. A demonstration of this is the rise of Γ and of the ratio of kinetic-to-thermal energy K/U shown for our simulations in the bottom panels of Fig. 3. The present-day (a = 1) internal kinetic energy of a cluster is given by

K = ½ Σ_i m_i [υ_i − ῡ + H_0 (x_i − x̄)]²,

where H_0 is the present-day Hubble constant, υ_i and x_i are the peculiar velocity and comoving position of particle i, and ῡ and x̄ are the gas-particle-averaged bulk flow and center of mass of the cluster. The additional thermal pressure support we find at large radii from AGN feedback results in the slightly slower rate of K/U growth shown. In all cases, the large kinetic contribution shown should be properly treated in future semi-analytic models.
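A sketch of how the internal kinetic energy measure above could be evaluated for a particle set; since the extracted text does not spell out the weighting, a mass-weighted bulk flow and center of mass are assumed here.

```python
import numpy as np

H0 = 100.0 * 0.72   # km s^-1 Mpc^-1 for h = 0.72

def internal_kinetic_energy(m, v_pec, x_com):
    """K = 0.5 * sum_i m_i |v_i - v_bulk + H0 (x_i - x_bar)|^2 at a = 1.

    m: (N,) particle masses; v_pec: (N,3) peculiar velocities in km/s;
    x_com: (N,3) comoving positions in Mpc.  Returns K in m * (km/s)^2.
    Mass weighting of the bulk flow and center is an assumption.
    """
    v_bulk = np.average(v_pec, axis=0, weights=m)
    x_bar  = np.average(x_com, axis=0, weights=m)
    v_int  = v_pec - v_bulk + H0 * (x_com - x_bar)   # physical velocity
    return 0.5 * np.sum(m * np.sum(v_int**2, axis=1))
```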
Varying the physics over the three cases for energy injection in our simulations leads to relatively minor differences in Fig. 3 among the power spectra for ℓ ≲ 2000. This agreement is due in part to hydrostatic readjustment of the structure so that the virial relation holds, which relates the thermal content, hence Y_∆, to the gravitational energy, which is dominated by the dark matter. Our AGN feedback parameters do not lead to dramatic gas expulsions that would upset this simple reasoning. Our radiative cooling template has less power at all scales compared to the shock heating template, since baryons are converted into stars predominantly at the cluster centers and the ICM adjusts adiabatically to this change. Thus, at low ℓ, where clusters are unresolved, the shock heating and radiative simulations give upper and lower limits, bracketing the AGN feedback case. AGN feedback suppresses the core value of the pressure compared to the radiative simulation, resulting in less power at ℓ > 2000, a trend that is more pronounced at z > 1 (as shown in Fig. 3). Thus, at these angular scales, the power spectrum probes the shape of the average pressure profile. It depends sensitively on the physics of star and galaxy formation, e.g., Scannapieco et al. (2008). Over the ℓ-range covered by Planck, these effects are sub-dominant, which serves to highlight the importance of the high resolution reached by ACT and SPT.
Current constraints on SZ template amplitudes and σ_8,SZ
Instead of varying all cosmological parameters on which the thermal and kinetic SZ power spectra, C_ℓ,tSZ and C_ℓ,kSZ, depend, we freeze the shapes by adopting the parameters of our fiducial σ_8 = 0.8 (and Ω_b h = 0.03096) model evaluated at 150 GHz, and content ourselves with determining template amplitudes, A_tSZ and A_kSZ, and a total SZ amplitude A_SZ = A_tSZ + A_kSZ at 150 GHz. The spectral function for the tSZ (Sunyaev & Zeldovich 1970), f(ν), vanishes at the SZ null at ∼220 GHz, and we normalize it to unity at ν = 150 GHz, so it rises to 4 at 30 GHz. Therefore, if we find values of A_SZ below unity, then either σ_8 is smaller than the fiducial cosmological value as derived from the primary CMB anisotropies, or else the theoretical templates overestimate the SZ signal.

Fig. 4: The light grey band is the 2σ upper limit region. The A_SZ = 1 S10 tSZ power spectrum (dashed line) and the KS tSZ spectrum (dash-dotted line) are shown for contrast; their allowed 1σ bands are determined by multiplying these by their A_SZ values given in Table 1, and cover a similar swath to the grey bands. We also show the averaged kSZ power spectra computed for our simulations by dotted lines. The kSZ spectra were calculated in the same way as the tSZ spectra and have similar shapes. However, kSZ is underestimated at low ℓ because of missing bulk velocities in the simulations. There should be an additional (rather uncertain) kSZ template from inhomogeneous reionization as well. To show the tension with the CMB data, we plot the tSZ + 0.46 kSZ power (solid lines), since this can be directly compared with the SPT DSFG grey bands.
To determine the probability distributions of these amplitudes and other cosmological parameters from current CMB data, we adopt Markov Chain Monte Carlo (MCMC) techniques using a modified version of CosmoMC (Lewis & Bridle 2002). We include WMAP7 (Larson et al. 2010) and, separately, ACT (Fowler et al. 2010) and SPT (Lueker et al. 2009). In all cases, we assume spatial flatness and fit for 6 basic cosmological parameters (Ω_b h², Ω_DM h², n_s, the primordial scalar power spectrum amplitude A_s, the Compton depth to re-ionization τ, and the angular parameter θ characterizing the sound-crossing distance at recombination). We also allow for a flat white-noise template C_ℓ,src with amplitude A_src, such as would arise from populations of unresolved point sources. We marginalize over A_src, allowing for arbitrary (positive) values. Generally there will also be a spatial clustering component for such sources, with templates that are partially degenerate in shape with that for tSZ, but because of the large uncertainties we ignore such contributions here. Reducing the SZ and unresolved-source problems to determinations of overall amplitudes multiplying shapes has a long history, e.g., the CBI sequence of papers, and was adopted as well by the ACT and SPT teams. Our results differ slightly from those reported by the ACT team, because they use WMAP5+ACT and a combined tSZ+kSZ S10 template, and by the SPT team, who use WMAP5+QUaD+ACBAR+SPT and add constraints on the white-noise source amplitude beyond the non-negativity we impose.
We first consider a simplified case with A_kSZ constrained to be zero and all other cosmic parameters and the source amplitude marginalized, yielding a probability distribution for A_SZ. The means and standard deviations from our MCMC runs are given in the upper rows of Table 1, in columns 2, 4, and 6, for a number of data combinations and for our 3 physics simulation cases, contrasted with KS and S10. The ACT data are for 148 GHz. There are two SPT cases given. The first uses just the 153 GHz spectrum, so it can be directly compared to ACT. For SPT, Lueker et al. (2009) also report a power spectrum derived from subtracting a fraction x of their 220 GHz data from the 153 GHz data to minimize the contribution from dusty star-forming galaxies (DSFG); since 220 GHz is the SZ null, this does not modify the tSZ contribution, but would diminish the frequency-flat kSZ. However, a normalization factor is chosen to preserve power for primary CMB signals that are flat in frequency, like kSZ. This has the effect of boosting the tSZ power by a factor of (1 − x)⁻². Lueker et al. (2009) find that x = 0.325 minimizes the contribution from the DSFGs, so the DSFG-subtracted spectrum suppresses the kSZ by a factor of 0.46 relative to the tSZ. A ∼25% uncertainty remains in x, which should be taken into account statistically but is not here. The correct approach would be to simultaneously treat the 153 GHz and 220 GHz cases, with full modelling of the different classes of point sources, including their clustering, and to take into account the non-Gaussian nature of the SZ and source signals, which impacts sample variance.
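The DSFG-subtraction bookkeeping can be verified directly; the sketch below reproduces the quoted (1 − x)⁻² tSZ boost and the 0.46 kSZ-to-tSZ suppression for x = 0.325.

```python
# Map combination (m_153 - x * m_220) / (1 - x): frequency-flat signals
# (CMB, kSZ) pass through unchanged, tSZ amplitude is boosted by 1/(1-x),
# so tSZ *power* is boosted by (1-x)^-2, i.e. kSZ is suppressed relative
# to tSZ by (1-x)^2.
x = 0.325                      # DSFG subtraction fraction (Lueker et al. 2009)
print((1 - x) ** -2)           # tSZ power boost: ~2.20
print((1 - x) ** 2)            # kSZ power relative to tSZ: ~0.456 -> 0.46
```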
The ACT data currently give only upper limits, whereas SPT has detections at 153 GHz with A_SZ compatible with unity. For the SPT 153 GHz-only spectrum, we find S10 gives A_SZ = 1.39 ± 0.34, while the feedback template gives A_SZ = 1.76 ± 0.43; the comparable 95% upper limits from ACT are 1.95 and 2.93. However, although the white-noise shape has been vetoed by marginalization, there could be a residual clustered source contribution from dusty galaxies pushing the derived A_SZ high. To the extent that SPT_DSFG vetoes this DSFG clustering as well as their Poisson contribution, that A_SZ would be a better indicator. It shifts from 0.43 ± 0.21 for KS and 0.50 ± 0.25 for S10 up to 0.75 ± 0.36 for the feedback template, an increase of 50%. The large difference between the 150 GHz and source-subtracted templates, even after marginalizing over a Poisson term, may suggest that the power in the correlated source component is similar to the SZ power, emphasizing the work necessary for a correct treatment.
Any non-zero kSZ contribution will take some of the amplitude from A_SZ, leaving even smaller A_tSZ values; columns 3, 5, and 7 of the table give estimates of this diminution. The kSZ power spectra that we have computed are broadly similar in shape to the tSZ power, with, however, sufficiently significant differences to allow shape discrimination in addition to the frequency separability, as Fig. 4 shows. At 150 GHz and an ℓ = 3000 pivot, we find the kSZ power is ∼29%, ∼29%, and ∼27% of the tSZ power for the shock heating, radiative cooling, and feedback simulations, respectively. We normalize the kSZ to the tSZ at this pivot of ℓ = 3000, since it has most of the constraining power in the CosmoMC chains for the ACT and SPT measurements and results in the smallest error bars: on larger scales the errors are increased by the contribution from primary anisotropies, while smaller scales are dominated by the instrumental and galaxy-source shot noise.
We used exactly the same procedure to obtain the kSZ spectrum as for the tSZ spectrum. The temperature decrement due to the kSZ effect is ∆T/T = ∫ σ_T n_e (υ_r/c) dl, where υ_r is the radial peculiar velocity of the gas relative to the observer. We constructed 12 translate-rotate kSZ maps for each of our 10 separate hydrodynamical simulations and for each of the 41 redshift bins back to z = 10 (rather than z = 5 as for tSZ), computing the average and variance of all of these. Since we use simulations with side length L = 165 h⁻¹ Mpc for our 256³ cases, with fundamental wavenumber (26 h⁻¹ Mpc)⁻¹, our spectra are missing a bit of power on the largest scales (affecting low ℓ), since we do not sample well the long-wavelength tail of the velocity power spectrum in spite of the number of runs done.

Table 1: The mean and standard deviation of the thermal SZ power spectrum template amplitude A_tSZ and of the total SZ amplitude, including our computed kSZ contribution. The numbers assume the kSZ template is perfectly degenerate in shape with the tSZ one. A_SZ = A_tSZ + A_kSZ at 150 GHz, with the relative enhancement in our simulations given by A_kSZ/A_tSZ = 0.29, 0.29, 0.27 for the shock heating, radiative cooling, and feedback simulations, respectively. We have used the ACT team's 148 GHz power spectrum, the SPT team's 153 GHz spectrum, and the SPT DSFG-subtracted (SPT_DSFG) spectrum, along with WMAP7. The amplitude of the SZ power is normalized to our fiducial σ_8 = 0.8 cosmology. A rough guide to the σ_8 tension is obtained in the lower rows, using σ_8,SZ ∝ A_SZ^{1/7} (Ω_b h)^{−2/7}, with exponents determined by B0205 and KS. Since kSZ varies more slowly with σ_8 than tSZ, the numbers are just indicative.
We have included the kSZ template by ignoring the relatively small shape difference about the pivot point between the kSZ and tSZ power spectra; i.e., we assume the kSZ shape to be perfectly degenerate with the tSZ one, C_ℓ,kSZ ∝ C_ℓ,tSZ, as the SPT team did. Thus we only need the ratios A_kSZ/A_tSZ given above for the 150 GHz cases and the further x factors for the mixed-frequency DSFG case. For the ratios we use our translate-rotate values of 0.29, 0.29, and 0.27 from our simulations, 0.276 for S10, and a rough estimate of 0.25 for KS. Apart from ignoring the shape difference, we have also ignored kSZ from patchy re-ionization at high redshift, although it can have an amplitude competitive with the late-time, fully ionized gas motions with respect to the CMB rest frame that we are modelling (Iliev et al. 2007, 2008). In presenting the results from our analyses of the MCMC chains, we simply subtract A_kSZ from A_SZ. The A_tSZ values in Table 1 that we derive from these assumptions are all on the low side of unity for DSFG, with KS and S10 being more than 2.5σ low, whereas the feedback template is only about 1σ low (and 1σ high for 153 GHz alone). We leave a more complete implementation of the kSZ spectra to future work.
The means and errors on A_SZ provide the cleanest way of presenting the tension, or lack thereof, of these SZ models with the primary CMB data, which indicate σ_8 ≈ 0.8. However, it has been conventional to translate these numbers into a σ_8,SZ using the way A_SZ scales with cosmic parameters, roughly as A_SZ ∝ σ_8⁷ (Ω_b h)², as given by B0205 and KS. The lower rows in Table 1 show σ_8,SZ using this scaling. Although the scaling applies to the tSZ component only, with the kSZ power being less sensitive to σ_8, we also quote results for the kSZ-corrected cases. Ideally one should use the data to determine the cosmic parameters, which uniquely and fully determine the primary spectrum, the A_tSZ and A_kSZ, and the tSZ and kSZ shape modifications as the parameters vary. This slaved treatment enforcing σ_8,SZ = σ_8 has σ_8's value being driven by WMAP7 and other primary CMB data rather than by the SZ information.
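As an illustration of this conventional translation, the sketch below applies σ_8,SZ = 0.8 A_SZ^{1/7} (the quoted scaling at fixed Ω_b h) to the SPT_DSFG amplitudes quoted in the text; the resulting values are illustrative, not the table's entries.

```python
SIGMA8_FID = 0.8   # fiducial sigma_8 of the templates

def sigma8_sz(A_SZ, fid=SIGMA8_FID):
    """sigma_8,SZ from A_SZ ∝ sigma_8^7 at fixed Omega_b * h."""
    return fid * A_SZ ** (1.0 / 7.0)

for name, A in [("KS", 0.43), ("S10", 0.50), ("feedback", 0.75)]:
    print(f"{name:9s} A_SZ = {A:.2f} -> sigma_8,SZ = {sigma8_sz(A):.3f}")
```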
CONCLUSIONS AND OUTLOOK
Without hydrodynamical simulations in a cosmological framework, similar to the ones presented in this paper, it is hard to come up with a consistent model of the gas distribution in clusters and in the infall regions, both of which contribute significantly to the SZ power spectrum. In this paper, we identify three main points that a future semi-analytic model of such a pressure distribution has to provide.
(1) In order to arrive at a consistent gas distribution that matches not only the integrated stellar mass fraction but also the X-ray-derived pressure profiles within R_500, we need self-regulating AGN-type feedback. We emphasize that we tuned our parameters to match a previous single-cluster model that successfully suppressed the over-cooling by means of AGN feedback. The excellent agreement with current data was a pleasant byproduct: our simulated pressure profiles agree with recently obtained observational ones constructed from X-ray data; the scaling relations between the cluster mass and X-ray-based Compton-Y (Arnaud et al. 2009) also agree; as do the integrated stellar and gas mass fractions (Gonzalez et al. 2007; Afshordi et al. 2007).
(2) The amount of non-gravitational energy injection into proto-clusters and groups by AGN and starburst galaxies at intermediate-to-high redshifts z ≳ 0.8 is poorly understood. Other observables are needed to constrain the physics and to answer this question, which seems essential for understanding the resulting gas profiles. Our simulations suggest that AGN-type feedback lowers the central pressure values as a hydrodynamic response of the gas distribution to the non-gravitational feedback of energy. This effect inhibits gas from falling into the core regions, which causes a flatter and more extended pressure profile and a noticeably reduced power in the SZ power spectrum at small angular scales, ℓ ≳ 2000.
(3) For the SZ flux to converge, an integration of the pressure profile out to 4R_200 is necessary; half of the SZ flux is contributed by regions outside R_200. To compute a reliable SZ power spectrum, it is essential to precisely characterize the state of the gas in these infall regions. In particular, we find that: (i) the pressure support from kinetic energy strongly increases as a function of radius, reaching on average equipartition with the thermal energy at ∼2R_200 in our AGN model, with the exact dependence on cluster mass to be determined in future work; (ii) the effective adiabatic index Γ = d ln P/d ln ρ ∼ 1.2 in the interior, but turns up towards Γ ∼ 5/3 beyond the virial radius; and (iii) the inclusion of cluster asphericity at large radii may also become important.
Hence a successful semi-analytic model of the spherical cluster pressure, if that is indeed a viable goal, at the least needs careful calibration against numerical simulations which accurately treat all of these effects. The variance of the average profiles also encodes important information that is manifested in the power spectrum. Our studies also show that simplified analytic models that employ hydrostatic gas models with a constant Γ necessarily overpredict the SZ power on large scales, by up to a factor of two, and predict an inconsistent shape for the SZ power spectrum. The alternative that we explore in a subsequent paper is to use stacked, scaled, simulated clusters, rotated to their principal axes, to provide the pressure form factors for the semi-analytic approach.
The tSZ power spectrum of our 512³ simulation agrees well with the average of our ten 256³ simulations. A large number of simulations is needed to properly sample the high-mass end of the cluster mass function and hence accurately deal with sample (cosmic) variance. Alternatively, larger cosmological volumes can compensate, since they contain enough statistics on the large-scale modes that are responsible in part for forming the highest-mass clusters, which are also the rarest events. This, however, is quite challenging, as we require the same (high) resolution to accurately follow the physics in the cluster cores, which is needed to obtain profiles that match current X-ray data. Our 256³ simulations do not quite sample large enough scales to provide a fully converged kSZ power spectrum at low ℓ, since we miss the long-wavelength tail of the velocity power spectrum. We have also ignored the patchy re-ionization kSZ, which could be a significant contributor, up to 50% of the total kSZ (e.g., Iliev et al. 2007, 2008). We have found the ℓ < 2000 multipole range to be relatively insensitive to cooling and feedback, at least for the range constrained by the X-ray data. We did find that the higher multipole range (ℓ ∼ 2000–10000) probed by the high-resolution ACT and SPT CMB telescopes is sensitive to the feedback prescription; hence the high-ℓ SZ power spectrum can be used to constrain the theory of intracluster gas, in particular at the highly uncertain redshifts z > 0.8. In addition to the SZ power spectrum probe, our simulations can be used to address the cosmological significance of cluster counts as derived from the SZ effect. Counts provide complementary constraints on parameters that help to break some degeneracies present in the power spectrum method. By employing inhomogeneous, localized, and self-regulated feedback we are not only able to match recent X-ray reconstructions of cluster core regions, but also to decrease the tension between σ_8 estimated from SZ power and σ_8 from other cosmological probes. However, only a detailed confrontation of simulations exploring the vast terrain of feedback options with the rapidly improving high-resolution observations of cluster interiors can move the theory of cluster gas physics and its use for precision cosmology forward.
We thank Norm Murray, Volker Springel, Hy Trac, Jerry Ostriker, Gil Holder, Niayesh Afshordi, and Daisuke Nagai for useful discussions. Research in Canada is supported by NSERC and CIFAR. Simulations were run on the SciNet and CITA Sunnyvale HPC clusters.
Exosomes derived from mesenchymal stem cells enhance radiotherapy-induced cell death in tumor and metastatic tumor foci
Background We have recently shown that radiotherapy may not only be a successful local and regional treatment but, when combined with MSCs, may also be a novel systemic cancer therapy. This study aimed to investigate the role of exosomes derived from irradiated MSCs in the delay of tumor growth and metastasis after treatment with MSC + radiotherapy (RT). Methods We measured tumor growth and metastasis formation of subcutaneous human melanoma A375 xenografts in NOD/SCID-gamma mice, and the response of tumors to treatment with radiotherapy (2 Gy), mesenchymal stem cells (MSC), mesenchymal stem cells plus radiotherapy, or no treatment. Using proteomic analysis, we compared the cargo of exosomes released by MSCs treated with 2 Gy with the cargo of exosomes released by untreated MSCs. Results The tumor cell loss rates found after treatment with the combination of MSC and RT and with RT alone were 44.4% and 12.1%, respectively. Concomitant and adjuvant use of RT and MSC increased mouse survival time by 22.5% with respect to the group treated with RT alone, and by 45.3% with respect to the control group. Moreover, the number of metastatic foci found in the internal organs of the mice treated with MSC + RT was 60% lower than in the group treated with RT alone. We reasoned that the exosomes secreted by the MSCs could be implicated in tumor growth delay and metastasis control after treatment. Conclusions Our results show that exosomes derived from MSCs, combined with radiotherapy, are determinant in the enhancement of radiation effects observed in the control of metastatic spread of melanoma cells, and suggest that exosome-derived factors could be involved in the bystander and abscopal effects found after treatment of the tumors with RT plus MSC. Radiotherapy itself may not be systemic, although it might contribute to a systemic effect when used in combination with mesenchymal stem cells, owing to the ability of irradiated MSC-derived exosomes to increase the control of tumor growth and metastasis. Electronic supplementary material The online version of this article (10.1186/s12943-018-0867-0) contains supplementary material, which is available to authorized users.
Introduction
Radiotherapy is a critical and inseparable component of comprehensive cancer treatment and care [1]. It is estimated that about half of cancer patients would benefit from radiotherapy for treatment of localized disease, local control, and palliation [2]. The success of RT in eradicating tumors depends on the total radiation dose being delivered accurately [3]. However, there are limits to the RT dose that can be given safely, imposed by the tolerance of the normal tissues surrounding the tumor [4,5], and it is clear that the high intrinsic sensitivity of normal tissues to ionizing radiation often precludes the application of curative radiation doses [6,7].
Cell membranes are intimately involved in the biochemical events that define cancers and, in particular, in cancer metastasis [8]. In addition, the establishment of metastases requires a complex interplay between malignant cells, normal cells, stroma, mesenchymal cells, and extracellular matrix in the new microenvironments, to facilitate invasion of the extracellular matrix and tissue stroma and to evade the defenses of the host [8–10].
Mesenchymal stem cells (MSCs) are found ubiquitously in many tissues and are not restricted to those of mesodermal origin, such as bone marrow, adipose, muscle, and bone [11]. New MSC-based therapies could potentially treat a wide range of conditions, such as cancer and inflammatory and degenerative disorders, that have historically challenged patients and clinicians [12]. Although MSCs are considered a useful tool for cancer therapy in various studies [13], more research is necessary to understand their tumor-promoting and tumor-suppressing potentials and to circumvent donor variations [13,14].
The ability of MSCs to accumulate at tumor sites makes them extremely attractive for directed cancer therapy; moreover, it has been described that the tumor-tropism of MSCs increases with radiotherapy [15]. MSCs are recruited by tumors from both nearby and distant locations.
Cells can secrete 'molecular machinery' through several types of vesicular carriers that are composed of both membrane and cytosolic constituents. Cell-secreted exosomes (30–100 nm extracellular vesicles) play a major role in intercellular communication owing to their ability to transfer proteins and nucleic acids from one cell to another [16]. Depending on the originating cell type and cargo, exosomes may have either immunosuppressive or immuno-stimulatory effects, which have potential applications as immunotherapies for cancer and autoimmune diseases [17]. In addition, exosomes might also have tumor-promoting or tumor-suppressing activities. Very recently, Hoshino and co-workers [18] demonstrated that tumor-cell-derived exosomes prepare a favorable microenvironment at future metastatic sites and mediate non-random patterns of metastasis. Emerging evidence shows that exosomes are key mediators of cancer-host crosstalk and are involved in tumor initiation, growth, invasion, and metastasis [8–10]. Tumor-secreted factors can also increase metastasis by inducing vascular leakiness, promoting the recruitment of pro-angiogenic immune cells, and influencing organotropism, and it has been shown that tumor-derived exosomes taken up by organ-specific cells prepare the pre-metastatic niche and may facilitate organ-specific tumor metastatic behavior [18,19]. It has also been described that thorax irradiation could facilitate the spread of surviving tumor cells, and thus tumor recurrence, under certain conditions [20], and that therapy with MSCs protects the lungs from radiation-induced injury and reduces the risk of lung metastasis [21].
Developments in the understanding of tumor response, and in ways to modify it by combining RT with pharmaceutical agents that abrogate toxicity, represent an exciting area of research and development that offers the potential to improve the therapeutic ratio [3]. We have recently shown that the combination of MSC cell therapy plus radiotherapy in melanoma tumor xenografts implanted in NOD/SCID-gamma mice significantly reduced the size of the established tumors, both in the primary, directly irradiated tumor and in the distant, non-irradiated tumor [22].
Taking into account these antecedents and our previous studies [22,23], in the current study we aimed to elucidate the mechanism by which mesenchymal cells counteract the pro-tumor and pre-metastatic actions of tumor cells through isolation and identification of key components in exosomes derived from irradiated MSCs. "Radiotherapy may not only be a successful local and regional treatment but, when combined with MSCs, may also be a novel systemic cancer therapy".
Cell lines and culture
Umbilical-cord stromal stem cells (MSCs) were prepared and cultured as previously described [24,25]. Tumor cell lines A375, G361, and MCF7 were cultured as previously described [23,26]. All the cells were kept in a humidified incubator with 5% CO₂ at 37°C. The FBS used to prepare conditioned medium was depleted of bovine exosomes as described elsewhere [27], by ultracentrifugation of 50% (v/v) FBS diluted in DMEM at 100,000 × g for 16 h at 4°C. All the cell lines were routinely tested for mycoplasma following the manufacturer's instructions and were found to be negative (e-Myco™ plus Mycoplasma PCR Detection Kit, Intron Biotechnology, Korea).
Xenografts of A375, G361 and MCF7 cell lines
We implanted 1 × 10⁶ cells from the human cancer cell line G361, A375 or MCF7 into 7-9-week-old NOD/SCID-gamma (NSG) mice, following the same procedure used in our previous study [22]. Four groups of eight mice were treated with radiotherapy, MSC therapy, MSC therapy before radiotherapy, or left untreated (control). When necessary, mice were anesthetized with isoflurane or ketamine/medetomidine (41 mg and 0.5 mg per kg of body weight, respectively), with reversal by atipamezole (1.2 mg/kg body weight) to minimize anesthesia recovery time. The total treatment duration was at least four weeks. After the final dose, we followed tumor size and mouse weight and welfare for an additional 6-10 days before ending the experiments.
Mouse groups to study the A375 spontaneous metastatic process
Radiotherapy group
One group (8 mice) with a tumor on each hind leg was anesthetized with ketamine/medetomidine and only one of the tumors was treated with a dose of 2 Gy. Ionizing radiation was delivered by X-Ray TUBE (YXLON, model Y, Tu 320-D03) as described previously [22]. The treatment was repeated once-a-week for a total of two weeks.
MSC therapy groups
Two groups (8 mice each) with tumors larger than 60 mm³ were treated with an intraperitoneal administration of 10⁶ MSCs once a week for 2 successive weeks. The day after each cellular treatment, one of the groups (8 mice) was randomly selected to have one of their tumors irradiated. The other group was monitored and treated with repeated weekly MSC injections for 2 weeks.
Control group
One group (8 mice) with tumors on each leg was handled in exactly the same way as the irradiated and MSC injected mice, although the group did not receive either radiation or MSC therapy.
Biodistribution of MSCs on tumor-bearing mice
We labelled MSCs with BrdU or with luciferase to follow their biodistribution when injected intratumorally or intraperitoneally. To label MSCs with BrdU, we treated exponentially growing cells with 10 μM BrdU for 24 h before use. Labelling the injected MSCs with BrdU allowed us to identify them later in formalin-fixed, paraffin-embedded sections of the tumors 24 h after the injection.
Tumor growth measures and calculations
We monitored tumor sizes every 2-3 days and measured two perpendicular diameters of each tumor to calculate tumor volume. Tumor growth as a function of time was modeled as exponential growth; under the conditions of the experiments, the logarithm of tumor volume increases linearly with time. For more details see our previous paper [22].
Using the individual tumor growth kinetics equation fitted for each tumor, we calculated the time necessary for a tumor to reach a volume of 2.00 ml (time to tumor growth), similarly to the concept previously described [28,29]; the values corresponding to each group allow us to assess treatment efficiency in terms of the increase in survival time in each therapeutic group compared with the control group. Furthermore, from the fit of the experimental tumor growth data to an exponential equation, we obtained the slope and, from it, the doubling time (TD). The extra sum-of-squares F-test for comparing fits of different curves was performed [22] using GraphPad software.
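As an illustrative sketch of this fitting procedure (the published analysis was performed in GraphPad; the volume measurements below are hypothetical), the growth rate, doubling time and time-to-tumor-growth can be obtained by linear regression on log-transformed volumes:

```python
import numpy as np

def fit_exponential_growth(days, volumes_ml, target_ml=2.0):
    """Fit V(t) = V0 * exp(k*t) by linear regression on log(volume).

    Returns the growth rate k (per day), the doubling time TD, and
    the time needed to reach the target volume (T-t-G)."""
    days = np.asarray(days, dtype=float)
    log_v = np.log(np.asarray(volumes_ml, dtype=float))
    k, log_v0 = np.polyfit(days, log_v, 1)        # slope, intercept
    td = np.log(2.0) / k                          # doubling time
    t_to_target = (np.log(target_ml) - log_v0) / k
    return k, td, t_to_target

# Hypothetical volume measurements (ml) from one tumor
k, td, ttg = fit_exponential_growth([0, 3, 6, 9, 12],
                                    [0.10, 0.16, 0.25, 0.41, 0.66])
print(f"k = {k:.3f}/day, TD = {td:.1f} days, T-t-G = {ttg:.1f} days")
```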
Histopathological and immuno-histochemical studies
At the end of the experiments, we recovered the xenografts from each study group, together with the complete thorax and the abdominal and pelvic organs, and fixed them in 10% buffered formalin for 48 h. Paraffin-embedded 4 μm sections were dewaxed, hydrated, and stained with hematoxylin-eosin. We determined the mitotic index, the necrotic areas and the apoptotic cells observed outside the necrotic fields, and a complete, protocolized macroscopic and microscopic study of the pelvic, abdominal and thoracic organs was performed to assess possible metastasis. We studied one histological section each of heart, mediastinum, spleen and pancreas; a longitudinal section of the kidneys; the genital tract; a segment of the large intestine; any lymph nodes found; all the lung lobules; and five longitudinal liver sections. Groups of more than 10 neoplastic cells separated by interposed healthy parenchyma were counted as distinct metastatic foci. For further details on exosome purification, characterization and analysis, proteomic analysis, and statistical analysis, see Additional file 1.
Results
We have previously shown that MSCs increase their tumor-suppressor activity when activated with radiotherapy. In the current study, we asked whether this anti-tumor action could also be relevant in decreasing metastatic spread. To assess this effect, we implanted three different tumor cell lines, G361, A375 and MCF7, in both flanks of NOD/SCID mice to produce bilateral xenografts. Our results demonstrate that the A375 human skin-melanoma cell line, when implanted as xenografts in NOD/SCID-gamma mice, grows faster than G361 and MCF7 xenografts; moreover, A375 xenografts are able to spread from their initial location to produce metastases in the internal organs of the mice, whereas, in our model, the G361 and MCF7 cell lines lack this capacity (Additional file 1: Table S1). Sixty of the 97 mice bearing A375 xenografts showed metastatic spread (Fig. 1a). Of these, 59 of 60 (98.3%) showed multiple lung metastases, the mean number of metastatic foci in the lungs being 14.2 ± 1.8. The organs next most frequently invaded by tumor cells were the liver (33/60; 55.0%), the kidney (20/60; 33.3%) and the lymph nodes (4/60; 6.7%). The data suggest that the lung is the initial target of metastatic dissemination and that, after this step and more slowly, tumor cells may reach the liver and/or kidney. Thus, for the rest of the study we used the A375 cell line as a model to evaluate the effect of radiotherapy, MSC therapy and MSC plus radiotherapy on the tumors (irradiated and bystander) and on the metastatic spread.
Biodistribution of MSC injected or infused in mice
To study the movement of cells inside the A375 xenografts, we injected BrdU-labeled MSCs (10⁶ cells) intratumorally and performed the histological study 24 h post-injection (Fig. 2a). The histological study shows that the injected (brown) MSCs were present inside the tumor tissue and localized along longitudinal trajectories whose tracing could be associated with the existence of newly formed vessels within the xenografts. In fact, the shape of the MSCs resembles that characteristic of normal pericytes, as previously described: once inside the tumor, MSCs are incorporated into its stroma and may remain, as pericytes, around the walls of the vessels that nourish the neoplastic process [30].
We also studied the biodistribution of MSCs genetically modified to express the luciferase gene in tumor-bearing NSG mice. Figure 2b shows images (IVIS Lumina II) corresponding to mice with A375 tumor xenografts placed on the upper part of both hind legs. We treated the tumor on the right flank of the mice with RT (2 Gy). Right after the MSC injection, the luminescence occupied the abdominal region. At day 1, the pattern of cell distribution was different and suggested that the highest cell density was found in the central region of the mouse body and in its pulmonary and circulatory systems. At day 2, apart from the central focus, there was another region of intense bioluminescence that seemed to correspond to the border of the irradiated tumor. This pattern was maintained 5 days after the injection of cells.
MSC combined with RT reduces the number of observed metastases
To further evaluate the anti-metastatic potential of MSCs combined with RT, we carried out experiments following tumor-volume growth kinetics over a time course of only 14 days. Reducing the duration of the experiment reduced the probability of massive metastatic spread of tumor cells in almost all mice included in the study, regardless of treatment, and allowed us to assess the differences among the groups. All the growth curves obtained in the control, MSC, RT and RT + MSC groups are plotted in Fig. 3.
We made one key assumption in the model: the tumor growth rate is constant in the interval between the start of data acquisition and the end of the experiment, and treated tumors grow more slowly than control tumors because their doubling times are longer. Comparing the tumor response curves in mice treated with RT, MSC, MSC + RT or left untreated (Fig. 4a), we observed an improved tumor response in the group treated with MSC + RT (green curve) compared to the groups treated with RT (red curve, P < 0.0001) or MSCs (blue curve, P < 0.0001) alone. We have shown that this mathematical model properly describes the growth of A375 tumors for more than 30 days (Additional file 2: Figure S1).

Fig. 1 (a) Distribution of A375 xenograft micrometastases in the internal organs of tumor-bearing NOD/SCID-gamma mice. Results are expressed as mean ± standard error of the mean. (b) Representative photomicrographs of H&E-stained sections of lungs, liver, kidney and an intravascular micrometastasis (black arrow).
Using these values, we calculated the cell-loss rate (C_L) attributable to each treatment [22,31]. The C_L for RT + MSC was 0.44 and for RT alone 0.12. According to this concept, radiotherapy inhibited tumor growth with a cell-loss rate of 12.0% per day compared to tumor growth in the control group. This effect was enhanced by the addition of MSCs to the radiotherapy, with a cell-loss rate of 44.4% per day, leading to a mesenchymal enhancement ratio of MSC-ER = 3.7, whilst MSCs alone inhibited tumor growth with a cell-loss rate of 14.5% per day. Assuming that the effects of the two treatments (RT and MSC) are independent [32], we calculated the expected value (E) for the surviving fraction after treatment with RT + MSC as E = 0.75. On the other hand, the observed value for the surviving fraction after RT + MSC was O = 0.44. Using both values, we calculated the ratio O/E = 0.59, which is a strong indicator of a synergistic effect [32] between RT and MSC when applied together for tumor treatment in this model. These results demonstrate the potentiation of the bystander effect by MSCs used together with radiotherapy.
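For transparency, a minimal sketch of this synergy calculation follows, using the cell-loss rates reported above; the observed surviving fraction O = 0.44 is taken exactly as reported in the text:

```python
# Cell-loss rates per day, expressed as fractions (values from the text).
cl_rt, cl_msc, cl_combo = 0.12, 0.145, 0.444

# Mesenchymal enhancement ratio: gain in cell loss from adding MSCs to RT.
msc_er = cl_combo / cl_rt                    # ~3.7

# Expected surviving fraction if RT and MSC acted independently [32].
e_expected = (1 - cl_rt) * (1 - cl_msc)      # ~0.75

# Observed surviving fraction after RT + MSC, as reported in the text.
o_observed = 0.44

print(f"MSC-ER = {msc_er:.1f}, O/E = {o_observed / e_expected:.2f}")
# O/E < 1 indicates a synergistic (greater-than-independent) effect.
```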
To approximate the survival of the mice in each group, we calculated the time-to-tumor-growth (T-t-G), a theoretical end-point time for tumor growth [28], defined here as the time necessary for each tumor to reach a volume of 2.00 ml. The differences among the groups in the time necessary for tumors to reach 2.00 ml are statistically significant (Fig. 4b, P < 0.0001); interestingly, the combined RT + MSC treatment produces a clear enhancement of radiotherapy efficacy measured as an increase in this parameter.
Our results demonstrate that the combined treatment with RT + MSC increased the survival time of the mice in this group by 5 days (22%) compared to the group treated exclusively with RT, and by more than 11 days (60%) compared to the control group. Of interest is the bystander effect of radiotherapy on the contralateral tumor, which by itself led to an inhibition of tumor growth corresponding to an increase of 1 day in T-t-G. Tumors from the non-irradiated flank, thus exposed to the bystander effect after RT + MSC treatment, showed a further inhibition of tumor growth, with an increase in T-t-G of 3.6 days compared to tumor growth under control conditions. Next, we analyzed the number of metastatic foci present in each mouse in the different groups. Metastases were microscopically identified and counted to calculate their frequency. Figure 5a illustrates the difference in size of A375 xenografts between the control and MSC + RT groups at the end of the experiment. To further quantify the inhibition of tumor foci by MSCs, the number of metastases was pooled for each group.

Exosomes secreted from MSCs are quantitatively, functionally and qualitatively different from the exosomes secreted from MSCs*

Exosomes (Exo) and microvesicles (MV) secreted by mesenchymal cells, from both inactivated MSCs and activated MSCs*, were quantified by measuring the amount of protein present in each of the fractions obtained by the sequential centrifugation method used to separate MV and Exo. Typical images of Exo and MV from MSCs, obtained by transmission electron microscopy, are shown in Fig. 6a. The protein concentration values in the paired experiment designed for this purpose suggest that treating MSCs with a 2 Gy dose of low-LET ionizing radiation activates the irradiated cells, increasing the secretion of proteins into the culture medium by the stimulated cells (Fig. 6b): MSC* = 0.251 ± 0.002 μg/ml vs. MSC = 0.214 ± 0.004 μg/ml, P < 0.0001. The differences between protein levels in MV and Exo from MSC and MSC* are also statistically significant (Fig. 6c): Exo: MSC = 0.091 ± 0.002 μg/ml vs. MSC* = 0.140 ± 0.001 μg/ml, P < 0.0001; and MV: MSC = 0.123 ± 0.003 μg/ml vs. MSC* = 0.111 ± 0.001 μg/ml, P = 0.0002. Our data demonstrate that exosome secretion by MSC* increased 1.5-fold with respect to MSC.
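As a minimal illustration of how such group comparisons can be tested (the replicate values below are hypothetical; the paper reports only means ± SEM), a Welch's t-test on the exosome protein concentrations might look like:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate protein concentrations (μg/ml) of the exosome
# fraction from unirradiated (MSC) and irradiated (MSC*) cells.
exo_msc      = np.array([0.089, 0.091, 0.093, 0.090, 0.092])
exo_msc_star = np.array([0.139, 0.141, 0.140, 0.138, 0.142])

t, p = stats.ttest_ind(exo_msc, exo_msc_star, equal_var=False)  # Welch's test
fold = exo_msc_star.mean() / exo_msc.mean()
print(f"Welch t = {t:.2f}, p = {p:.2g}, fold change = {fold:.1f}")
```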
Exosomes and proteins secreted by MSCs might be involved in the antitumoral effects observed
Exosomes are pivotal in facilitating intercellular communication [33]. We wondered whether the exosomes produced by MSCs and MSC* can modulate tumor cell growth by affecting major cellular pathways leading to tumor cell death, whether the protein "cargo" contained in these exosomes [16] could mediate such an effect, and whether differences can be identified between the tumor-suppressor activity of exosomes obtained from MSCs and from activated MSC*. Figure 6d-f summarizes the results of a cell survival assay [34] adapted to measure the surviving fractions of G361 and A375 cells. We compared the survival fractions of tumor cells treated with MSC or MSC* conditioned medium (Fig. 6d-e) and then compared the effect of MSC* exosomes on A375 cells (Fig. 6f). The potency index is defined as the ratio between the estimated surviving fraction of MSC*-exosome-treated cells and that of MSC-exosome-treated cells.
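As a trivial sketch of that definition (the surviving fractions below are hypothetical):

```python
def potency_index(sf_msc_star_exo: float, sf_msc_exo: float) -> float:
    """Ratio of surviving fractions: MSC*-exosome-treated over
    MSC-exosome-treated cells; values < 1 indicate a stronger
    cytoreductive effect of exosomes from irradiated MSCs."""
    return sf_msc_star_exo / sf_msc_exo

# Hypothetical surviving fractions from the survival assay
print(potency_index(0.35, 0.70))  # -> 0.5
```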
MSC* exosomes reduce the survival of A375 cells (P < 0.0001), as the unfractionated conditioned medium of MSC* does. This indicates that the activation of MSCs with 2 Gy increases their tumor-suppressor effect. As we found a dramatic cytoreductive effect of MSC* exosomes on the tumor cells, we examined the protein content of these nanosized vesicles. The results of these experiments are shown in Fig. 7 and in Additional file 1.
Exosome function enrichment
To further characterize the functionality of the exosome content from MSC and MSC*, we used a bioinformatics tool aimed at identifying the signaling pathways involved in different key cellular processes. Significant biological process terms from REVIGO were studied in detail (Fig. 7). Fifteen common terms were obtained between the MSC and MSC* results, 20 terms were exclusively enriched in MSC and 41 in MSC* (p < 0.01, i.e., log10 p < −2), as detailed in Additional file 1. According to the uniqueness values, dispensability values and p-values, the GO terms common to MSC and MSC* are related to calcium-independent cell-matrix adhesion, vesicle-mediated transport, platelet degranulation and activation, and cardiovascular development (see details in Fig. 7). The terms enriched in MSC exosomes, however, are correlated with wound healing, coagulation, hemostasis and regulation of the immune response (displayed in Fig. 7). Interestingly, the MSC* analysis generated the most prominent and diverse terms in relation to the control of tumor growth, in particular negative regulation of response to stimulus, localization of cell, leukocyte cell-cell adhesion and positive regulation of cell death (shown in Additional file 1).
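The underlying test in such enrichment analyses is the standard hypergeometric calculation; a minimal sketch, with hypothetical counts for one GO term, follows:

```python
from scipy.stats import hypergeom

# Hypothetical numbers: of N annotated proteins in the background, K are
# assigned to the GO term; the exosome proteome contains n identified
# proteins, k of which carry the term.
N, K, n, k = 20000, 150, 300, 12

# P(X >= k): probability of observing at least k hits by chance.
p_enrich = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_enrich:.2e}")  # enriched if p < 0.01
```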
Discussion
In this paper, we present a set of preclinical therapeutic data combining RT with MSC therapy. We have demonstrated that the tumor cell loss induced by radiotherapy increases with the combination of RT and MSCs, reaching 51.4% per day compared with only 25.8% per day for RT alone, an MSC enhancement ratio of around 2 (Additional file 2: Figure S1). These values indicate that the combination of MSCs + RT produces a synergistic effect. Furthermore, we calculated the differences among groups in the time necessary to reach a tumor volume of 2.00 ml (Fig. 4b). Tumors treated with RT alone would need 24.4 days to reach this volume, and mice treated with the combination of MSC + RT would need 29.8 days. Our results demonstrate that the concomitant and adjuvant use of RT and MSCs could represent an increase in the survival time of the mice in this group of around 22% compared to the group treated exclusively with RT. Moreover, the number of metastatic foci found in the internal organs of the mice treated with MSC + RT was 60% lower than in the group treated with RT alone.
The paracrine effect of MSCs was first described almost two decades ago by Haynesworth and co-workers [35]. Extracellular vesicles such as exosomes are naturally released from MSCs and, in our model, might be responsible for the reduction in tumor cell survival in vitro. Understanding the fundamental biology underlying mesenchymal stem cell-tumor interactions has the potential to increase our knowledge of cancer initiation and progression, and also to lead to novel cancer therapeutics. Exosomes derived from mesenchymal stem cells seem to be key players in this respect. Due to their properties, MSCs may qualify as a therapeutic tool to treat radiation-induced tissue damage [36]. Numerous studies have shown that MSCs administered either intraperitoneally or intravenously efficiently home to tumors and metastases [37,38]. Exosomes secreted by MSCs have been shown to contain anti-apoptotic miRNAs, to promote epithelial and endothelial wound healing and angiogenesis, and to contain growth factor receptor mRNAs, known to promote wound healing and to protect the intestines from experimental necrotizing enterocolitis [39]. We have demonstrated in vitro that exosomes separated from the culture medium of MSCs are quantitatively, functionally and qualitatively different from the exosomes obtained from activated MSCs. When we analyzed the exosome "cargo" before and after activation with RT, we found important differences in the proteomic content of the samples.

Fig. 6 (a) Morphological characterization of the extracellular vesicles released by MSC and MSC*, precipitated by differential ultracentrifugation. (b) Total protein concentration of the extracellular vesicles released by MSC and MSC*. (c) Protein concentration of the microvesicles and exosomes from MSC and MSC*. MSC or MSC* unfractionated conditioned medium reduced the surviving fractions of (d) A375 and (e) G361 cells. (f) Comparison of unfractionated conditioned medium from MSC and MSC* and of its exosomes on the A375 cell line. MSC conditioned medium (blue points) was considered the control, as there were no statistical differences between MSC and growth-medium controls (data not shown). Differences between MSC and MSC* are statistically significant for conditioned medium (P < 0.05) and exosomes (green points, P < 0.0001).
Our results (Fig. 7 and Additional file 1) show that there are qualitative differences between the proteins contained in the exosomes obtained from MSCs and MSCs*.
According to the GO terms obtained through a hypergeometric analysis, we found differential enrichment of terms between MSC and MSC* exosomes in different biological processes, as well as in the number of pathways affected. Thus, whereas the numbers of highly significant common GO terms and MSC terms are in consonance, the MSC* results generated a large variety and number of altered pathways, demonstrating the profound metabolic alteration these exosomes have undergone.
Consequently, the results show that the common GO terms and the MSC terms are similar and related to exosome functions. Therefore, as shown in Additional file 1, the distribution of clusters is analogous, and the cluster representatives are associated with vesicle-mediated transport, coagulation (through platelet roles), developmental processes and immune response. On the contrary, as shown in Fig. 7 and Additional file 1, the MSC*-enriched terms are more dispersed, forming different and interconnected clusters with a complex biological background. Among the most representative clusters we highlight leukocyte cell-cell adhesion, cell localization, negative regulation of responses to stimulus, and cell death. Some of these proteins are key components of cell-cell or cell-matrix adhesion (Additional file 1), including annexins and integrins such as ANXA1, ANXA2, ITGB1, ITGA3, FN1, CTNNB1 and APOH, whose interplay may promote exosome and leukocyte adhesion to tumor cells to limit tumor growth. The presence of annexin is significant only in the exosomes released from MSCs*. The prototype member of this family, ANXA1, has been widely recognized as an anti-inflammatory mediator affecting migration and cellular responses of various cell types of the innate immune system [40]. Moreover, ANXA1 mRNA was strongly up-regulated following MSC irradiation (Additional file 2: Figure S2). Interestingly, some key biological aspects of ANXA1 (its potential as a tumor suppressor gene, its ability to modulate tumor cell apoptosis induced by ionizing radiation, and its radiotherapeutic efficacy) deserve future studies to fully elucidate its role in the therapeutic effect of exosomes derived from irradiated MSCs.
The therapeutic efficacy of transplanted MSCs actually seems to be independent of the physical proximity of the transplanted cells to the damaged tissue. The number of MSCs that engraft into injured tissues may not be sufficient to account for their robust overall protective effects. Exosomes secreted by MSCs have been shown to contain anti-apoptotic miRNAs, to promote epithelial and endothelial wound healing and angiogenesis, and to contain growth factor receptor mRNAs known to promote wound healing. We consider exosomes to be a vectorized signaling system: those released from MSCs appear to bind to specific membrane microdomains on tumor cells, widening the action of radiotherapy by stimulating tumor cell death, increasing the sensitivity of cells to radiation, and promoting systemic effects. This hypothesis provides a rationale for the therapeutic efficacy of MSCs and their secreted exosomes in a wide spectrum of diseases, and also rationalizes the additional use of MSC exosomes as an adjuvant to support and complement other therapeutic modalities [11].
Conclusions
Our results show that exosomes derived from irradiated MSCs may be a determinant factor in the enhancement of radiation effects leading to increased metastasis control. Radiotherapy itself may not be systemic, although it might contribute to a systemic effect when used in combination with mesenchymal stem cells.
Additional files
Additional file 1: Significant biological process terms from REVIGO (Reduce + Visualize Gene Ontology). (XLS 51 kb)

Additional file 2: Figure S1. (a) Tumor growth kinetics and response to radiotherapy administered twice a week, alone or in combination with simultaneous MSC* injection; the combination of radiotherapy and MSC* reduced the tumor growth rate more than radiotherapy alone did. (b) Calculated time to tumor growth (T-t-G) for each group; as a result of the reduced growth kinetics, tumors from the group receiving the combination of RT + MSC* would need more days to reach 2.0 ml. The notation MSC* + RT means that in vitro activated MSCs (2 Gy of low-LET (linear energy transfer) radiation) were administered intraperitoneally, and 2 h after the MSC* injection tumors were treated locally with radiotherapy (RT, 2 Gy); this combined treatment was repeated every 4 days for a total of 24 days. Figure S2. mRNA expression of TRAIL, DKK3 and ANXA1 by MSCs 24 and 48 h after receiving 2 Gy of radiation; the overexpression of TRAIL and DKK3 is consistent with our previous study [22], and the ANXA1 overexpression is consistent with the presence of the protein inside MSC* exosomes.
The effects of hypertension as an existing comorbidity on mortality rate in patients with COVID-19: a systematic review and meta-analysis.
Introduction: Coronavirus has spread throughout the world rapidly, and there is a growing need to identify host risk factors to identify those most at risk. There is a growing body of evidence suggesting that a close link exists between an increased risk of infection and an increased severity of lung injury and mortality in patients infected with COVID-19 who have existing hypertension. This is thought to be due to the possible involvement of the virus target receptor, ACE2, in the renin-angiotensin-aldosterone blood pressure management system. Objective: To investigate the association between hypertension as an existing comorbidity and mortality in hospitalized patients with confirmed coronavirus disease 2019 (COVID-19). Methods: A systematic literature search of several databases was performed to identify studies that comment on hypertension as an existing comorbidity and its effect on mortality in hospitalized patients with confirmed COVID-19 infection. The results of these studies were then pooled, and a meta-analysis was performed to assess the overall effect of hypertension as an existing comorbidity on the risk of mortality in hospitalized COVID-19-positive patients. Results: A total of 12,243 hospitalized patients were pooled from 19 studies. All studies demonstrated a higher fatality rate in hypertensive COVID-19 patients than in non-hypertensive patients. Meta-analysis of the pooled studies also demonstrated that hypertension was associated with increased mortality in hospitalized patients with confirmed COVID-19 infection (risk ratio (RR) 2.57 (95% confidence interval (CI) 2.10, 3.14), p < 0.001; I² = 74.98%). Conclusion: Hypertension is associated with a 157% increased risk of mortality in hospitalized COVID-19-positive patients. These results have not been adjusted for age, and a meta-regression with covariates including age is required to make these findings more conclusive.
Introduction
In early December 2019, the first cases of a pneumonia of unknown origin were reported in Wuhan, China. Whilst initially appearing to be a localised outbreak, centred around a seafood and wet animal wholesale market in Wuhan City, within weeks it had spread to over 200 different countries worldwide, and was declared a pandemic on 12th March [1]. As of the 19th June 2020, there were 8,577,196 coronavirus cases, and 456,269 reported deaths worldwide [2]. It has been established that the pathogen responsible for this disease is the SARS-CoV-2 virus [3], a member of the coronavirus family. The disease is now largely referred to as COVID-19. Whilst the origin of the virus remains to be identified, its symptoms have been well characterised, and these include; fever, cough, fatigue, sputum production, headache, haemoptysis, dyspnoea and lymphopenia [4]. The rapid spread of the virus and its high variability in symptoms and severity prompted rapid research into host risk factors. Several risk factors for poorer outcomes have been identified, including older age, male sex, existing comorbidities and obesity [5]. Of the existing comorbidities, hypertension and diabetes are most frequently present in COVID-19 sufferers [6]. The noticeable high prevalence of hypertension in COVID-19 infection, and the identification of the angiotensin-converting-enzyme 2 receptor (ACE2) as the viral target [7] has drawn significant interest to the involvement of hypertension in COVID-19 infection.
In the present study, we conducted a systematic review with meta-analysis with the aim of summarising all available primary research examining whether hypertension is a risk factor for increased mortality in COVID-19 patients.
Search strategy and Study Selection
We performed a literature search of studies published since the COVID-19 outbreak began (2019) until June 20, 2020, with no restrictions on country. We ran our searches in Ovid-Embase, Ovid-Medline, and medRxiv using the following search terms: (exp hypertension/) OR (high blood pressure.mp. or hypertension) OR (hypertensive.mp.) AND the COVID-19 search strategy suggested by NICE, see Figure S1. The Web of Science database was also searched using the following terms: Topic= (COVID-19* AND hypertension*). If the inclusion criteria were satisfied, the full text of the articles was retrieved and reviewed in its entirety.
Inclusion and Exclusion Criteria and Data Collection
The following inclusion criteria were considered when assessing the eligibility of the identified studies: empirical studies of hospitalised patients which include 10 or more participants that: 1) included patients with confirmed COVID-19 infection who also had arterial hypertension as an existing comorbidity prior to hospital admission at the time of COVID-19 diagnosis, 2) disclosed information on clinical outcomes defined as survival or in hospital mortality, and 3) compared clinical outcomes between hypertensive and non-hypertensive patients.
Assessment of Study Quality
To assess the quality of the studies included in the meta-analysis, we used the Newcastle-Ottawa Scale for cohort studies (Figure S2) and the NIH quality assessment tool for case series studies (Figure S3).
Statistical Analysis
We extracted the number of patients in each group (hypertensive vs non-hypertensive, died vs lived) and collated these in a table. We then calculated fatality rate (FR) as a percentage of patients that died out of the total number of patients within each group (hypertensive and non-hypertensive), and report these in our results section.
We used a random effects model to summarise our statistical synthesis and generate a forest plot; this model considers both within-study variance and between-study variance, the latter of which is likely to exist in our pooled sample. The risk ratio (RR), 95% confidence intervals (CI) and p-values for each study are reported, as well as an overall pooled effect size estimate.
Heterogeneity was assessed using the I² statistic, which estimates the percentage of variation in effect sizes that is due to heterogeneity between the studies; the higher the value of the I² statistic [8], the more heterogeneity is present. For the purposes of this systematic review and meta-analysis, we accepted an I² statistic of no more than 95%. We then performed a subgroup analysis according to study size, to determine whether heterogeneity remained the same when smaller and larger studies were separated.
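For illustration, the random-effects pooling described above can be reproduced with a short DerSimonian-Laird implementation; the 2×2 study counts below are hypothetical, not the studies analysed here:

```python
import numpy as np

def dersimonian_laird(deaths_h, n_h, deaths_nh, n_nh):
    """DerSimonian-Laird random-effects pooling of per-study risk
    ratios. Returns the pooled RR, its 95% CI, and I^2 (%)."""
    deaths_h, n_h = np.asarray(deaths_h, float), np.asarray(n_h, float)
    deaths_nh, n_nh = np.asarray(deaths_nh, float), np.asarray(n_nh, float)
    log_rr = np.log((deaths_h / n_h) / (deaths_nh / n_nh))
    var = 1/deaths_h - 1/n_h + 1/deaths_nh - 1/n_nh   # var of log(RR)
    w = 1 / var                                        # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)              # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (var + tau2)                            # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se), i2

# Hypothetical per-study counts: deaths and totals in the hypertensive
# and non-hypertensive groups of three studies.
rr, lo, hi, i2 = dersimonian_laird([30, 45, 12], [100, 150, 40],
                                   [25, 40, 10], [250, 400, 110])
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.1f}%")
```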
Study characteristics
The 19 included studies yielded a pooled total of 12,243 patients with confirmed COVID-19 infection. Of these, data on hypertension as an existing comorbidity were available for 12,243 patients, and data on survival/mortality were available for 12,218 patients. We compared mortality rates in patients with hypertension (n = 3,566) and patients without (n = 8,677). The study characteristics, including study design, hospital location, hospital admission dates and final date of follow-up, are shown in Table 1. Of the nineteen included studies, fourteen are from China, two are from Italy, two are from Iran, and one is from New York. The included studies are a mix of retrospective cohort studies and retrospective case series. All included studies define how COVID-19 was diagnosed (see Table 1) and clearly state the dates of admission for the patients included; the follow-up period is not defined in all included studies, but ranged from 0 days to 4 weeks.
Fatality rates in hypertensive vs non-hypertensive patients
Comparing fatality rates in COVID-19 patients between those that are hypertensive (n = 3,566) and non-hypertensive (n = 8,677), there is a much higher fatality rate in the hypertensive group (30.5%) than in the non-hypertensive group (9.9%), see Table 2. Seven studies [10,14,16,21,24,25,26,27] also report a statistically significant p-value of less than 0.01 when carrying out a between-group comparison, i.e., when comparing mortality in the hypertensive vs non-hypertensive group; only two studies report a non-significant p-value (p > 0.05) [12,15], see Table 2. It is important to note, however, that only the study by Guan WJ et al. was adjusted for age and smoking status; the remaining 18 studies were not adjusted for any covariates.
Overall estimation of effect size and subgrouping heterogeneity differences
This meta-analysis uses a random effects model and demonstrates that the presence of hypertension as an existing comorbidity is associated with a 157% increased risk of mortality in patients diagnosed with COVID-19 (risk ratio (RR) 2.57 (95% confidence interval (CI) 2.10, 3.14), p < 0.001; I² = 74.98%) (Figure 2). Fourteen of the nineteen included studies report a significant p-value of less than 0.05 and 95% confidence intervals that do not cross 1.
Overall heterogeneity in the random effects model was high (I² = 74.98%), suggesting that the effects of hypertension on mortality are not the same across all studies. Subgrouping the studies according to size, however, showed that heterogeneity differed between the smaller- and larger-study subgroups (Figure 2).
Age and gender distributions within hypertensive patients
Five studies [13,14,20,23,27] provided information on age within hypertensive and non-hypertensive COVID-19 patients; three reported a median value [14,20,23], and two reported a mean [13,27], see Table 3. Information on age was available for 3,595 patients. The average age of patients with hypertension (n = 1,251) in the three studies reporting a median value was 69, versus 62 in non-hypertensive patients (n = 2,344) (Table 3). The average age of COVID-19-positive patients with hypertension in the two studies reporting a mean value was 64, versus 50 in the non-hypertensive patients (Table 3). Age is therefore higher among the hypertensive patients, although only the study by Guan WJ et al. adjusted their results for age within hypertensive and non-hypertensive patients. This study reported a hazard ratio of 1.58 (95% CI 1.07-2.3, p = 0.002) when comparing mortality in hypertensive vs non-hypertensive patients, adjusted for age and smoking status.
Information on sex was available for 2,548 patients from four studies [13,20,23,27]. The male sex predominated in both the hypertensive and non-hypertensive patients, at 62.5% and 57.2%, respectively. Between-group analysis was, however, only available for two studies [23,27], and this demonstrated non-significant results for both studies (Table 4), suggesting that the sex differences between hypertensive and non-hypertensive patients may not be significant.
Summary of main findings
We present data that indicates a significant increase in mortality in COVID-19 patients with hypertension as an existing comorbidity; all the included studies demonstrate a higher fatality rate in the hypertensive group when compared to the non-hypertensive group, and almost all report a statistically significant between group comparison with a p-value of less than 0.05. The two studies that do not [12,15] are both very small studies, with only 201 and 102 patients each respectively. Bias within the study, or errors in data collection could therefore easily explain the disparity in results. An overall effect estimation also demonstrates a very significant overall contribution of hypertension to increased mortality in COVID-19 patients, with only five of the total nineteen included studies crossing the line of no difference and suggesting an insignificant result. Only one of these studies [17] is within the larger studies group; the remaining four studies [9,12,15,23] are all within the smaller studies group, suggesting that within study biases could be to blame for the difference in results. The smaller studies do however, demonstrate a much greater homogeneity in results when compared to larger studies, which demonstrate very high heterogeneity.
Those studies that do report on age and gender within COVID-19 patients indicate that hypertension is more commonly present in males and in older populations. Hypertension is known to be more common in males and the elderly, although sex differences in the prevalence of hypertension are reported to diminish after the age of 60 [28]. It is already known that elderly patients infected with COVID-19 are at increased risk of progression to a more severe form of the disease, and at increased risk of mortality, compared with younger individuals [29]. The results of this meta-analysis would suggest that the added effects of the presence of hypertension may increase this risk even further across all age groups, and this may be especially relevant to the elderly, who are already at increased risk. This meta-analysis, however, was not adjusted for age as a covariate, and therefore the effects seen may be a mere reflection of the increased risk of mortality in hypertensive patients [30] that exists irrespective of COVID-19 infection; it would be necessary to perform a meta-regression to separate these variables and prove that the association seen is in fact true. Similarly, recent studies have identified that the male sex is more likely to die from COVID-19 infection [31]; the results of this systematic review show that males predominate in both the hypertensive and non-hypertensive groups in the included studies that report on sex. We have already mentioned that the prevalence of hypertension is known to be higher in males; the high proportion of males in both the hypertensive and non-hypertensive COVID-19 patients, however, may suggest that these variables are indeed separate.
Only four studies report on COVID-19 sex differences, and none report differences in mortality that are adjusted for sex. Again, a meta-regression with sex as a covariate would be necessary to determine whether the effects of hypertension are still present in the absence of sex as a potential confounding variable.
The ACE2 receptor and hypertension in COVID-19
Whilst the association between COVID-19 pathogenesis and hypertension remains to be fully investigated, the proposed mechanisms interlinking this virus with hypertension could be closely related to the target receptor for the virus, ACE2, and its involvement in the renin-angiotensin-aldosterone system (RAAS). Internalisation of the COVID-19-ACE2 receptor complex would theoretically result in reduced expression of ACE2 on cell surfaces. This could then impede the cells' ability to degrade angiotensin II, a necessary step in ensuring correct blood pressure homeostasis.
Furthermore, hypertension is a known risk factor for increased mortality [32] and is known to cause myocardial injury [39]. Interestingly, whilst death from acute respiratory distress syndrome (ARDS) predominates in COVID-19 cases [33], there have been an increasing number of reports of myocardial injury also causing many COVID-19 deaths [34]. This could be due to the increased myocardial demand that normally accompanies a viral illness, although there are several reports that a failure to appropriately metabolise angiotensin II may compromise cardiac function [35,36], and we have already mentioned that the loss of ACE2 could interfere with this process. It remains to be established, then, whether the combined effects of existing hypertension increasing the risk of myocardial injury, together with the possible malfunctioning of angiotensin II metabolism in COVID-19 infection, are together causing an increased risk of myocardial injury, and therefore increased mortality, in hypertensive patients. Indeed, in the results of the study by Chen T et al., it is clear that patients with a previous history of hypertension dominated the group of patients that developed acute cardiac injury (61% of patients) [37], although the difference in those that ultimately died does not seem as significant (77% vs 76%); the association remains to be explored.
Another possible consideration could be the management that severe COVID-19 patients, who likely have a poor prognosis, require. Patients who develop hypoxemic respiratory failure in ARDS will usually require mechanical ventilation [38], a process which requires general anaesthesia and the loss of autonomous airway management.
Hypertension is a known risk factor for complications in the application of general anaesthesia, with hypertensive patients at an increased risk of greater swings in blood pressure than the normal population, followed by increased cardiovascular morbidity [39]. The increased risk that hypertension confers on mortality in COVID-19 patients could therefore be due to the requirements for mechanical ventilation that most patients suffering from a critical form of COVID-19 infection will have, irrespective of their hypertensive status.
Similarly, hypercoagulability has been reported in COVID-19 patients, with one study noting the development of in-hospital deep vein thrombosis (DVT) in 23% of patients despite anticoagulant prophylaxis [40]. This could possibly be explained by the long hospital stays COVID-19 patients face, although appropriate anticoagulant prophylaxis has been shown to reduce the incidence of in-hospital DVT to 2% [41]. Hypertension is known to confer a hypercoagulable state [42] through its well-known contributions to Virchow's triad. Several cases of pulmonary embolism in COVID-19 patients, in some cases bilateral, have been reported [43,44], and these embolisms have been identified as the cause of death in some of these patients [45]. Whether or not hypertension was present in these patients, however, remains unreported. It could be postulated that hypertension causes endothelial wall damage, thus contributing to a hypercoagulable state, contributing to pulmonary embolisms, and thereby increasing mortality in COVID-19 via this mechanism. Again, this link remains to be further explored.
Limitations and strengths
This meta-analysis does demonstrate a significantly increased risk of mortality in hypertensive COVID-19 patients admitted to hospital; however, only 12,243 patients were included. To date, there have been 8,577,196 reported cases of COVID-19 worldwide, so this sample is not largely representative, and a much larger sample of patients would be needed to make these findings conclusive. The reason for such a small sample, at least at present, is the lack of studies owing to the novel nature of this disease; as time progresses, we expect there to be many more studies eligible for inclusion in a similar systematic review in future. A large degree of heterogeneity between studies was also present, and this is likely to be due to confounding factors which were not controlled for in the studies. These factors can include smoking status, ethnicity and additional comorbidities, amongst others. There may also be significant discrepancies in how data were collected across studies, as the included studies fail to define how 'existing hypertension' was classified. The follow-up period is also very limited, again owing to the novel nature of this disease, and therefore the mortality figures may be skewed; a longer follow-up period would be preferred to allow a fuller picture of mortality to emerge. These results should therefore be interpreted with caution; a similar systematic review and meta-analysis, followed by a meta-regression, is needed to make these results more conclusive.
Conclusion
These findings demonstrate that hypertension is a significant risk factor for increased mortality in COVID-19 patients.
However, the potential reasons for this are also discussed, and they hopefully demonstrate that the relationship between hypertension and COVID-19 pathogenesis is unclear; how the renin-angiotensin-aldosterone system comes into play here, if it does, is also unclear. More studies from across the globe, which are well controlled and consider essential covariates, are needed to ensure these results can be generalized to all populations; a greater understanding of COVID-19 pathogenesis is also required to determine how hypertension confers an increased mortality in affected patients.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Patient and public involvement
The studies pooled in this meta-analysis all received ethical approval to waive the requirement for patient consent.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.

Figure S1. NICE recommended search strategy for COVID-19 (Ovid platform).
Figure S2. Newcastle-Ottawa quality assessment form for cohort studies, used to assess the quality of the included cohort studies.
Figure S3. National Heart, Lung, and Blood Institute Quality Assessment Tool for Case Series Studies, used to assess the quality of the included case-series studies.
Determination of Carbamate and Organophosphorus Pesticides in Vegetable Samples and the Efficiency of Gamma-Radiation in Their Removal
In the present study, residual pesticide levels were determined in eggplants (Solanum melongena) (n = 16) purchased from four different markets in Dhaka, Bangladesh. The carbamate and organophosphorus pesticide residue levels were determined by high performance liquid chromatography (HPLC), and the efficiency of gamma radiation in removing pesticides from three different types of vegetables was also studied. Many (50%) of the samples contained pesticides, and three samples had residue levels above the maximum residue levels determined by the World Health Organisation. Three carbamates (carbaryl, carbofuran, and pirimicarb) and six organophosphates (phenthoate, diazinon, parathion, dimethoate, phosphamidon, and pirimiphos-methyl) were detected in eggplant samples; the highest carbofuran level detected was 1.86 mg/kg, while phenthoate was detected at 0.311 mg/kg. Gamma radiation decreased pesticide levels proportionately with increasing radiation doses. Diazinon, chlorpyrifos, and phosphamidon were reduced by 40-48%, 35-43%, and 30-45%, respectively, at a radiation dose of 0.5 kGy. However, when the radiation dose was increased to 1.0 kGy, the levels of the pesticides were reduced by 85-90%, 80-91%, and 90-95%, respectively. In summary, our study revealed that pesticide residues are present at high levels in vegetable samples and that gamma radiation at 1.0 kGy can remove 80-95% of some pesticides.
Introduction
Pesticides such as insecticides, herbicides, fungicides, and acaricides are an abundant and diverse group of chemical compounds. Pesticides are widely applied during cultivation and postharvest storage to improve the quantities and quality of crops and food [1]. The use of pesticides is essential to control pests in horticultural crops and to ensure the production of adequate food supplies for the increasing world population, as well as to control insect-borne diseases. Pesticides are used to decrease crop loss both before and after harvest [2,3] and to prevent the destruction of edible crops by controlling agricultural pests or unwanted plants, thereby improving food production [4][5][6]. The increased use of pesticides has led to fears of adverse consequences not only for human health but also for the environment due to pollution.
The general population is exposed to pesticides on a daily basis via dietary ingestion of contaminated food products. Several studies have indicated that certain foods contain higher levels of pesticide residue, such as fruits, juices, and vegetables [7]. Vegetables containing residue concentrations above the prescribed maximum residue level (MRL) may pose a health hazard to unwary consumers [8][9][10][11].
Fresh fruits and vegetables are important components of a healthy diet, as they are a significant source of vitamins and minerals. Different types of vegetables are consumed daily by locals in Bangladesh. Among them, eggplant is one of the most common vegetables used in various dishes. Therefore, monitoring pesticide residues in vegetables, particularly in eggplant, may indicate the extent of pesticide contamination that may pose a possible risk to human health.
Several methods can be employed for the removal of various classes of pollutants from contaminated environmental samples [12]. Some of these methods are advanced oxidation processes (AOPs), including UV photolysis, photocatalysis (hydrogen peroxide and ozone), analysis using Fenton's reagent, and radiolysis of water [12][13][14][15][16]. In addition, radiation is one of the most powerful AOPs, in which irradiation with a beam of accelerated electrons or gamma-radiation can decompose various pollutants, such as pesticide residues.
Radiolytic degradation of pollutants has been employed in recent years for treatment of natural waters and wastes of different origins and has also been used for drinking water treatment [17][18][19][20]. Moreover, gamma-irradiation is becoming an important technology in the food industry, including food safety concerns such as the preservation of fruits and vegetables to reduce pathogenic microbes [21]. On the other hand, even though radiation of food has been investigated by many scientists, limited studies have focused on the effect of gamma-radiation for the removal of pesticide residues [22][23][24].
In recent years, carbamate and organophosphorus pesticides have become increasingly important due to their broad spectrum of activity, their relatively low persistence, and their generally low mammalian toxicity when compared to organochlorine pesticides [25][26][27]. Although carbamate and organophosphorus pesticides are extensively used by Bangladeshi farmers during the cultivation of crops and vegetables, there is very little information on the incidence of vegetable samples that have been contaminated with these pesticides and whether irradiation of foods, specifically vegetable samples, prior to their consumption is an efficient method for the removal of such contaminants.
Thus, the aim of the present study was to determine the residual levels of carbamate and organophosphorus pesticides in samples of a vegetable widely consumed in Bangladesh, namely eggplant. The effect of radiation treatment on the removal of pesticide residues from three types of vegetables that are commonly consumed raw, namely capsicum, cucumber, and tomato, was also investigated.
Collection and Preservation of Samples.
To monitor the pesticides present in vegetable samples, eggplant (Solanum melongena) samples (n = 16) were collected from four different markets in the Gulshan-2 area, Dhaka, Bangladesh. The vegetable samples used to investigate the effects of radiation treatment on pesticide removal were capsicum (Capsicum annuum), cucumber (Cucumis sativus), and tomato (Solanum lycopersicum). These vegetables were selected because they are usually eaten raw.
Only fresh, high-quality vegetables that were free from blemishes or rot were used. Following collection, the samples were refrigerated at 4 ± 1 °C overnight and analyzed the next day. To reduce variability, all of the vegetable samples used in the study were collected within similar areas.
Sample Extraction for Pesticide Analysis.
Sample preparation was conducted following the methods described in [28,29]. A 200 g portion of sample was chopped, and a 20 g subsample was then macerated with 50 mL of ethyl acetate, hexane, and acetone (3 : 1 : 1). Anhydrous sodium sulfate (20 g) was added to remove water before the addition of 0.05-0.10 g AAC for the removal of soluble plant pigments. The mixture was further macerated at full speed for 3 min using an Ultra-Turrax macerator (IKA-Labortechnik, Janke & Kunkel GmbH & Co. KG, Germany). The samples were then centrifuged for 5 min at 3000 rpm, and the supernatant was transferred to a clean graduated cylinder for volume measurement. The organic extract was concentrated to 5 mL using a vacuum rotary evaporator (Rotavapor R-215, Buchi, Switzerland) at 250 mbar with a water bath at 45 °C. The extraction was followed by a cleanup step using column chromatography with Florisil (60-100 mesh, Sigma, USA, analytical grade) to remove any residual components that might interfere with the HPLC detector system.
Sample Preparation for Radiation Treatment.
The vegetable samples were carefully washed with running tap water, as usually practiced in domestic kitchens. After washing, the stems of all samples were removed. Cucumbers were first peeled with a peeler, followed by uniform slicing using a sterile knife on a clean chopping board. The tomatoes were sliced without being peeled, while capsicum was directly chopped.
Radiation Treatment.
The samples were packed into sterilized (15 kGy radiation dose) low-density polyethylene (LDPE) plastic bags before being sealed with a sealer. Two samples were prepared for each type of vegetable. The packets were individually labeled, and two different radiation doses (0.5 and 1.0 kGy) were applied. An 1850 TBq (50 kCi) 60Co gamma-irradiator was used as the radiation source. A nonirradiated sample of each type of vegetable was kept as a control. After the radiation treatment, both the irradiated and the nonirradiated samples were analyzed for the presence of pesticides following the method described above.
2.6. Cleaning of Extracts. The samples were cleaned following the method described in [30]. Briefly, the cleanup of the acetone extract was performed using Florisil column chromatography. The Florisil (60-100 mesh) was activated at 200 °C for 6 h and subsequently deactivated with 2% distilled water. The top 1.5 cm of the 0.6 cm diameter Florisil column was packed with anhydrous sodium sulfate. Elution was performed with a solvent mixture of double-distilled hexane (65%) and dichloromethane (35%) at 5 mL/min. The eluent was concentrated to a small volume (1-2 mL) using a rotary vacuum evaporator and transferred to a vial. Any residual solvent was completely removed under a gentle flow of nitrogen. The evaporated sample was reconstituted to a total volume of 1 mL in acetonitrile prior to HPLC injection. The procedure was conducted in the same way for all vegetable samples.
HPLC Analysis.
Following the cleanup of the extracts, aliquots of the final solution were quantified using a Shimadzu LC-10ADvp HPLC system equipped with a photodiode array detector (Shimadzu SPD-M10Avp, Japan; 200-800 nm). The analytical column was an Alltech C18 reverse-phase column (250 × 4.6 mm, 5 µm) maintained at 30 °C in a column oven. A mixture of 70% ACN and 30% water was used as the mobile phase at a flow rate of 1.0 mL/min. All solvents were of HPLC grade and were filtered through a cellulose filter (0.45 µm) prior to use. Before HPLC analysis, the samples were passed through a 0.45 µm nylon syringe filter (Alltech Assoc.) and then manually injected (20 µL each time). Suspected pesticides were identified based on the retention times of pure analytical standards. Quantification was performed following the method described in [28] (Figure 1).
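For readers wishing to reproduce the external-standard quantification step, the following is a minimal sketch: a linear calibration curve is fitted to the peak areas of the analytical standards and then inverted to convert a sample's peak area into a concentration. All standard concentrations and peak areas below are hypothetical placeholders, not values from this study.

```python
# Minimal sketch of external-standard HPLC quantification: fit a linear
# calibration (peak area vs. standard concentration), then invert it to
# estimate the concentration behind a sample's peak area.
# All numbers are hypothetical placeholders.
import numpy as np

std_conc = np.array([0.05, 0.10, 0.25, 0.50])      # µg/mL, standard levels
peak_area = np.array([1150, 2280, 5690, 11320])    # detector response (a.u.)

slope, intercept = np.polyfit(std_conc, peak_area, 1)   # area = slope*c + intercept

def concentration(area):
    """Invert the calibration line to get concentration in µg/mL."""
    return (area - intercept) / slope

sample_area = 4300
print(f"estimated concentration: {concentration(sample_area):.3f} µg/mL")
```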
Quality Control and Quality Assurance.
Quality control and quality assurance were incorporated into the analysis. The accuracy and precision were validated in accordance with the European Commission (EC) guidelines [31]. Precision was expressed as the relative standard deviation (RSD). Accuracy was assessed by analyzing samples with known concentrations and comparing the measured values with the actual spiked values. For the recovery experiments, pesticide-free samples (20 g) were spiked in triplicate (n = 3) after homogenization by the addition of appropriate volumes of pesticide standards at two different levels (0.05 and 0.50 µg/mL). Control samples were processed along with the spiked ones. The mixture was left standing for 1 h to allow equilibration. The extraction and cleanup of the pesticide residues were performed as described above [28,30]. The mean percentage recoveries ranged from 86% to 99%, while the precision ranged from 4.45% to 14.54%. Percentage recovery = (CE/CM) × 100, where CE is the experimental concentration determined from the calibration curve and CM is the spiked concentration. The limit of quantification (LOQ) was defined as the lowest concentration of the analyte that could be quantified with acceptable precision and accuracy, and the limit of detection (LOD) as the lowest concentration of the analyte in a sample that could be detected but not necessarily quantified. The LOQ and LOD were evaluated at signal-to-noise ratios (S/N) of 10 : 1 and 3 : 1, respectively, and were obtained by analyzing unspiked samples (n = 10) [32]. In the present study, the LOD and LOQ were 0.001 mg/kg and 0.003 mg/kg, respectively.
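As a worked illustration of the validation arithmetic above, the sketch below computes the percentage recovery and RSD for triplicate spiked samples and restates the reported LOD/LOQ; the replicate measurements are hypothetical placeholders.

```python
# Minimal sketch of the recovery and precision arithmetic used for validation.
# Replicate results are hypothetical; LOD/LOQ are the values reported above.
import statistics

spiked = 0.05                       # µg/mL, CM: spiked concentration
measured = [0.046, 0.048, 0.044]    # n = 3 replicate results, CE values

recoveries = [ce / spiked * 100 for ce in measured]      # (CE/CM) * 100
mean_rec = statistics.mean(recoveries)
rsd = statistics.stdev(recoveries) / mean_rec * 100      # precision, % RSD

LOD, LOQ = 0.001, 0.003             # mg/kg, from S/N = 3:1 and 10:1
print(f"mean recovery {mean_rec:.1f} %, RSD {rsd:.2f} %")
print(f"quantifiable above LOQ = {LOQ} mg/kg; detectable above LOD = {LOD} mg/kg")
```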
Analysis of Carbamate and Organophosphorus Residues.
This is the first study to determine the occurrence of organophosphorus and carbamate residues in eggplant (Solanum melongena) samples collected from four different markets in the Gulshan-2 area in Dhaka. Pesticide residues were detected in 50% of the 16 samples, and approximately 19% of the total samples exceeded the MRLs set by the World Health Organisation (WHO) or the Food and Agriculture Organization (FAO). Two samples (VS-15 and VS-16) were contaminated with carbaryl and pirimicarb, while another sample (VS-14) contained carbofuran. In addition to the detected carbamates, six organophosphorus pesticides (diazinon, dimethoate, parathion, phenthoate, phosphamidon, and pirimiphos-methyl) were detected in seven of the eggplant samples, with some exceeding the MRLs set by FAO/WHO. Among the pesticides detected, carbofuran accounted for the largest share (74%) of the total detected residue, while phenthoate and dimethoate contributed 16% and 7%, respectively (Figure 2). The concentrations of the remaining pesticides were within safe limits. Owing to their long persistence, organochlorine pesticides have recently been eliminated from agricultural practice in many countries, including Bangladesh [33]. However, the use of carbamate and organophosphorus pesticides has increased because their low persistence has led to claims that they are less harmful to the environment. Therefore, in the present investigation, we focused on carbamates and organophosphates in vegetables normally eaten raw, because consumer exposure to pesticides is greater for vegetables that are eaten raw than for those that are cooked.
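The contribution percentages quoted above can be reproduced from the concentrations reported in Tables 2 and 3, assuming they express each pesticide's share of the total detected residue concentration (an interpretation the rounded figures support). A minimal sketch of that arithmetic:

```python
# Minimal sketch reproducing the contribution shares: each pesticide's summed
# detected concentration (mg/kg, from Tables 2 and 3) over the grand total.
totals = {
    "carbofuran": 1.86,
    "carbaryl": 0.003 + 0.006,
    "pirimicarb": 0.008 + 0.007,
    "diazinon": 0.022,
    "parathion": 0.006,
    "phenthoate": 0.311 + 0.077,
    "dimethoate": 0.183,
    "phosphamidon": 0.022,
    "pirimiphos-methyl": 0.008,
}
grand_total = sum(totals.values())
for name, conc in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {conc:.3f} mg/kg  ({conc / grand_total * 100:4.1f} %)")
# carbofuran comes out at ~74 %, phenthoate ~16 %, dimethoate ~7 %.
```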
Among the carbamate pesticides, carbofuran was detected at a very high concentration (1.86 mg/kg) in a single sample (VS-14) (Figure 3). Contrary to the findings of [34], who did not detect any carbofuran or carbaryl residues in eggplant samples, carbaryl was detected in two eggplant samples (VS-15 and VS-16) at 0.003 and 0.006 mg/kg, respectively (Table 2). This variation may be due to the use of different eggplant sources to supply the markets. In addition to carbaryl, the same two samples (VS-15 and VS-16) were also contaminated with pirimicarb at 0.008 and 0.007 mg/kg, respectively. In some cases, the detected pesticide residue concentrations exceeded the recommended limits set by the WHO, which can be dangerous to the health of consumers. Six different organophosphorus pesticide residues were analyzed to determine their levels in the collected eggplant samples. Among them, the highest phenthoate concentration was observed in sample VS-14 (0.311 mg/kg), followed by sample VS-16 (0.077 mg/kg) (Tables 2 and 3). Sample VS-3 contained two pesticides, diazinon and parathion, at 0.022 mg/kg (Figure 4) and 0.006 mg/kg, respectively. The level of parathion detected was lower than that found in some eggplant samples collected from Dhaka, Bangladesh (0.32 mg/kg), in a previous study [34]. Dimethoate was detected in a single sample (VS-8) at 0.183 mg/kg, while phosphamidon and pirimiphos-methyl were detected in samples VS-6 and VS-3 at 0.022 and 0.008 mg/kg, respectively (Tables 2 and 3). Phenthoate and phosphamidon were present at levels higher than the values recommended by the FAO/WHO. In comparison with our results, eggplant samples from India had a lower dimethoate level (0.030 mg/kg) but a higher phosphamidon level (0.038 mg/kg), as reported by [35].
Removal of Pesticide Residues in Vegetables Using Gamma-Radiation.
The persistence of pesticide residues is a complex matter affected not only by the chemical and physical characteristics of the parent compound and its degradation products but also by the nature of the formulation applied, the adsorbents, and the type of solvents employed. Some plants have waxy surfaces that tend to trap sprayed pesticides, making the pesticides more resistant to removal than they would be as true surface residues. Although washing, peeling, and cooking remove a large amount of pesticides during food processing, some studies have indicated that these steps are inefficient in reducing pesticide residues below the MRL. For example, [36] reported that quinalphos residues in cauliflower were reduced only to some extent by various home processing methods such as washing and cooking. It has been suggested that the inefficiency of home processes for decontaminating treated cabbage may be due to the strong adsorption properties of quinalphos and chlorpyrifos [37]. Due to the persistent nature of some pesticides and the inefficiency of conventional methods of pesticide removal, additional steps to remove pesticides and their degradation products should ideally be incorporated. Unfortunately, even though such additional steps are important, they are not normally employed because they do not enhance food value.
In addition, few studies have investigated the effectiveness of additional steps in removing pesticides, including the use of gamma-radiation.
In the present study, three vegetables that are normally eaten raw in Bangladesh were selected to determine the radiation dose best suited to reducing pesticide residues to safer levels. We selected the WHO-recommended radiation doses for processed and peeled vegetables (0.5-1.0 kGy), as opposed to the higher dose recommended for unprocessed vegetables (2.5 kGy) [38].
In this study, the collected samples contained three different pesticides: diazinon in capsicum, chlorpyrifos in cucumber, and phosphamidon in tomato (Table 4). When the samples were treated with 0.5 kGy gamma-radiation, there was a reduction in the total amount of pesticides present; the degree of reduction varied with the pesticide type. For example, chlorpyrifos, diazinon, and phosphamidon were reduced by 35-43%, 40-48%, and 30-45%, respectively, at a dose of 0.5 kGy (Figure 5). However, when the radiation dose was increased to 1.0 kGy, the levels were reduced by 80-91%, 85-90%, and 90-95%, respectively. The ideal radiation dose is probably 1.0 kGy: at 0.5 kGy the highest reduction rate was only 40-48% (for diazinon), whereas at 1.0 kGy the reduction reached up to 90-95% (for phosphamidon) (Table 4). Furthermore, based on the International Atomic Energy Agency (IAEA) criteria, irradiation doses of up to 1.5 or 2.0 kGy are deemed safe, as they do not affect the quality and appearance of fresh vegetables [39]. Continuous monitoring of residual pesticide levels in different environmental samples and the study of the best methods for their removal are important for understanding the level of contamination and determining remedial actions.
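The reduction percentages above are simple before/after ratios. A minimal sketch of the dose-response arithmetic, with a hypothetical initial concentration (the reduction ranges themselves come from Table 4):

```python
# Minimal sketch of the dose-response arithmetic: percent reduction of a
# residue after irradiation, reduction = (C0 - C_irr) / C0 * 100.
# The initial concentration and post-irradiation values are hypothetical.
def reduction(c0, c_irr):
    return (c0 - c_irr) / c0 * 100.0

c0 = 0.100                                          # mg/kg before irradiation
for dose, c_irr in [(0.5, 0.056), (1.0, 0.012)]:    # diazinon-like behaviour
    print(f"{dose} kGy: {reduction(c0, c_irr):.0f} % reduction")
```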
Conclusion
This study reveals the presence of carbamate and organophosphorus residues in eggplant samples collected from four different markets in the Gulshan-2 area in Dhaka; some of these residues exceeded the MRLs. Our results also indicated that pesticide levels decreased with increasing radiation dose and that the decrease varied with pesticide type. Continuous monitoring of residual pesticide levels in different vegetable samples is important for their safe consumption.
Evaluating the Effect of Ammonia Co-Firing on the Performance of a Pulverized Coal-Fired Utility Boiler
Ammonia (NH3), as a derivative of hydrogen and an energy carrier, is regarded as a low-carbon fuel provided that it is produced from a renewable source or a carbon-abated process of fossil fuel. Co-firing ammonia with coal is a promising option for pulverized coal-fired power plants to reduce CO2 emission. Applying the co-firing in an existing pulverized coal-fired boiler can achieve satisfying combustion performance in the furnace but may affect the boiler performance. In the present work, a thermal calculation method was employed to evaluate the impact of ammonia co-firing on the boiler performance of an existing 600 MW supercritical utility boiler, covering co-firing ratios up to 40% (on a heat basis). The calculations indicated that, as compared to sole coal combustion, co-firing ammonia changed the volume and composition and consequently the temperature and heat transfer characteristics of the flue gas. These resulted in increasing variations in the heat transfer performance of the boiler with increasing co-firing ratio. The evaluations revealed that co-firing up to 20% ammonia in the existing boiler is feasible, with the boiler performance not considerably affected. However, the distribution of the heat transferred from the flue gas to the boiler heat exchangers deteriorates significantly at higher ratios (30% and 40%), resulting in over-temperature of the superheated steam, under-temperature of the reheated steam and a considerable reduction in boiler thermal efficiency. This implies that retrofits of the heat exchangers are required to accommodate higher ratio co-firing in the existing boiler. A comparison study showed that co-firing 20% ammonia provides superior boiler performance over co-firing 20% biomass-produced gases and blast furnace gas.
Introduction
To limit global warming to 1.5 °C, phasing out unabated coal in the global power sector by 2030-2040 is inevitable [1]. Some countries have implemented or are on schedule to implement coal phase-out [2]. In China, coal is currently the dominant fuel for electricity generation to support rapid economic and social development, ensure energy security and stabilize electricity supply, and is expected to remain a major power source in the near term of the transition to a low-carbon future. However, the coal power industry is also the biggest CO2 emitter, accounting for 34% of national carbon emissions [3]. China has committed to achieving carbon peaking before 2030 and carbon neutrality by 2060. In such circumstances, the coal power industry is under increasingly great pressure to reduce CO2 emissions. Power plant operators are pursuing decarbonization technologies, among them low-carbon fuels including biomass [4,5] and hydrogen-containing fuels, mainly ammonia (NH3) [6,7], to partially or fully substitute coal, and these are regarded as promising options. Switching from coal to these fuels allows existing coal-fired power plants to gradually become low-carbon generators in continuous operation, generating dispatchable electricity while retaining a large part of the existing assets [2,7,8].
Ammonia can be burned in pure oxygen or air, producing only water and nitrogen, and is a potentially carbon-free hydrogen fuel provided that it is produced from a renewable source or a carbon-abated process of fossil fuel. It has been proven that ammonia can be burned directly in internal combustion engines, gas turbines and coal-fired boilers [9,10]. As a fuel, ammonia has various advantageous characteristics, including high volumetric energy density, low unit energy storage cost and well-established storage and transport infrastructure [10], and it is particularly suitable for large-scale utilization in existing pulverized coal-fired power plants to partially replace coal through co-firing for CO2 reduction [11]. It has therefore been attracting broad research efforts towards implementation in applications. Laboratory studies have been carried out to understand the combustion characteristics and NOx formation, one of the most concerning issues, of ammonia burning under pulverized coal combustion conditions and of pulverized coal/ammonia co-combustion at various co-firing ratios [12][13][14]. The studies showed that the ignition and flame propagation of pulverized coal/ammonia mixtures are comparable to, and even better than, those of sole pulverized coal [15][16][17][18]. Moreover, the production of NOx can be suppressed by injecting the combustion air with a two-stage strategy and maintaining the independence of the ammonia/coal burner [19]. With the staged combustion strategy, the production of both char-NOx and fuel (NH3)-NOx can be reduced to achieve lower NOx emission as well as lower unburnt carbon in ash than during pure coal combustion [20]. This means that, with proper combustion organization, co-firing ammonia is capable of retaining or improving combustion performance as compared to pulverized coal combustion.
Trials and tests in bench- and pilot-scale combustion facilities and industrial-scale furnaces have demonstrated that co-firing ammonia in pulverized coal-fired power plants is feasible in terms of combustion performance and NOx suppression. The industrial trial of adding a small amount of ammonia (0.6-0.8%) to a 156 MW pulverized coal-fired unit [21,22] confirmed that the ammonia was burned completely and that co-firing ammonia had no impact on the boiler operation and NOx emission. Additionally, experiments on a full-scale coal-fired utility boiler showed that spraying urea into the fuel-rich zone could be carried out with reduced NOx emissions and full conversion of ammonia, the main decomposition product of urea [23], which indirectly proved the practical applicability of ammonia co-firing. Tamura et al. [12] investigated co-firing coal and NH3 in a 1.2 MWth furnace with a single horizontal burner. They observed the same combustion performance for ammonia co-firing as for pulverized coal combustion in terms of ignition, flame temperature and flue gas emissions, with no ammonia slip and similar NOx emissions up to 30% co-firing with the designed burner and ammonia injection. Hiroki et al. [24] tested co-firing NH3 with pulverized coal in a 10 MWth combustion facility with a swirl burner and also showed that, at co-firing ratios up to 20%, NOx could be limited to the same level as for coal firing and a stable flame could be maintained by supplying ammonia through the center of the coal burner. Niu et al. [25] conducted industrial tests on a 40 MWth facility installed with a full-scale co-firing burner, demonstrating that co-firing 0-25% NH3 could achieve good combustion stability and burnout of both coal and ammonia and control NOx emissions at low levels through air staging. Besides experimental studies, cost-effective numerical simulations have also been employed to investigate co-firing ammonia in pulverized coal-fired facilities [12,26-28]. The results reproduced the experimental observations at lower co-firing ratios and also explored the combustion performance and NOx formation at higher ratios. The investigations confirmed that co-firing ammonia at lower ratios retains combustion performance similar to pulverized coal combustion [12,26]; however, the burnouts of both fuels may deteriorate significantly at co-firing ratios above 40% [27], despite low NOx emissions still being achievable [26][27][28]. Both the experimental and numerical studies suggested the necessity and significance of burner design and combustion organization for higher ratios of ammonia co-firing in pulverized coal-fired furnaces.
While most of the previous studies addressed the combustion performance and NOx emissions in pulverized coal-fired furnaces, few addressed the effect of ammonia co-firing on the boiler thermal performance [6,11]. It is well known that pulverized coal-fired utility boilers are generally designed for specific coals. As a gaseous fuel, however, ammonia has significantly different properties from coals, which may affect the performance, such as the heat transfer, of an existing boiler when ammonia is co-fired.
Genichiro et al. [11] evaluated the boiler efficiency and material balance of a 1000 MW pulverized coal-fired boiler co-firing 20% ammonia. They observed performance comparable to coal combustion but nevertheless proposed some retrofits required for improvement. Xu et al. [6] assessed the performance of a 600 MW boiler co-firing ammonia up to 20% using exergy analysis and found that the boiler exergy efficiency decreased with increasing co-firing ratio, implying that the boiler performance may deteriorate further at higher co-firing ratios. Accommodating higher co-firing ratios without compromising the performance may require boiler retrofits, which relies on assessing the impact of co-firing on the boiler heat transfer performance [19]. However, there is still a lack of studies on the heat transfer performance of existing boilers co-firing ammonia, particularly at relatively higher ratios.
The present work evaluates the effect of ammonia co-firing on the heat transfer performance of an existing pulverized coal-fired boiler over a wide range of co-firing ratios, up to 40%, by using a thermal calculation method, addressing the technological issue of developing higher co-firing ratios with boiler performance comparable to sole coal combustion. Considering that co-firing ammonia is new, whereas co-firing other gaseous fuels, including biomass gas and blast furnace gas (BFG), is an existing technology applied in coal power plants, a comparison was also made with co-firing of these fuels based on thermal calculation analyses, with the aim of exploring the applicability of existing gaseous fuel co-firing technologies to higher ratios of ammonia co-firing for improved boiler performance.
Overview of the Existing Pulverized Coal-Fired Utility Boiler
The evaluation of the boiler performance under ammonia co-firing was conducted for an existing 600 MW pulverized coal-fired utility boiler. The layout of the boiler is schematically shown in Figure 1. The unit is a typical Benson-type supercritical boiler, tangentially fired with 24 low-NOx pulverized coal burners installed in six layers at the furnace corners. The boiler was designed to burn bituminous coal with 20 pulverized coal burners in five layers to achieve the output of the maximum continuous rating (MCR). The pulverized coal burners and associated secondary air ports are distributed evenly in the lower furnace, and a set of separated over-fired air ports is arranged above the burner zone for air staging to realize low-NOx combustion. The properties of the pulverized coal are listed in Table 1; the values are on an as-received basis.
After combustion in the lower furnace, the flue gas flows through the upper furnace, in which two stages of platen superheater are installed, the horizontal convection pass with the high-temperature reheater and superheater, the second convection pass with the low-temperature reheater and economizer, and then the air preheater, and finally leaves the boiler. Along the flow path, the flue gas transfers heat to the furnace water walls (evaporator) by radiation, to the platen superheaters through radiation and convection, and to the high-temperature superheater and reheaters, low-temperature reheater, economizer and air preheater mainly by convection. The main designed thermal parameters of the boiler are presented in Table 2.
Thermal Calculation Analysis of Boiler Performance
The thermal calculation method was employed to evaluate the performance of the existing pulverized coal-fired boiler and the effect of ammonia co-firing on the boiler performance. A thermal calculation model for the boiler was established in the form of Microsoft Excel spreadsheet tables, following the standard method for thermal calculation analysis [29]. The model adheres to the law of energy conservation for the boiler and its heat exchangers, including the evaporator, superheaters, reheaters, economizer and air preheater. The method is standard and widely applied in boiler design and retrofit. Validation against the design results of the boiler burning the pulverized coal proved the correctness and accuracy of the model calculation.
The thermal calculation was focused on the heat transfer from the flue gas to the working fluids, including water/steam in the heat exchangers and air in the air preheater. The flue gas temperatures are determined based on the balance between the heat transferred to the exchanger surface and the heat released from the flue gas. The heat release is calculated by

Q = ϕ (I′ − I″),    (1)

where ϕ is the thermal retention coefficient, set to 0.99 as determined by the boiler thermal efficiency and heat dissipation losses, and I′ and I″ represent the enthalpy of the flue gas at the inlet and outlet of the heat exchanger, respectively. In the furnace, the flame transfers heat by radiation between the flue gas and the radiant heat exchange surfaces, expressed as

Qr = ψ F εf σ0 T1^4,    (2)

where ψ represents the effective fraction of the radiation heat flux between the flue gas and the ash-fouled surfaces on the furnace wall, F is the surface area of the furnace's surrounding walls, εf denotes the emissivity of the radiation between the flue gas and the fouled surfaces, σ0 is the Stefan-Boltzmann constant and T1 is the average temperature of the flue gas. The radiation heat of Equation (2) is balanced with the heat released from the flue gas of Equation (1), i.e., the difference between the heat input and output of the calculated zone, to determine the heat transferred and the flue gas temperatures. The high-temperature superheater and reheaters, low-temperature reheater, economizer and air preheater are heating surfaces dominated by convective heat exchange. The heat transferred from the flue gas to the working fluids in these heat exchangers located in the convection passes is determined by

Q = K H Δt,  with  K = 1/(1/α1 + ε + 1/α2),    (3)

where K is the heat transfer coefficient, H is the convective surface area, Δt is the average temperature difference between the flue gas and the working fluid, ε represents the thermal resistance of the ash layer deposited on the heat exchanger surface, and α1 and α2 are the heat exchange coefficients on the flue gas and working fluid sides of the heat exchanger, respectively, both depending on the flow rate, temperature and thermo-physical properties of the flue gas or working fluid. The specific values of the parameters for the heat transfer coefficient calculation are provided in Table A1 in Appendix A.
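To make the balance concrete, the following minimal sketch iterates Equations (1) and (3) for a single convective heat exchanger. It is illustrative only: a constant specific heat stands in for the flue-gas enthalpy tables of the standard method, an arithmetic-mean temperature difference replaces the exact mean, and all numerical inputs are placeholders rather than the boiler's design data.

```python
# Minimal sketch of the heat balance for one convective heat exchanger:
# bisection on the flue-gas outlet temperature until the heat released by
# the gas (Equation (1)) equals the heat transferred through the surface
# (Equation (3)). All inputs are illustrative placeholders.

def solve_outlet_temperature(t_gas_in, t_fluid, m_gas, cp_gas, K, H,
                             phi=0.99, tol=1e-4):
    lo, hi = t_fluid, t_gas_in              # bounds on outlet temperature, °C
    while hi - lo > tol:
        t_out = 0.5 * (lo + hi)
        q_released = phi * m_gas * cp_gas * (t_gas_in - t_out)   # Eq. (1), kW
        dt = 0.5 * (t_gas_in + t_out) - t_fluid                  # mean temperature difference, K
        q_transferred = K * H * dt / 1000.0                      # Eq. (3), W -> kW
        if q_released > q_transferred:
            lo = t_out   # gas releases more than the surface absorbs: outlet must be hotter
        else:
            hi = t_out
    return 0.5 * (lo + hi)

# Illustrative numbers only (not the design data of the 600 MW boiler):
t_out = solve_outlet_temperature(t_gas_in=850.0, t_fluid=420.0,
                                 m_gas=600.0, cp_gas=1.3, K=65.0, H=3000.0)
print(f"flue-gas outlet temperature ~ {t_out:.1f} degC")
```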
The front and rear platens in the upper furnace are semi-radiant heating surfaces, which absorb both the flue gas radiation from the furnace and the convection heat of the contacting flue gas. Accordingly, α1 in Equation (3) is expressed as

α1 = ξ (αc + αr),    (4)

where ξ is the correction factor of the heat transfer coefficient on the flue gas side of the semi-radiant heating surface, set to 0.6, and αc and αr are the convection and radiation heat exchange coefficients of the flue gas, respectively; the radiation component and the effective platen surface are evaluated using the diameter d of the platen tubes, the longitudinal pitch s of the tubes and the configuration angle coefficient xp of the main heating surface, determined to be 0.87 and 0.9 for the front and rear platens, respectively. By substituting Equation (4) into Equation (3), the heat transferred to the platen superheaters can be calculated. Through the heat transfer calculation, mainly the quantities of the heat transferred to the working fluid and the inlet and outlet temperatures of the flue gas and working fluid of the heat exchangers are determined.
The thermal efficiency of the boiler is determined from the heat balance as the difference between the thermal input into the furnace and the heat losses from the boiler, given as

η = 100 − Σ qi,    (5)

where qi denotes the individual heat losses, expressed as percentages of the boiler heat input. The losses include the sensible heat of the flue gas exhausted from the boiler, the heat content of unburned gases, mainly CO and ammonia, in the flue gas, the heat content of unburned carbon in the bottom and fly ash, the heat released from the outside surfaces of the boiler by radiation and convection, and the sensible heat of the bottom and fly ash.
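As a worked illustration of Equation (5), the sketch below assembles the boiler efficiency from a set of heat losses; the individual loss values are placeholders, not results of this study.

```python
# Minimal sketch of the heat-loss (indirect) efficiency balance of Equation (5).
# The loss values below are illustrative placeholders, not the boiler's data.
losses = {
    "exhaust flue gas sensible heat": 5.2,   # % of heat input
    "unburned gases (CO, NH3)":       0.0,
    "unburned carbon in ash":         0.6,
    "surface radiation/convection":   0.2,
    "sensible heat of ash":           0.3,
}
efficiency = 100.0 - sum(losses.values())    # Eq. (5), lower-heating-value basis
print(f"boiler thermal efficiency = {efficiency:.1f} %")
```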
For all the calculations of the co-firing cases and sole coal combustion, the boiler was assumed to be operated under the same conditions: the heat input with the fuels into the boiler is the same for achieving the MCR, and the combustion air is supplied in similar distribution patterns for complete combustion of both the coal and gaseous fuels, with an excess air ratio of 1.2 and the same air-staging level. The ammonia co-firing ratio is presented as its percentage of the total heat input. Assuming that ammonia is completely burned to release heat, the amount of ammonia fuel can be obtained by dividing the ammonia heat input by its lower calorific value, and then the amounts of air required for combustion and flue gas produced, as well as the composition of the flue gas, can be determined.
For ammonia co-firing, thermal calculation analyses were performed for the boiler co-firing ammonia at various ratios up to 40% (on the heat basis). At co-firing ratios of less than 40%, the conversions of both coal and ammonia can be complete in the furnace [13,27], enabling the focus on investigating the impact of the co-firing ratio on the boiler performance. While the co-firing ratio was varied among the co-firing cases, the thermal input of the mixed fuel into the furnace was kept the same as for sole coal firing in the calculations. The lower heating value of ammonia was set to 18.6 MJ/kg and the other properties of ammonia used in the calculations were taken from the literature [30].
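The fuel-split and stoichiometry step described above can be sketched as follows. The coal heating value and total heat input are illustrative placeholders; the ammonia heating value (18.6 MJ/kg) and the global reaction 4NH3 + 3O2 → 2N2 + 6H2O are standard.

```python
# Minimal sketch of the fuel split and ammonia-side stoichiometry for a given
# heat-based co-firing ratio. Coal LHV and total heat input are placeholders.
LHV_NH3 = 18.6          # MJ/kg, lower heating value of ammonia (from the text)
LHV_COAL = 23.0         # MJ/kg, placeholder for the design coal

def fuel_split(total_heat_mw, ratio):
    """Return (coal, ammonia) mass flows in kg/s for a heat-based ratio."""
    m_nh3 = total_heat_mw * ratio / LHV_NH3
    m_coal = total_heat_mw * (1.0 - ratio) / LHV_COAL
    return m_coal, m_nh3

def nh3_air_volume(m_nh3, excess=1.2):
    """Air volume (Nm3/s) for NH3 combustion with excess air.
    4NH3 + 3O2 -> 2N2 + 6H2O, i.e., 0.75 mol O2 per mol NH3."""
    mol_nh3 = m_nh3 * 1000.0 / 17.0          # kg/s -> mol/s (M = 17 g/mol)
    v_o2 = mol_nh3 * 0.75 * 22.4e-3          # Nm3/s of O2 at 22.4 L/mol
    return excess * v_o2 / 0.21              # air at 21 vol% O2

m_coal, m_nh3 = fuel_split(total_heat_mw=1500.0, ratio=0.20)
print(f"coal {m_coal:.1f} kg/s, ammonia {m_nh3:.1f} kg/s, "
      f"NH3 combustion air {nh3_air_volume(m_nh3):.1f} Nm3/s")
```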
Co-Firing Gaseous Fuels with Pulverized Coal
Thermal calculations were also conducted for the boiler co-firing two biomass gases and a BFG at a co-firing ratio of 20% (on the heat basis), respectively. The results were compared with those from ammonia co-firing at the same ratio to further investigate the effects of ammonia co-firing on the boiler performance. The gaseous fuel properties for the calculations were taken from the literature [31,32]. The biomass gases were produced from air-blown gasification of two types of biomass, respectively; the BFG was one co-fired in a pulverized coal-fired power plant. The compositions and lower heating values of these gaseous fuels are presented in Table 3.

The calculated quantities of the air required for, and the flue gas produced from, the combustion of the ammonia co-firing cases are presented in Figure 2, compared to those of sole pulverized coal combustion. The air requirement for co-firing decreases slightly and linearly as the co-firing ratio increases from 0 to 40%, implying that less heat is required in the air preheater to heat the combustion air for ammonia co-firing at higher ratios. In contrast, the volume of the flue gas produced increases considerably with the co-firing ratio. Therefore, the air supply fans of the unit can remain unchanged, but the capacity of the induced draft fan needs to increase to accommodate the increased flue gas flows at higher co-firing ratios.

As for the flue gas composition, the calculation results for various co-firing ratios are shown in Figure 3. They indicate that co-firing ammonia reduces CO2 production because ammonia is a carbon-free fuel. With increasing co-firing ratio, the CO2 concentration in the flue gas (shown as RO2, i.e., CO2 + SO2, in Figure 3a) declines considerably and the CO2 emission gradually decreases from 74.3 m3/s for coal combustion to 49 m3/s for 40% co-firing (Figure 3b), neither proportionally to the co-firing ratio. The fraction of H2O in the flue gas increases considerably with the co-firing ratio because H2O is the main product of ammonia combustion. Moreover, the fraction of the radiative gases, including RO2 and H2O, increases slightly (Figure 3a). The variations in the flue gas composition mean changes in the thermo-physical properties of the flue gas after co-firing. For example, the specific heat capacity of the flue gas increases from 12.0 kJ/kg·K to 13.0 kJ/kg·K as the co-firing ratio increases from 10% to 40%. Such changes have an impact on the temperatures and heat transfer characteristics of the flue gas, as described below.

The content of fly ash in the flue gas decreases linearly with the co-firing ratio, as shown in Figure 4a. As fly ash is also a major radiation component of the flue gas, the decrease in fly ash slightly reduces the radiation of the solid particles in the furnace. On the other hand, the slight increase in the fraction of the radiative gases in the flue gas (Figure 3) enhances the emissivity of the flame. As a consequence, the emissivity of the flame and flue gas in the furnace does not vary considerably as compared to the case of sole coal combustion, as indicated in Figure 4b. Nevertheless, the reduction in fly ash reduces the ash-related heat losses, such as unburned carbon in ash, having a favorable effect on boiler efficiency.
Flue Gas Temperatures in the Boiler

The calculated temperatures of the flue gas at several locations in the boiler, including the adiabatic flame temperature in the furnace and the temperatures at the exit of the furnace (below the superheater platens), at the outlet of the economizer and at the exit of the boiler (the outlet of the air preheater), varying with the co-firing ratio, are presented in Figure 5 to show the effect of ammonia co-firing on the combustion and heat transfer in the boiler.

As can be seen in Figure 5a, the adiabatic flame temperature in the furnace declines with the increase in the co-firing ratio and decreases by about 50 °C at 40% co-firing as compared to that of coal combustion. The main cause is the increase in the heat capacity of the flue gas due to the increase in the gas volume (Figure 2) and the change in the gas composition (Figure 3). In particular, the increase in H2O in the flue gas leads to the decrease in the adiabatic flame temperature. The reduction in the flame temperature may weaken the flame radiation and deteriorate combustion in the furnace. This implies that, in order to ensure the boiler radiation and combustion, the ammonia proportion may need to be kept low.
For the same reasons, the gas temperature at the furnace exit also decreases considerably at higher co-firing ratios, as shown in Figure 5b. This temperature is related to the properties of the combustion products and the layout of the water walls and platens as well as the heat input into the furnace. As the co-firing ratio increased from 10% to 40%, the furnace exit temperature decreased by 48-77 °C relative to that of the sole coal combustion case (Figure 5b). The extent of the decrease is larger than that of the flame temperature, mostly because of the greater increases in the volume and heat capacity of the gas exhausted from the furnace, although less heat is released by radiation in the furnace, mainly due to the lower flame temperature. The decreases in the flame and furnace exit temperatures with increasing co-firing ratio are consistent with the observations in numerical simulations [12,26-28].
The decreased temperature of the flue gas exiting the furnace and the changes in flue gas composition that lower the thermal conductivity are not conducive to the radiation and convection in the platen zone and the convective heat transfer along the convection passes, although the increased flue gas volume certainly enhances the convection. These effects gradually change the trend of the gas temperature varying with the co-firing ratio as the flue gas flows through the heat exchangers in the convection passes. As a result, the gas temperature at the outlet of the economizer presents an increasing trend with increasing co-firing ratio, as compared to the sole coal combustion case (Figure 5c). Further downstream of the flue gas flow, Figure 5d shows that the boiler exhaust gas temperature increases nearly linearly by 12-28 °C as the co-firing ratio rises from 10% to 40%. The extent of the increase is significant as compared to the sole coal case. The causes are the increased flue gas volume (Figure 2) and inlet temperature (i.e., the outlet temperature of the economizer) (Figure 5c), together with the reduced heat required in the air preheater for heating the combustion air, whose volume decreases (Figure 2).
The variation in the composition, and particularly the increase in the H2O content, of the flue gas changes the dew temperatures of sulfuric acid and water vapor in the flue gas, thus affecting the potential for corrosion in the air preheater. The calculation results presented in Figure 6 indicate that, while the water dew point rises linearly by 10 °C, the acid dew temperature increases by just 3 °C as the co-firing ratio increases from 0 to 40%. The considerable rise in the water dew temperature is obviously attributed to the increase in the H2O vapor content of the flue gas (Figure 3a). The acid dew point rises only slightly because, although the water content increases, ammonia combustion does not produce SO3 and thus reduces the SO3 concentration in the flue gas. The significant increase in the gas temperature at the outlet of the air preheater (Figure 5d) means that ammonia co-firing does not increase the corrosion potential in the air preheater despite the slight increase in the acid dew point. Nevertheless, the considerable increase in the water dew point, as well as the increase in the acid dew temperature, is likely to affect the corrosion potential if the boiler is operated at lower loads, which requires further evaluation.
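The link between the H2O fraction and the water dew point can be illustrated with the Magnus approximation for the saturation pressure of water vapor; the H2O fractions used below are illustrative, not the calculated flue gas compositions of this work.

```python
# Minimal sketch of a water dew-point estimate from the flue-gas H2O fraction,
# using the Magnus approximation for saturation vapour pressure. The H2O
# fractions and total pressure are illustrative placeholders.
import math

def water_dew_point(x_h2o, p_total_hpa=1013.25):
    """Dew temperature (degC) at which the saturation pressure equals the H2O
    partial pressure p = x_h2o * p_total (Magnus formula, valid ~0-60 degC)."""
    p_h2o = x_h2o * p_total_hpa
    gamma = math.log(p_h2o / 6.112)
    return 243.5 * gamma / (17.67 - gamma)

for x in (0.08, 0.12, 0.16):   # rising H2O fraction with co-firing ratio
    print(f"x_H2O = {x:.2f}: dew point ~ {water_dew_point(x):.1f} degC")
```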
Heat Transfer Performance
The calculated heats transferred to the main heat exchangers of the boiler, varying with the ammonia co-firing ratio, are presented in Figure 7. As can be seen, the heat transferred to the furnace water walls increases slightly at the co-firing ratio of 10% and then decreases as the co-firing ratio increases. Such a trend is similar to the observation in a numerical simulation [26]. It is the result of the decrease in the flame temperature and the change in the flame emissivity caused by the variations of the radiative components (ash particles, RO2 and H2O) in the flue gas. Along the flue gas flow, the heats transferred to the front and rear platen superheaters, high-temperature reheater, high-temperature superheater and low-temperature reheater decline with the increase in the co-firing ratio. Nevertheless, the extent of the decline generally decreases along the flue gas flow, as indicated by the greater decrease for the front platen superheater and the slight decrease for the low-temperature reheater. An exception is that the heat absorption by the economizer increases considerably when co-firing 10% ammonia relative to that of coal combustion, and then changes only slightly at higher co-firing ratios.
As shown in Figure 7, ammonia co-firing does affect the heat transfer and heat distribution in the boiler and consequently the performance of the boiler. In a Benson-type supercritical boiler, the heat of the superheated steam is absorbed mainly through the furnace water walls, the various stages of superheaters and the economizer. Combining the heats from these exchangers, the heat of the superheated steam slightly increases with increasing co-firing ratio. Nevertheless, the calculations revealed that the designed amount of attemperating water is sufficient to maintain the superheated steam temperature at the nominal value at co-firing ratios up to 20%, but slight over-temperatures occur at higher co-firing ratios. As for the reheated steam, the heat it absorbs from the two stages of reheaters decreases considerably with the co-firing ratio. This implies that the nominal reheated steam temperature may not be achievable with ammonia co-firing. To retain the nominal value, the burners have to be tilted up and/or the upper layers of burners operated to increase the reheated steam temperature, as for the cases of 30% and 40% co-firing. Even so, the calculations indicated that the nominal reheated steam temperature could hardly be retained at co-firing ratios higher than 20%. Moreover, such boiler operations for adjusting the reheated steam temperature could further increase the superheated steam temperature. Additionally, if the boiler is operated at lower loads, while co-firing at higher ratios may maintain the superheated steam temperature, the under-temperature of the reheated steam would worsen further, which is likely to significantly degrade the performance of the boiler and the power generation unit.
Boiler Thermal Efficiency
Figure 8 shows the boiler thermal efficiency varying with the ammonia co-firing ratio. The calculated efficiency is based on the lower heating value of the input fuels. It is clear that the boiler thermal efficiency decreases nearly linearly from 93.7% for sole coal combustion to 93.3% when the co-firing ratio increases to 40%. The extent of the decrease at the co-firing ratio of 20% is generally consistent with that from the analysis of a 1000 MW boiler [11], but the efficiency degrades further at higher co-firing ratios.
It is well known that the biggest factor affecting the boiler efficiency is the heat loss through the flue gas exhausted from the boiler. As indicated in Figure 5d, the boiler exhaust flue gas temperature increases considerably with increasing co-firing ratio. Together with the increase in the flue gas volume, this causes an efficiency loss of up to 0.7-0.8% at co-firing ratios of 30% and 40%. On the other hand, the heat loss due to unburned carbon in ash and the other losses associated with the ash obviously decline with increasing co-firing ratio, because the lower input of coal reduces the ash yield and consequently the ash-associated heat losses. The higher preheated air temperature, due to the enhanced heating of the combustion air in the air preheater, can enhance the combustion in the furnace, thereby offsetting the effect of the decreased flame temperature on coal combustion. In total, the unburned carbon and other ash-related heat losses are reduced by 0.2-0.3% at co-firing ratios of 30% and 40%. Considering the effects of the co-firing on the heat losses, the calculated boiler thermal efficiency declines, but fortunately the extent is not so great even at higher co-firing ratios. Nevertheless, co-firing ammonia does cause a decrease in the boiler efficiency, as also observed in the evaluations of utility boilers [6,11]. It is one of the most influential aspects of ammonia co-firing affecting the boiler performance. The decrease in the boiler efficiency and the difficulty in retaining the reheated steam temperature suggest that optimized design and retrofit of the boiler are required to maintain the thermal efficiency and operational performance of the boiler when co-firing ammonia at higher ratios. For example, enlarging the area of the lower-temperature heat exchanger surface before the economizer, i.e., the low-temperature reheater, may not only increase the heat absorption from the flue gas to raise the boiler efficiency but also help maintain the reheated steam temperature. It also implies that the impact of ammonia co-firing on the boiler performance depends to some extent on the designed structure of the boiler, in particular, the distribution of the heat absorbed by the main heat exchangers. Such an issue is worthy of further investigation.
Comparison of Co-Firing Ammonia and Co-Firing Other Gaseous Fuels
While co-firing ammonia with coal is a newly developing technology, co-firing gaseous fuels, including biomass gas and BFG, is already applied in pulverized coal-fired boilers. In practice, the highest ratio of biomass gas co-firing is up to 25% [33], while the ratio of BFG co-firing can be up to 30% or even higher [32]. To further investigate the effect of ammonia co-firing, the performance of the boiler burning 20% ammonia was compared to that of co-firing two biomass gases (BG1 and BG2) and a BFG at the same ratio, based on thermal calculations. The calculated values of some performance parameters are provided in Table 4. When co-firing the biomass gases and BFG, the adiabatic flame temperature and furnace exit temperature are much lower than those when co-firing ammonia. It is clear that the biomass gases and BFG contain large fractions of inert gases (N2 as well as CO2 and H2O) and have much lower calorific values than ammonia (Table 3). Their co-firing at the same ratio of heat input produces much larger volumes of flue gas and also causes greater changes in the flue gas composition. As a consequence, co-firing results in much lower flame temperatures and also greatly decreases the radiation heat transferred to the furnace walls, as shown in Figure 9. Due to the significant increases in the flue gas volumes when co-firing the three low calorific value gases (Table 4), the convective heat transfer in the boiler is enhanced. This indicates that co-firing the three low calorific value gases significantly changes the heat distribution between radiation and convection in the boiler as compared to sole coal combustion and ammonia co-firing. Moreover, more heat is carried by the larger volume of flue gas to the convection passes, which also leads to increased heat losses associated with the exhausted flue gas. Additionally, the significantly lower flame temperatures in the furnace (Table 4) deteriorate the combustion of pulverized coal, resulting in more heat losses from unburned carbon in ash. These determine the significant decreases in the boiler efficiency (Table 4). For improving the heat transfer performance and the boiler efficiency, the practical approach is to deliver the biomass gases and BFG through gas burners installed below the coal burner zone so as to increase the radiation heat absorbed by the furnace walls when retrofitting existing boilers [32,33].
Conclusions
The performance of an existing 600 MW pulverized coal-fired utility boiler co-firing ammonia was evaluated by thermal calculation analysis. The evaluation covered a wide range of co-firing ratios, up to 40% on a heat basis, to investigate the effect of the co-firing ratio on boiler heat transfer performance, with the aim of developing higher-ratio co-firing applications whose boiler performance is comparable to sole coal combustion. The evaluations showed that, while co-firing up to 20% ammonia in the existing boiler is feasible because the boiler performance is not considerably affected, the heat transfer performance of the boiler heat exchangers changed significantly at co-firing ratios of 30% and 40%. As the co-firing ratio increases from 0 to 40%, more heat transfer moves downstream along the flue gas flow, resulting in over-temperature of the superheated steam, under-temperature of the reheated steam and a decrease in boiler thermal efficiency at higher co-firing ratios. These findings imply that boiler retrofits of the heat exchangers are required to accommodate a higher ratio of ammonia co-firing in the existing boiler and improve its performance, making it comparable to that of pulverized coal combustion. The boiler co-firing 20% ammonia was further compared with co-firing two biomass-produced gases and a BFG at the same ratio, with the aim of exploring the application of existing gaseous-fuel co-firing technologies to ammonia co-firing to improve boiler performance at higher co-firing ratios. The comparison indicated that co-firing ammonia presented superior performance over co-firing low calorific value gas fuels. Nevertheless, the technology of co-firing low calorific value gas fuels by injecting the gaseous fuel into the furnace at the lower part of, or below, the burner zone can be applied to achieve improved boiler performance in the existing boiler at higher co-firing ratios.
Figure 1. Schematic of the overall layout of the 600 MW supercritical boiler.

Figure 2. Volumetric flow rates of combustion air and flue gas.

Figure 3. Flue gas composition (a) and CO2 emission (b) varying with ammonia co-firing ratio.

Figure 4. Variations of (a) fly ash content and (b) flue gas emissivity with the co-firing ratio.

Figure 5. Effect of ammonia co-firing on the gas temperatures in the boiler: (a) adiabatic flame temperature and flue gas temperatures at (b) furnace exit, (c) economizer outlet and (d) boiler exit.

Figure 6. Effect of ammonia co-firing on (a) sulfuric acid dew point and (b) water vapor dew point.

Figure 7. Effect of co-firing on heat transfer in boiler furnace and to heat exchangers.

Figure 8. Effect of ammonia co-firing on boiler thermal efficiency.

Figure 9. Comparison of the distribution of heat transfer in the boiler between co-firing ammonia and co-firing three low calorific value gases at a co-firing ratio of 20%.
Table 2. Designed thermal parameters of the boiler.

Table 3. The properties of two biomass-produced gases and a BFG.

Table 4. Some performance parameters of co-firing different gaseous fuels at the ratio of 20%.
Worldwide Protein Data Bank biocuration supporting open access to high-quality 3D structural biology data
Abstract The Protein Data Bank (PDB) is the single global repository for experimentally determined 3D structures of biological macromolecules and their complexes with ligands. The worldwide PDB (wwPDB) is the international collaboration that manages the PDB archive according to the FAIR principles: Findability, Accessibility, Interoperability and Reusability. The wwPDB recently developed OneDep, a unified tool for deposition, validation and biocuration of structures of biological macromolecules. All data deposited to the PDB undergo critical review by wwPDB Biocurators. This article outlines the importance of biocuration for structural biology data deposited to the PDB and describes wwPDB biocuration processes and the role of expert Biocurators in sustaining a high-quality archive. Structural data submitted to the PDB are examined for self-consistency, standardized using controlled vocabularies, cross-referenced with other biological data resources and validated for scientific/technical accuracy. We illustrate how biocuration is integral to PDB data archiving, as it facilitates accurate, consistent and comprehensive representation of biological structure data, allowing efficient and effective usage by research scientists, educators, students and the curious public worldwide. Database URL: https://www.wwpdb.org/
Introduction
The Protein Data Bank (1) (PDB, pdb.org) was established in 1971 with just seven X-ray crystal structures and was the first open-access digital biological data resource. Today, the PDB is the single global archive for 3D macromolecular structure data, containing >130 000 structures determined by macromolecular crystallography (MX; using X-ray photons, electrons or neutrons), nuclear magnetic resonance (NMR) spectroscopy and electron cryomicroscopy (3DEM) methods. The Worldwide PDB (2) (wwPDB, wwpdb.org) was formed in 2003 to ensure global management of the PDB archive for the public good. The wwPDB founding members were three wwPDB regional data centers: the Research Collaboratory for Structural Bioinformatics PDB (RCSB PDB) (1) in the United States, the PDB in Europe (PDBe) (3) and PDB Japan (PDBj) (4). The Biological Magnetic Resonance Bank (BMRB, University of Wisconsin in USA and Osaka University in Japan) (5), which manages an archive of NMR experimental data, joined the wwPDB in 2006.
PDB data from MX, NMR and 3DEM are accessed by both non-expert and expert data users globally. In 2016, >1 million data users worldwide performed >590 million structure data file downloads, corresponding to ~1.5 million data downloads per day. Based on our analysis of the annual Database issues of Nucleic Acids Research from 2011 to 2016 (www.oxfordjournals.org/nar/database/a/), 200 data resources access and use PDB data. The PDB archive is accessed by a large and diverse user community that encompasses researchers working in the biotechnology, agricultural and pharmaceutical industries, academic and public sector scientists, students, educators and the curious public, with >80% of users having no or limited expertise in structural biology. PDB data and resources are used for basic and applied research across the sciences and in education, textbook publishing, experimental and computational methods development and drug discovery, to name but a few.
Biocuration is central to PDB data management. Indeed, the PDB archive is widely regarded as one of the best-curated biological data resources available (6). The primary goal of biocuration is to accurately and comprehensively represent biological knowledge, to enable computational analysis and to provide easy access to data for scientists, educators and students. It involves translation, standardization and integration of information relevant to biology into a data archive or resource, thereby enabling integration with the scientific literature and management of large data sets (www.biocuration.org/dissemination/who-are-we/). All data submitted to the PDB undergo critical review by subject matter experts who curate, annotate and validate incoming data for completeness and accuracy.
Currently, in addition to 3D atomic coordinates, each PDB data deposition contains experimental data and metadata describing the molecular model and experimental details. Metadata encompasses protein names, sequences, source organism(s), small-molecule information (e.g. chemical name, structure and formula), data collection information (e.g. instrumentation and data processing) and structure-determination information (e.g. model-building, refinement and validation methods and statistics). In addition, the wwPDB provides value-added annotation such as secondary structure, quaternary structure descriptions and information about ligand-binding sites.
OneDep (7) is a unified tool that supports deposition, validation and biocuration and is used by both wwPDB Biocurators (hereafter Biocurators) and PDB data depositors (hereafter Data Depositors). It was developed by the wwPDB partners in collaboration with EMDataBank partners (8) to ensure that high-quality, internally consistent data are collected and that both the Data Depositor experience and biocuration processes are consistent worldwide. Introduction of OneDep has eliminated many sources of inconsistency that inevitably arose while wwPDB regional data centers were using independent data-processing systems. These systems occasionally differed in their requirements for mandatory data items, in the software and validation standards they used, and in their output data formats. To ensure consistency of data representation, OneDep uses the PDBx/mmCIF (9) data dictionary, which enables data standardization, data-model extension and seamless data exchange among wwPDB regional data centers. During development of OneDep, the wwPDB partners agreed on common practices for PDB data deposition, biocuration and validation. The OneDep system and wwPDB validation processes have been described in recent publications [Young et al. (7) and Gore et al. (10), respectively]. In this publication, we describe in detail the processes, practices and tools that wwPDB regional data centers employ during biocuration of PDB structure depositions.
Data representation
The PDB has aimed to adhere to the FAIR principles (Findability, Accessibility, Interoperability and Reusability) (11) since its inception in 1971. Together, the wwPDB partners manage the archive and provide PDB data users (hereafter Data Consumers) around the world with unrestricted access to the structural data stored therein without limitations on data use (i.e. Reusability). To ensure Findability, each PDB entry is assigned a globally unique and persistent identifier. To ensure Interoperability with other data resources, PDB structure data, experimental data and associated metadata now conform to controlled vocabularies and semantic relationships defined in the PDBx/mmCIF dictionary (mmcif.wwpdb.org), which continues to be developed in collaboration with the scientific community (www.wwpdb.org/task/mmcif). In 2011, the PDBx/mmCIF format superseded the legacy PDB format flat file (9), originally used to store and distribute data. As of October 2017, the PDBx/mmCIF data dictionary encompassed almost 7000 data items, pertaining to atomic coordinates, experimental data, sample characteristics, structure-determination protocols, etc. The increasing size and complexity of macromolecular structures determined by MX, NMR and 3DEM, and introduction of new experimental methods have necessitated myriad changes to the PDBx/mmCIF data dictionary since its introduction.
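For orientation, the fragment below shows the flat item_name/value form that PDBx/mmCIF key-value data take, together with a deliberately naive parser. The item names are genuine dictionary items, the values are invented, and real applications should use a dedicated mmCIF library rather than this sketch.

```python
# A few PDBx/mmCIF key-value pairs (values invented) and a naive parser
# that illustrates the item_name/value structure the dictionary defines.
FRAGMENT = """\
_entry.id                 1ABC
_exptl.method             'X-RAY DIFFRACTION'
_refine.ls_d_res_high     1.90
_entity.pdbx_description  'example protein'
"""

def parse_pairs(text: str) -> dict[str, str]:
    items = {}
    for line in text.splitlines():
        if line.startswith("_"):              # data items begin with '_'
            name, value = line.split(None, 1)
            items[name] = value.strip().strip("'")
    return items

data = parse_pairs(FRAGMENT)
print(data["_exptl.method"], data["_refine.ls_d_res_high"])
```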
The data dictionary has also been augmented in response to the evolution of the PDB archive. The wwPDB PDBx/mmCIF working group (www.wwpdb.org/task/mmcif) works with wwPDB partners to ensure that this process is an orderly one. Biocurators work closely with Data Depositors to maintain data consistency and conformity with the PDBx/mmCIF data dictionary throughout the deposition process. As the PDBx/mmCIF dictionary evolves, Biocurators undertake periodic remediation of the contents of the PDB archive. Working as a global team, the Biocurators help to ensure that PDB data are indeed Findable, Accessible, Interoperable and Reusable for all Data Consumers worldwide.
Data standardization
Data standardization is central to successful data resource management, as it ensures consistency and all of the benefits flowing therefrom (e.g. Interoperability). During standardization, data are brought to semantic integrity and to a common format that facilitates usability (and Reusability) and permits large-scale data analyses and distribution. Lack of standardization can result in incomplete, inconsistent and erroneous data retrieval, and thereby impede interpretation. Critical aspects of data standards, such as semantic consistency, use of controlled vocabularies and data-format consistency, are described in Figure 1 and Table 1. One of the most important components of the wwPDB biocuration process is ensuring that stored data are defined precisely and uniformly in a machine-readable format, as specified by the PDBx/mmCIF data dictionary. The dictionary has a self-defining format in which every data item has attributes describing its features, including relationships to other data items, and supports validation of data items by providing controlled vocabularies, data types and ranges.
Controlled vocabularies are used throughout the PDBx/ mmCIF data dictionary to minimize ambiguity. About 600 mmCIF items in the PDB archive have enumeration lists, which are enforced during the deposition and biocuration processes (e.g. polymer types, entity types, instruments used in data collection and names of software packages). Such enumerations are extended as needed to keep pace with scientific and technological innovation. Although the dictionary can establish syntax for values, manual biocuration is often required to ensure accuracy and adherence to wwPDB policies. For example, polymer sequence, organism taxonomy, quaternary structure and ligand chemistry require expert manual inspection to validate correctness, scientific accuracy and internal consistency.
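A loose sketch of how such enumerations can be enforced in software follows. The item names are real dictionary items, but the value lists are abbreviated and the checking logic is our own illustration, not OneDep code.

```python
# Enumeration enforcement sketch: values must come from a controlled list.
ENUMERATIONS = {
    "_entity.type": {"polymer", "non-polymer", "water", "branched"},
    "_entity_poly.type": {"polypeptide(L)", "polypeptide(D)",
                          "polyribonucleotide", "polydeoxyribonucleotide"},
}

def check_enumeration(item: str, value: str) -> list[str]:
    """Return diagnostics; an empty list means the value is acceptable."""
    allowed = ENUMERATIONS.get(item)
    if allowed is not None and value not in allowed:
        return [f"{item}: '{value}' not in controlled vocabulary {sorted(allowed)}"]
    return []

print(check_enumeration("_entity.type", "polymer"))   # [] -> accepted
print(check_enumeration("_entity.type", "protein"))   # flagged
```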
Semantic consistency represents another critical aspect of data standardization, helping to eliminate ambiguities in understanding existing data items. For example, Figure 1 shows the category relationships for a molecular entity that ensure semantic consistency with the PDBx/mmCIF data dictionary. The 'entity' category represents a unique polymer or non-polymeric constituent in the entry and is the key for '_entity_poly', which describes the sequence of the polymer, and for '_entity_src_gen' or '_entity_src_nat', which show how the polymeric entity was produced: genetically manipulated or naturally occurring, respectively. The relationships in these categories are described by a shared key identifier in a parent/child relationship (denoted with gray shading in Figure 1). Table 1 provides an example of '_entity_src_gen', which describes a protein from Mus musculus produced by heterologous expression of the mouse gene in Escherichia coli.
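The parent/child relationship shown in Figure 1 amounts to a foreign-key constraint. The minimal sketch below checks it over hand-written rows; the data and the checking function are illustrative assumptions, not wwPDB software.

```python
# Every child row must reference an existing _entity.id (the parent key).
entity = [{"id": "1", "type": "polymer"}]
entity_poly = [{"entity_id": "1", "pdbx_seq_one_letter_code": "MKTAYIAK"}]
entity_src_gen = [{"entity_id": "2",   # deliberately broken reference
                   "pdbx_gene_src_scientific_name": "Mus musculus",
                   "pdbx_host_org_scientific_name": "Escherichia coli"}]

def check_parent_child(parents, children, parent_key, child_key, category):
    parent_ids = {row[parent_key] for row in parents}
    return [f"{category}: {child_key}='{row[child_key]}' has no parent _entity"
            for row in children if row[child_key] not in parent_ids]

errors = (check_parent_child(entity, entity_poly, "id", "entity_id",
                             "_entity_poly")
          + check_parent_child(entity, entity_src_gen, "id", "entity_id",
                               "_entity_src_gen"))
print(errors)   # flags the _entity_src_gen row that points at entity '2'
```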
Data quality control
To enforce data standardization, consistent relationships with multiple biological resources are maintained. Multiple external data resources are used and cross-referenced by the OneDep system, including the National Center for Biotechnology Information (NCBI) (12) taxonomy database and the UniProt (13) sequence database. For example, the controlled vocabulary for organism name employed within the OneDep deposition user interface corresponds to that of the NCBI taxonomy database, while protein sequences are mapped to the appropriate UniProt identifier on the basis of the taxonomy and other information supplied by the Data Depositor.
The OneDep system also controls data quality by setting boundaries for data values. These limits are determined according to scientific principles or by examining distributions of existing data items in the archive. Some data items have 'hard' limits (e.g. pH value or absolute temperature), while others have 'soft' limits (or likely ranges), such as R-values in MX. These limits are maintained in the PDBx/mmCIF data dictionary, and outliers are reported during OneDep deposition, validation and biocuration. Soft limits are provided for many items that follow a normal distribution; values more than three standard deviations from the mean are flagged as outliers to Data Depositors, who are asked to check and correct the value if necessary. For example, the soft limits for the observed R value for merging intensity (Rmerge) are set between 0.01 and 0.2. The system will therefore flag this data item with a warning if the provided value is 0.7 or 13 when a value of 0.13 is expected.

Table 1 (note). Note the use of the _entity_id key that links this category to the entity category as depicted in Figure 1. id, identifier.

Figure 1. Semantic relationships in the PDBx/mmCIF dictionary. The partial diagram shows the relationships within an entity, its polymer sequence, source taxonomy and the method used to produce it. The relationships in these categories are described by a shared key identifier in a parent/child relationship, denoted with gray shading. The dictionary is available at mmcif.wwpdb.org/.
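A compact sketch of the hard/soft limit and three-standard-deviation checks described above. Apart from the quoted Rmerge soft limits of 0.01-0.2, the limits, the archive statistics and the logic are assumptions for illustration.

```python
# Hard limits yield errors; soft limits and 3-sigma outliers yield warnings.
HARD_LIMITS = {"_exptl_crystal_grow.pH": (0.0, 14.0)}      # assumed bounds
SOFT_LIMITS = {"_reflns.pdbx_Rmerge_I_obs": (0.01, 0.2)}   # quoted in text

def check_value(item, value, mean=None, stdev=None):
    msgs = []
    if item in HARD_LIMITS:
        lo, hi = HARD_LIMITS[item]
        if not lo <= value <= hi:
            msgs.append(f"ERROR: {item}={value} outside hard limits [{lo}, {hi}]")
    if item in SOFT_LIMITS:
        lo, hi = SOFT_LIMITS[item]
        if not lo <= value <= hi:
            msgs.append(f"WARNING: {item}={value} outside likely range [{lo}, {hi}]")
    if mean is not None and stdev and abs(value - mean) > 3 * stdev:
        msgs.append(f"WARNING: {item}={value} is >3 sigma from archive mean {mean}")
    return msgs

# A misplaced decimal (13 instead of 0.13) trips the soft-limit warning.
print(check_value("_reflns.pdbx_Rmerge_I_obs", 13))
```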
Role of expert manual biocuration

PDB data are curated by professional Biocurators. The 17 Biocurators currently working across the wwPDB have, among them, strong domain expertise in MX, NMR, 3DEM, chemistry, biochemistry and molecular biology. Their primary responsibilities are to examine and validate incoming data (in collaboration with Data Depositors) to maintain the quality of the PDB archive and to release these data in a timely manner. They also regularly review PDB archive contents and perform remediation to improve data uniformity, quality and consistency.
Biocurators check deposited data for completeness, self-consistency and accuracy. For each incoming structure, they assess information about all steps in the structure-determination process, from protein expression, crystallization, sample preparation and data collection to final model refinement, resolving conflicting information and providing Data Depositors and Data Consumers with a comprehensive description of the structure. Despite automation of many processes, there are significant points where 3D structure data biocuration requires manual inspection and extensive scientific knowledge (particularly for ligand and sequence processing), and sometimes dialog with the Data Depositor.
With the OneDep system, Biocurators' workloads are balanced using automated geographic distribution on the basis of Data Depositor location. Of the 11 641 global depositions received in 2016, RCSB PDB processed 45% (coming mainly from the Americas and Oceania), PDBe processed 36% (Europe and Africa) and PDBj processed the remaining 19% (Asia). Geographic distribution has enabled Biocurators to communicate more efficiently with Data Depositors, with the majority of Data Depositors located in similar time zones as the wwPDB regional data center handling their submissions.
It is critical that Biocurators communicate among themselves to develop and standardize common biocuration practices and policies, to resolve annotation issues and to set functional requirements for improvements in the OneDep system to ensure high data quality, thereby contributing to the success of the wwPDB. Beyond day-to-day local interactions, this international team communicates through daily emails, weekly virtual meetings and annual face-to-face meetings. wwPDB biocuration policies and procedures are fully documented (wwpdb.org/documentation/annotation).
The wwPDB has long-standing relationships with many journals, allowing coordination of PDB data release in the public domain with the appearance of the corresponding scientific publications. wwPDB policies stipulate that PDB structure data should be publicly available when the structure-determination report is published, either electronically or in print. A number of journals inform the wwPDB on a weekly basis about upcoming articles and provide corresponding PDB IDs, publication dates and citation information to ensure nearly simultaneous publication of the research and release of corresponding PDB structure data. Current wwPDB policies stipulate that depositions should not be withheld from public release for more than 1 year from the time of submission and that depositions are to be released upon publication of a relevant article. If no publication appears within the 1-year period, the deposited structure must either be released or withdrawn.
Many journals now require that authors submit the official wwPDB validation report as part of the article submission/review process. These reports provide information about structure quality and various analyses of experimental data (10). They are frequently used by referees to confirm the accuracy and quality of the work under review. Currently, wwPDB validation reports are required for article submission by the Nature Publishing Group journals, eLife, the Journal of Biological Chemistry, International Union of Crystallography (IUCr) journals, Structure, Federation of European Biochemical Societies journals, the Journal of Immunology and Angewandte Chemie International Edition. Others strongly encourage submission of wwPDB validation reports with articles.
Every Biocurator also participates in outreach, education and public engagement activities to serve structural biologists, other researchers, educators, students, schools and the curious public. The wwPDB maintains a customer service desk for Data Depositors and Data Consumers, receiving communications from around the world. Sometimes, these communications report errors and/or Depositors' corrections helping to improve the quality of the archive. PDB users often notify the wwPDB when a structure-determination report has been published to help trigger public release of relevant data into the PDB archive.
Biocuration by Data Depositors
Biocuration begins at the time of structure deposition through the OneDep system. Mandatory data items are validated against the PDBx/mmCIF data dictionary for format compliance and completeness. A valid PDB deposition provides not only primary data and associated metadata but also critical information that helps Biocurators properly annotate the structure without relying solely on the atomic coordinates. For example, every deposition must include information about polymer sequences, quaternary structure and ligands present in the PDB entry.
Many structure-determination studies focus on one or more bound ligands (chemical components), including drugs, inhibitors or substrates. The Data Depositor has the option of identifying each such ligand as a 'ligand of interest'. In cases where the connectivity, bond orders and chirality of the ligand do not exactly match an existing entry in the wwPDB chemical component dictionary (CCD) (14), Data Depositors are asked to provide additional chemical information to ensure accurate identification of each ligand. This information must include at least one of the following: a chemical drawing, a SMILES string, an appropriate CCD reference identifier or the ligand restraint file that was used during structure refinement. This information is particularly important when the ligand was not built in its entirety during structure determination, where a tautomeric ligand is present, or where correct geometry and bond order cannot be inferred readily from the atomic coordinates.
Data Depositors are required to provide the sequences of all unique amino acid and nucleic acid macromolecules present in the experimental sample, and they are required to reconcile these sequences with the sequences represented within the atomic coordinates. Data Depositors are encouraged to provide sequence database references (e.g. UniProt), together with a description of any deletions, insertions, engineered mutations or affinity purification tags present in experimental samples.
For higher order quaternary structures or assemblies (e.g. dimers, trimers and tetramers), Data Depositors are expected to identify assemblies present in the experimental sample and provide any geometric transformations necessary to generate the corresponding quaternary structure from the crystallographic asymmetric unit (e.g. apply a symmetry matrix to the coordinates of a protein chain to generate a dimer). Data Depositors are also able to provide information regarding any experiments used to determine the quaternary structure in solution as supporting evidence to be included in the PDBx/mmCIF archival data file.
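As an illustration of the geometric transformations mentioned above (e.g. applying a symmetry operator to one chain to generate a dimer), here is a minimal numpy sketch; the operator and coordinates are made-up values.

```python
import numpy as np

# A two-fold (C2) rotation about the z axis plus a translation, x' = R x + t.
rotation = np.array([[-1.0,  0.0, 0.0],
                     [ 0.0, -1.0, 0.0],
                     [ 0.0,  0.0, 1.0]])
translation = np.array([10.0, 0.0, 0.0])     # Angstroms, illustrative

asym_unit = np.array([[1.2, 3.4, 5.6],       # a few atom positions (N x 3)
                      [2.3, 4.5, 6.7]])

mate = asym_unit @ rotation.T + translation  # symmetry-related copy
dimer = np.vstack([asym_unit, mate])         # assembled quaternary structure
print(dimer.shape)                           # (4, 3)
```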
Once primary data have been uploaded and harvested, the OneDep system generates a preliminary wwPDB validation report, which identifies potential issues with the structure and/or experimental data. Before concluding the submission process, Data Depositors are required to download and review this validation report, and either to accept the report as is or choose to improve the deposition by uploading revised data. Data Depositors are strongly encouraged to correct any issues prior to finalizing the deposition. Once the validation report and the terms of wwPDB submission are accepted, the Data Depositor can submit the data. At this point, PDB, BMRB and/or Electron Microscopy Data Bank accession codes are issued and the deposition is transferred for internal processing by Biocurators.
Biocuration by wwPDB Biocurators
The wwPDB biocuration workflow has been designed to execute mandatory tasks automatically and invoke other necessary tasks on demand. Based on extensive combined biocuration experience across the wwPDB, a series of mandatory and optional tasks have been identified and organized into several modules within the OneDep workflow as shown in Figure 2A. Each module is initiated upon successful completion of the previous module in the workflow. Because proper execution of many tasks is dependent upon successful completion of previous tasks, the workflow system ensures that all tasks are performed in the correct order. Some tasks require extensive review and/or input from Biocurators; others can be performed automatically.
The wwPDB biocuration workflow is controlled via an interactive workflow manager (WFM), which informs Biocurators when an automatic process has finished. For example, when an automated Proteins, Interfaces, Structures and Assemblies (PISA) calculation for quaternary structure prediction is completed, the workflow status changes from gray to yellow, informing Biocurators that they can access the user interface in the value-added annotation module for further manual biocuration, as shown in Figure 2B. The system also allows Biocurators to monitor progress of multiple entries, access each module for inspection and perform manual curation of entries. The WFM tracks and logs completion of modules and provides Biocurators with the ability to restart processing at any point along the workflow or run individual modules outside of the normal workflow. The WFM manages correspondence with Data Depositors and signals whether sent messages have been read or require a reply. If a PDB deposition needs to be updated by the Data Depositor, the Biocurator can unlock the deposition interface, suspending further biocuration until appropriate Depositor action has occurred. Entries ready for release are highlighted by the WFM.
Following initial content review, Biocurators begin with the entity transformer module, which surveys the overall polymer versus non-polymer (ligand) representation. This is followed by instantiation of the ligand processing module to check ligand stereochemistry and assign the correct ligand reference identifier (CCD three-letter code). Thereafter, the sequence processing module enables cross-referencing of polymer sequences and taxonomy. Finally, the Biocurator provides value-added annotation with the aid of the annotation module. Once annotation of a PDB entry is complete, Biocurators use the validation module to assess the quality of the atomic structure and its agreement with experimental data. At the end of the biocuration process, Biocurators use the communication module to compose a letter (highlighting major issues), which is then sent together with the processed files and a validation report to the Data Depositor for approval or correction. The automated workflow tasks and manual biocuration tasks for each module are described in Table 2. Modular biocuration steps are described in further detail later.
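The ordered, dependency-checked module sequence just described can be caricatured in a few lines. The module names follow the text; the class and its behavior are our own sketch, not the actual WFM implementation.

```python
# Each module may run only after every earlier module has completed.
MODULES = ["entity_transformer", "ligand_processing", "sequence_processing",
           "annotation", "validation", "communication"]

class Workflow:
    def __init__(self):
        self.done: list[str] = []

    def run(self, module: str) -> None:
        idx = MODULES.index(module)
        if self.done != MODULES[:idx]:
            raise RuntimeError(f"{module} requires {MODULES[:idx]} "
                               f"to finish first (done: {self.done})")
        # ...the module's automated tasks would execute here...
        self.done.append(module)

wf = Workflow()
wf.run("entity_transformer")
wf.run("ligand_processing")
# wf.run("validation")   # would raise: sequence_processing/annotation pending
```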
Initial review
Upon initiation of the OneDep workflow, the report module analyses the data for errors and/or inconsistencies. The report module generates an internal report that includes both the results of these calculations and a listing of selected metadata. This initial review informs the Biocurator about the content of the deposition and highlights issues that may need to be examined and, if possible, corrected during processing.
Entity transformation
Within the PDB entry there may be multiple instances of a particular chemically distinct molecule, referred to as an entity (first module in Figure 2A). As discussed earlier, entities may be polymers (e.g. protein or nucleic acid) or non-polymers (e.g. organic ligands, ions or solvent molecules). Ligands covalently bound to polymers are usually defined as non-polymer entities independent of the polymers to which they are attached (with the exception of some common post-translationally modified residues).
The entity transformation module enables Biocurators to ensure that the ligands in a newly deposited structure are depicted in a manner that is consistent with others already present in the PDB archive. In some cases, the ligand as provided by the Data Depositor may need to be described in terms of smaller components. For example, a peptide-like small molecule, such as some antibiotic compounds, may be treated as a string of modified and/or unmodified amino acids, if the constituent parts adhere to the rules that designate a polymeric entity, or as a large ligand (non-polymer). Whereas a non-polymeric representation is usually convenient for defining overall connectivity and restraints during structure determination and refinement, polymeric representations are typically better at depicting the underlying biochemistry. Although each type of representation has intrinsic benefits, it is important to ensure their consistent representation in the atomic coordinate files across the PDB archive. Peptide-like small molecules were exhaustively reviewed in 2012, and since then have been represented consistently in both the CCD and atomic coordinate files. This process included introduction of an additional representation to describe peptide-like ligands, called the peptide reference dictionary (PRD), to retain an overall definition for peptide-like small molecules (15).
The entity transformation module searches the atomic coordinate file of the newly deposited structure and returns close peptide-like small molecule matches in the CCD and PRD. The interface allows Biocurators to compare polymeric sequences and 2D and 3D atomic configurations of ligands with matched PRD definitions. This module also includes tools that allow transformation between nonpolymer and polymer representations to ensure consistency across the PDB archive. In addition to changing how ligands are represented, polymer chains may need to be split or merged depending on whether or not they are covalently linked via a standard peptide bond or nucleic acid linkage. Since re-configuration of polymers and nonpolymers often requires repeating the biocuration process of either ligands or polymer sequences (or both), it is important that all entity types are properly defined at the outset.
Ligand processing
Structures of ligands bound to biological macromolecules provide atomic level insights to aid understanding of the function of protein molecules, aid in drug discovery and serve other research applications. About 75% of all structures currently in the PDB archive contain at least one ligand that is not a water molecule. Hence, ligand processing (16,17), involving verification of chemical identity, validation of geometrical quality and validation of atomic coordinates against experimental data, is one of the most important aspects of wwPDB biocuration. Verification of chemical identity involves matching of all instances of a given ligand within a newly deposited structure to a corresponding chemical definition in the CCD (14), and standardization of atom naming to conform to the nomenclature defined in the CCD. The ligand processing module extracts all non-polymer entities and non-standard polymeric residues from the deposited atomic coordinates and performs a sub-graph isomorphism search of the CCD. This search returns a list of top hits ranked by the matching scores and provides interactive 2D and 3D ligand views that allow visual inspection of both the Data Depositor-provided ligand structure and the corresponding matched CCD component (Figure 3). Additional chemical information (e.g. SMILES string), if provided by the Data Depositor, is illustrated in a 2D chemical drawing for Biocurator verification. If no match to an existing CCD entry is found, the Biocurator defines a new chemical component for the CCD using the ligand editor functionality of the ligand processing module.
Standardization of atom nomenclature to that in the CCD is a fully automated process, but match identification is considerably more complex and often requires Biocurator review and manual intervention. The Biocurator notifies the Data Depositor of any problems regarding ligand identity, configuration and conformation. Typical steps followed during ligand biocuration may include, but are not limited to:
• Reconciliation of additional ligand information provided by the Data Depositor (e.g. CCD IDs, SMILES strings, International Union of Pure and Applied Chemistry (IUPAC) names or images) with the ligand instances present in the entry.
• Recognition and identification of existing CCD components, even in cases where portions of a ligand are not modeled, or where geometric errors (e.g. incorrect chirality, bond lengths and angles or bond orders) are detected. The Biocurator is able to review the chemistry within this module but may also need to review literature sources, consult online resources (e.g. PubChem) or communicate with Data Depositors.
• Creation of new ligand definitions when a match is not present in the CCD. Since a new CCD component will be used as a reference for all future PDB depositions containing the same ligand, Biocurators invest considerable effort to verify the chemical identity of each ligand. In many instances, Biocurators seek confirmation from Data Depositors.
Redundancy and consistency checks among PRD, CCD and PDB entries are performed, and the Biocurator is alerted to any discrepancies found between the newly deposited atomic model and any of these resources. Examples of such discrepancies include a peptide-like small molecule present in the deposition that is not referenced to an existing PRD entry or a non-polymer ligand in the deposition that should have been represented as a polymeric peptide according to the PRD.
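To give a flavor of chemical-identity matching, the sketch below uses RDKit, which is not necessarily the software behind OneDep: canonical SMILES comparison stands in for exact graph matching, and a substructure search stands in for matching a partially modeled ligand. The molecules are arbitrary examples.

```python
from rdkit import Chem

deposited = Chem.MolFromSmiles("c1ccccc1C(=O)O")   # benzoic acid, as modeled
reference = Chem.MolFromSmiles("OC(=O)c1ccccc1")   # candidate CCD definition

# Exact identity: canonical SMILES are independent of atom ordering.
print("exact match:",
      Chem.MolToSmiles(deposited) == Chem.MolToSmiles(reference))   # True

# Partial modeling: a fragment of the ligand still matches as a substructure.
fragment = Chem.MolFromSmiles("c1ccccc1")          # only the ring was built
print("substructure:", reference.HasSubstructMatch(fragment))       # True
```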
Sequence processing
This module compares the amino acid or nucleic acid polymer sequence provided by the Data Depositor to both the sequence represented within the deposited atomic coordinates and a sequence from an external reference database such as GenBank (12) or UniProt (13). wwPDB policy requires that Depositors report the actual polymer sequences of the molecules present in the experimental sample, including any modifications or added portions (e.g. engineered mutations, post-translational modification, affinity tags for purification and cloning artifacts). In addition, the deposited information must include any segments of the polymer chain that were not included (for any reason) in the atomic coordinates but which were present in the experimental sample (e.g. unmodeled loop regions). Moreover, there should be no discrepancies between the deposited sequence(s) and the atomic coordinates. The source organism for the deposited sequence (naturally obtained or engineered) should be provided, with the exception of non-biological sequences which have the source organism identified as 'synthetic construct'. If the deposited polymer sequence is consistent with a reference sequence entry from UniProt (for proteins) or GenBank (for nucleic acids), then the corresponding accession from these databases is captured and any discrepancies between the sample sequence and the reference are annotated. These mandatory elements are necessary but not sufficient to complete sequence annotation.
Sequence comparison [BLAST (18)] is run automatically against UniProt (for proteins) and GenBank (nucleic acids), with the result used by the Biocurator in conjunction with sequence identity and taxonomy matching to determine the appropriate cross-reference. In some cases, further clarification is required from the Data Depositor as to the exact content of their experimental sample. Comparisons between the experimental sequence, the sequence derived from the atomic coordinates and sequence database results are used to identify affinity tags (or cloning artifacts, depending on the Data Depositors' description), insertions, linkers, deletions, possible mutations or variants and start and end points of segments within chimeric constructs. Visual inspection of alignments of the deposited sequences and the reference sequences from UniProt or GenBank allows identification of any peptide and/or nucleic acid linkage issues within the atomic coordinates and identification of incomplete experimental sample sequences provided by the Data Depositor. Sometimes mismatches between the sequences reflect errors in the deposited atomic coordinates. Such cases require that Biocurators consult with Data Depositors for clarification and/or correction. The reference sequence also helps Biocurators identify and annotate chimeric constructs (i.e. those derived from multiple source organisms). After sequence alignment verification, residues that are missing some of their sidechain atoms and any residues labeled incorrectly as alanine or glycine are updated to match the sample sequence. External sequence references are also used to standardize the protein name, the scientific name of the source organism and its taxonomy. Figure 4 shows examples of sequence alignments used by Biocurators during sequence annotation. The Data Depositor-provided sample sequence, the sequences extracted from the atomic coordinates for each polymeric chain in the structure and closely matching UniProt sequences are aligned and presented for analysis. Discrepancies between the sequences are highlighted and listed in an interactive table, where Biocurators can select the appropriate annotation from a controlled vocabulary list ( Figure 4A). If any part of the structure requires visual inspection, Biocurators select the relevant residue range and use the 3D viewer available within the sequence processing module to examine the 3D structure of the corresponding sequence range ( Figure 4B). This feature is particularly helpful for inspecting sequence connectivity and alignment, particularly for disordered or poorly resolved regions of a structure where residues or sidechains were omitted from the deposited atomic coordinates. Figure 4C illustrates the case of a chimeric protein containing a fusion of partial sequences from two different proteins that align with sequences from distinct UniProt entries. Correct sequence annotation for chimeric proteins requires inclusion of the residue range and source organism name for each segment of such a chimera. If this information is not provided during deposition, Biocurators will request it from the Data Depositor.
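A toy sketch of the sample-versus-coordinates comparison described above: it flags unmodeled residues and possible mutations or conflicts. The sequences and labels are invented; real processing relies on BLAST against UniProt/GenBank plus expert review.

```python
# Pre-aligned sequences: '-' marks residues absent from the coordinates.
sample  = "MHHHHHHMKTAYIAKQRQISFVK"   # sample sequence with His-tag (invented)
modeled = "-------MKTAYIVKQRQISF--"   # sequence derived from the coordinates

def annotate(sample_seq: str, coord_seq: str):
    findings = []
    for pos, (s, c) in enumerate(zip(sample_seq, coord_seq), start=1):
        if c == "-":
            findings.append((pos, s, "not modeled in coordinates"))
        elif s != c:
            findings.append((pos, f"{s}->{c}", "conflict: possible mutation"))
    return findings

for pos, res, note in annotate(sample, modeled):
    print(pos, res, note)   # tag and termini unmodeled; A->V conflict at 14
```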
Value-added annotation
The added annotation module of the OneDep system enables a series of automated calculations (tasks numbered i-v later) and semi-manual annotations of metadata (tasks numbered vi-viii): i. Ligand and solvent chain associations and numbering: Ligand and solvent chain identifiers and residue numbering are re-assigned automatically according to wwPDB policy where necessary.
ii. Solvent position: In MX structures, water molecules are moved to symmetry-related positions to place them closest to the polymer chains comprising the asymmetric unit. For water molecules that cannot be repositioned close to any polymer chain, Biocurators consult with Data Depositors.
iii. Links: Interatomic links between any non-standard or polymeric residues and ligands are automatically generated and made available for Biocurators to review and correct as needed.

iv. Secondary structure: The OneDep system calculates protein secondary structure (19) for use by visualization programs that rely on PDB secondary structure records. On occasion, a Data Depositor may elect to provide secondary structure; in such cases, Biocurators incorporate this information and label the data as being author-determined.

v. Extended checks: Although the official wwPDB validation report (10) is produced in a subsequent OneDep module, a series of tests are performed to evaluate the atomic coordinates and their fit to the experimental data. In addition to ensuring adherence of deposition contents to the PDBx/mmCIF data dictionary, results of additional scientific checks are provided for review (e.g. peptide-bond linkages, close contacts, unusual metadata values or inconsistent metadata values across different items). If Biocurators cannot correct identified issues, they consult the Data Depositor, and remaining issues may be highlighted in the final wwPDB validation report.

vi. Quaternary structure (assembly) determination: By convention, atomic structures determined by MX deposited into the PDB encompass only the smallest possible representation of the molecular component(s) comprising the crystal lattice (i.e. the asymmetric unit that repeats to form the crystal). These asymmetric units may constitute only a portion of the macromolecular assembly present in the experimental sample. In such cases, both the atomic structure of the asymmetric unit and applicable geometric transformations (rotation/translation operators) are required to generate computationally the atomic structure of the macromolecular assembly in its entirety. Uncertainties concerning the correct choice of macromolecular assembly from MX structures are not unusual. For example, there may be more than one energetically favorable spatial arrangement of asymmetric units, each corresponding to a distinct assembly. Without additional experimental evidence, it is generally not possible to determine which, if any, of these putative assemblies are relevant, or even occur in solution or in vivo. OneDep collects experimental evidence supporting the assembly provided by the Data Depositor. Determination of possible macromolecular assemblies from the results of an MX structure determination is a complex multi-step process. First, assembly information provided by the Data Depositor is considered. Second, PISA (20) software is used to predict assemblies, which are cross-checked against the information provided by the Data Depositor. For viruses and other complex assemblies with point or helical symmetry, depositor-uploaded symmetry matrices are processed using the Pointsuite tool (21).

vii. Metadata editor: Common and experimental method-specific views are provided for metadata annotation using PDBx/mmCIF data dictionary controlled vocabularies.

viii. Method-specific features: Method-specific tools enable adjustment of metadata in both atomic coordinate and experimental data files. For MX depositions, e.g. the reported X-ray or neutron wavelength in the structure factor file is often misreported and can be corrected. For NMR, tools enable manipulation of chemical shift data files to ensure that their atom nomenclature is consistent with that of the atomic coordinates. For 3DEM, Biocurators can edit 3DEM map headers after checking the 3DEM maps themselves to ensure internal consistency with the other uploaded files; Biocurators also check the fit of the atomic coordinates to the 3DEM maps. Currently, this step is performed visually using the University of California, San Francisco Chimera (22) graphics display software. In addition, 3D interactive difference electron-density maps of ligands for MX entries are provided at different contour levels for Biocurators to verify structural details. For example, Figure 5A displays the electron-density fit of heparin oligosaccharide bound to annexin in PDB entry 2HYV (23). This case is an example of a good electron-density fit for four well resolved monosaccharides (residues 801-804), with a partial density fit for a disordered monosaccharide (residue 805). Figure 5B illustrates an example of poor electron-density fit for the nicotinamide adenine dinucleotide phosphate (NADP) ligand bound to alcohol dehydrogenase in PDB entry 1ZK4 (24). The wwPDB validation report ligand-related statistics for this entry include an extremely high real-space R-factor value (25,26) of 0.67 for the NADP ligand. Detailed analyses of this particular case were reported by Weichenberger et al. (27) and Shao et al. (28).

Figure 4C (caption). This chimeric acetylcholine-binding protein from Aplysia californica, PDB entry 5TVC, contains loop C from the human alpha-3 nicotinic acetylcholine receptor. The alignment shows that residues 1-181 in the deposited sample sequence correspond to UniProt sequence Q8WSF8, residues 182-197 to UniProt sequence P32297 and residues 198-219 again to UniProt sequence Q8WSF8.

Figure 5. Comparison of ligand structures with 3D electron-density views. The electron-density maps shown in panels A and B are 2m|Fo|-D|Fc| maps contoured at a 1.0 σ cutoff. (A) Good electron-density fit for heparin oligosaccharide at residues 801-804 bound to annexin in PDB entry 2HYV. (B) Poor electron-density fit for NADP bound to alcohol dehydrogenase in PDB entry 1ZK4.
Validation/final review

An important goal of data quality control through biocuration is to ensure that the interpretation of the experimental data is consistent. At the end of the OneDep biocuration pipeline, a wwPDB validation report (10) is generated for the Data Depositor. This document, which was developed in collaboration with community experts (29-31), serves as the official wwPDB validation report that the Data Depositor is strongly encouraged to provide to scientific journals to aid article review. The wwPDB validation report highlights any unusual geometric features within the atomic coordinates. For MX structures, the report also highlights any discrepancies between the atomic coordinates and the experimental data from which the structure was determined. The report is separated into sections that describe polymer and non-polymer components. Outliers are highlighted in tabular form within the report and are also shown in the form of a high-level summary. The validation measures of the deposited structure are compared with those of similar entries in the PDB and given a percentile score so that the Data Depositor (as well as journal editors and referees and subsequently Data Consumers) can see at a glance how the quality of this structure compares to that of others in the archive.
Communication with Data Depositors
The OneDep communication module enables all communication between Data Depositors and Biocurators, for a particular deposition, to be archived in one place. Once the wwPDB biocuration process is complete, Biocurators summarize any outstanding issues in a standardized letter, much of which is generated automatically. This summary letter along with the atomic coordinates, experimental data and wwPDB validation report are all made available to the Data Depositor through the OneDep deposition user interface. The Data Depositor receives an email notification to log back into the OneDep system and review the curated data files and the wwPDB validation report. At this stage, corrections may be requested to remedy any major issues identified during biocuration, such as polymer chain breaks, stereochemical (chirality) errors in residues or ligands and interatomic clashes. Frequently, Biocurators also seek Data Depositor clarification on the sample sequence used in the experiment, annotation of the quaternary structure macromolecular assembly, ligands and inconsistent data items, etc. Timely response helps expedite completion of the deposition process and preparations for public release. On receipt of the Data Depositor's response, Biocurators incorporate changes to the deposition and send updated files and validation reports back for review/approval. Once finalized, the new PDB entry is released in accord with the Data Depositor's instructions and wwPDB policy.
Data Depositors are notified 3 months, 2 months and 1 month prior to the 1-year hold-expiration date. The PDB entry is released at the end of the 1-year period if the Data Depositor does not respond to the hold-expiration notification. The wwPDB is alerted to publication dates and citation information by Data Depositors, some scientific journals and frequently by Data Consumers. In addition, the OneDep citation tracker scans the literature for publications on a weekly basis. Once a citation has been found, the relevant Data Depositor is notified about the upcoming release date and the citation details.
Outcomes

Improved efficiency

Figure 6 illustrates the average number of PDB depositions processed annually per Biocurator full-time equivalent (FTE) and the number of total global depositions as a function of time. This graph shows that productivity has nearly doubled since 2008, reflecting a regime of continuous improvement that was accelerated by the OneDep system. During the transition period (indicated by * in Figure 6), productivity did not improve because the OneDep system had just been put into production: Biocurators had to operate both the new and legacy systems in parallel while learning to use the new system. In addition, the total number of annual depositions fell slightly in 2014. Efficiency gains continued once the OneDep system was fully implemented and had replaced the legacy systems. These productivity improvements come despite year-on-year increases in the complexity of structure depositions (7), along with a significant increase in better-quality added value, such as ligand annotation, quaternary structure definitions and the comprehensive validation report provided in the OneDep system. The dashed line indicates when the comprehensive wwPDB validation report was first introduced to Depositors, prior to the OneDep system, in August 2013.
Assisted by the OneDep system, Biocurators not infrequently identify issues with deposited data and request corrections from Data Depositors. Based on wwPDB correspondence records, the most frequently raised issues during biocuration are as follows: ligand chirality errors (26% of all issues raised), polymer backbone linkages (24%), interatomic clashes (12%) and sequence discrepancies between reference and Data Depositor-provided sequences (8%) as shown in Table 3. About 13% of the total number of depositions within a recent 6-month period had at least one of the issues listed in Table 3 raised by Biocurators.
In the most serious cases, Data Depositors provide replacement data (atomic coordinates and/or experimental data). In 2015, 29% of depositions underwent data replacement (falling to 25% in 2016). Although time-consuming for Biocurators, the wwPDB regards this as a 'good problem to have'. It also helps to inform on-going improvements to the OneDep system so that Data Depositors are alerted to potential issues as early as possible.
The wwPDB is committed to helping all Data Depositors improve data quality, while working to improve Biocurator efficiency. We, therefore, provide an anonymous wwPDB validation server for use prior to deposition and are working to make this facility as widely known as possible. We are collaborating with major structure determination and refinement software developers to promote use of the wwPDB validation webservice application programming interface so that Data Depositors can more easily validate their structures prior to deposition. We continue to improve the way in which the OneDep deposition module reports major issues to our Data Depositors, making it more likely that these issues will be addressed before the expert Biocurators begin their work.
Improved data quality
Following introduction of the wwPDB OneDep system in 2014, data completeness has improved, as the number of mandatory data items in the dictionary has nearly doubled since 2014 (2280 versus 1249 mandatory data items). In addition, depositions have become more consistent because of increased use of controlled vocabularies (596 versus 474 data items with defined controlled vocabularies). Structures deposited using OneDep also exhibit higher data quality (10,28). PDB Data Depositors and Data Consumers have become more aware of quality assessment since 2015, when wwPDB validation reports became available for the entire PDB archive. The OneDep system has also enabled better representation of chimeric proteins through complete annotation of each sequence fragment within a polymer entity.
The wwPDB is committed to maintaining uniformity and standardization across the entire archive. Data representation for newly determined structures can be challenging as methods in structural biology evolve and as the structures themselves become more complex. To address these challenges, Biocurators regularly review the archived data and perform archival updates (i.e. remediation) to improve data representation and ensure consistency. Data categories and items in the PDBx/mmCIF data dictionary are often extended or enhanced during remediation campaigns.
The wwPDB has undertaken several major archival remediation projects over the past decade. In 2007, efforts were made to standardize atom nomenclature, update sequence references and provide taxonomy information (32). In 2008, representation of icosahedral viruses was made uniform (21). In 2011, uniform/dual representation for peptide-like small molecules was accomplished (15). In 2014, very large structures, which were historically split into multiple PDB entries (due to the limitations of the legacy PDB file format) were combined into single files and entries in PDBx/mmCIF format. This measure allowed the remediated large structures to be visualized in 3D in their entirety and validated against experimental data for the first time. In 2017, the PDBx/mmCIF atomic coordinate files in the PDB archive were updated to conform to the latest version of PDBx/mmCIF data dictionary. In addition, the representation of chimeric proteins was standardized through complete annotation of each sequence fragment within a polymer entity. wwPDB remediation efforts are on-going to ensure consistency across the archive. Major wwPDB remediation undertakings have been reported in peer-reviewed scientific publications (15,21,32), and all wwPDB remediation activities are documented on the wwPDB website (www.wwpdb.org/docu mentation/remediation). Importantly, the OneDep system contains functionality to support remediation efforts, thus making them more efficient.
Future challenges and conclusion
There are many challenges ahead that the wwPDB partners need to address.
1. Keeping pace with new developments in structure-determination techniques: New and evolving techniques in structural biology, such as 3DEM and serial femtosecond X-ray crystallography using X-ray free electron lasers (XFEL), and entirely new approaches to structure determination, such as integrative/hybrid methods (I/HM) (33), are coming to the fore. These advances will require major additions to the PDBx/mmCIF data dictionary and changes in the OneDep system to properly represent the outcomes of multi-scale/time course structure determinations and to capture structural information, experimental data and metadata. The wwPDB has begun working with community experts in XFEL and I/HM to develop PDBx/mmCIF dictionary extensions for data standards that can be used in the OneDep system to support these techniques.

2. Scaling up the day-to-day operations: These accelerating changes in the science and technology of structural biology will also present challenges for the Biocurators; e.g. XFEL and I/HM domain expertise will be required. Both the OneDep system and biocuration practices need to evolve in the face of these changes. As the number of depositions per year increases and the size and complexity of incoming structures grows, there is a pressing need for further automation of the wwPDB biocuration processes. Moreover, with growing concerns about accuracy and reproducibility across the sciences, the OneDep validation module will require further enhancement.

3. Training and retention of workforce: The wwPDB places considerable emphasis on training and retention of our highly skilled Biocurators. We are committed to ensuring that biocuration is a rewarding and valued career within our organization. Looking more broadly across biology and medicine, the scientific community depends critically on ready access to comprehensive, high-quality primary archival data resources. The International Society for Biocuration (www.biocuration.org/) helps Biocurators develop throughout their professional careers through annual International Biocuration Conferences, workshops, communication forums, etc.
In conclusion, we wish to reiterate that the scientific community, and society in general, requires a durable and permanent record of the results of research. For these data to be Findable, Accessible, Interoperable and Reusable, they must be expertly and thoroughly curated. Ideally, experimental data and metadata should be prepared for archiving prior to publication, not after the fact (or never as is unfortunately often the case). Since its inception in 1971, the PDB has served as the exemplar of a first-rate curated scientific data archive. Skilled Biocurators, enabled with stringent software checks, apply their domain expertise to ensure access to high-quality data for Data Depositors and Data Consumers alike. Since 2003, the global wwPDB partnership has provided a robust framework for expert biocuration in furtherance of its mission to maintain and grow a sustainable archive of structural biology data made freely available without limitations on data usage for researchers, educators, students and the curious public around the globe.
Usage notes
PDB data are public and open access (ftp://ftp.wwpdb.org/pub/pdb/data/structures/) for experts and non-experts with no limitation on usage. We ask users to cite 'Berman et al. (2)' when PDB data are referenced. The PDBx/mmCIF data dictionary and CCD are defined at mmcif.wwpdb.org/ and www.wwpdb.org/data/ccd, respectively. Information about the OneDep system including tutorials and an FAQ list is available at www.wwpdb.org/deposition/system-information. The documentation for wwPDB biocuration procedures and policies is maintained at www.wwpdb.org/documentation/annotation.
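As a quick illustration of consuming files obtained from these locations, the sketch below reads a few data items from a locally downloaded PDBx/mmCIF file; the use of Biopython and the file name are assumptions for illustration, not part of the wwPDB documentation.

```python
# A minimal sketch (assumptions: Biopython is installed and a PDB entry has
# already been downloaded from the archive as "entry.cif").
from Bio.PDB.MMCIF2Dict import MMCIF2Dict

mmcif = MMCIF2Dict("entry.cif")  # parses all data items into a dictionary

# PDBx/mmCIF data items are addressed by "_category.item" keys, as defined
# in the PDBx/mmCIF data dictionary at mmcif.wwpdb.org/.
print(mmcif["_entry.id"])      # PDB identifier
print(mmcif["_struct.title"])  # entry title
print(mmcif["_exptl.method"])  # experimental method(s)
```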
Predicting sRNAs and Their Targets in Bacteria
Bacterial small RNAs (sRNAs) are an emerging class of regulatory RNAs of about 40–500 nucleotides in length that, by binding to their target mRNAs or proteins, are involved in many biological processes such as sensing environmental changes and regulating gene expression. Thus, identification of bacterial sRNAs and their targets has become an important part of sRNA biology. Current strategies for discovery of sRNAs and their targets usually involve bioinformatics prediction followed by experimental validation, emphasizing the key role of bioinformatics prediction. Here, therefore, we provide an overview of prediction methods, focusing on the merits and limitations of each class of models. Finally, we present our thinking on the future development of related bioinformatics models.
Introduction
Bacterial small RNAs (sRNAs) are an emerging class of small regulatory RNAs of about 40-500 nucleotides in length [1]. Originally they were called small non-coding RNAs [2]. However, recent studies showed that some sRNAs, including SgrS and RNAIII [3,4], can also encode small proteins. Thus, this class of RNA molecules is now called small regulatory RNAs [5]. Through binding to their target mRNAs or proteins, these sRNAs participate in many biological processes, regulating the expression of outer membrane proteins [6,7], iron homeostasis [8][9][10], quorum sensing [11,12] and bacterial virulence [13,14]. For example, RNAIII of Staphylococcus aureus is associated with bacterial pathogenesis [14].
The functional importance of these sRNAs in responding to environmental changes has encouraged efforts to find more and more sRNAs. According to the sRNA database sRNAMap [1], more than 900 sRNAs have been reported, mostly transcribed from intergenic regions. sRNAs are heterogeneous in terms of sequence length and secondary structure. In addition, sRNAs are not sensitive to frame-shift or nonsense mutations. Therefore, it is still difficult to find sRNA genes directly using genetic screening methods. Current strategies often use a combination of bioinformatics prediction and experimental validation [15]. For example, through the combination of genome sequencing techniques and comparative genomics-based analysis, 88 sRNAs have been identified in the TIGR4 strain of the human pathogen Streptococcus pneumoniae [16]. Therefore, developing prediction models for sRNA discovery is extremely critical. To date, two classes of prediction methods have been developed, i.e., comparative genomics-based [17][18][19][20][21][22] and machine learning-based methods [23][24][25][26].
With more and more sRNAs obtained, determining their functions will also become an important part of sRNA biology. According to the locations of sRNA genes and their targets [27], sRNAs can be classified into cis-encoded sRNAs and trans-encoded sRNAs. For the cis-encoded sRNAs, sRNA genes overlap with their target genes and there exists a perfect base pairing region between their transcripts, while for the trans-encoded sRNAs, sRNA genes are separate from their target genes and there is often an imperfect base pairing region between their transcripts (Figure 1). For example, an imperfect base pairing region is present between the sRNA IstR and its target mRNA tisB [5] (see sRNATarBase for detailed information, http://ccb.bmi.ac.cn/srnatarbase/). The imperfect base pairing makes target mRNAs much more difficult to detect, which renders experimental validation essential after computational prediction. Nevertheless, computational methods provide a timesaving and less labor-intensive way to identify sRNA targets. To this end, several prediction models have been developed [28][29][30][31][32][33][34][35][36].
Taken together, bioinformatics prediction plays an important role in discovering sRNAs and their targets, as pointed out by several reviews on bioinformatics prediction and experimental discovery [37][38][39][40]. In the current review, we focus on the merits and limitations of each class of models and provide some perspectives on future development in this field.
Prediction of bacterial sRNAs
In essence, the process of developing bioinformatics models is to learn rules from known samples and then to apply those rules to new samples for experimental validation. Therefore, understanding the characteristics of bacterial sRNAs is vital in developing sRNA prediction models. The available literature indicates that sRNAs possess the following features [37][38][39][40]. First, sRNAs are widespread and each bacterium is assumed to contain sRNA genes. Second, sRNAs are heterogeneous in sequence length and secondary structure as mentioned previously. The sequences of sRNAs range from 40 to 500 nucleotides in length.
Third, unlike tRNAs with the conserved cloverleaf secondary structure pattern, or eukaryotic microRNAs with similar sequence lengths and hairpin structured precursors [41], different sRNAs often have different secondary structures. Fourth, sRNAs are involved in many biological processes, such as posttranscriptional regulation of gene expression, RNA processing, mRNA stability and translation, protein degradation, plasmid replication and bacterial virulence [42][43][44][45][46][47]. The above features, on the one hand, reflect the importance of sRNAs, and on the other hand, bring difficulties in developing general models for sRNA prediction. Although many empirical models have been developed for sRNA discovery [17][18][19][20][21][22][23][24][25][26] (Table 1), there is little overlap between the prediction results from different models. We are still a long way from developing a perfect model for sRNA prediction.
Comparative genomics-based models for sRNA prediction
Comparative genomics-based models are a class of commonly-used methods for sRNA prediction at present. The basic assumption is that an sRNA gene should have a certain conservation of both sequence and secondary structure among a group of closely-related genomes. Therefore, how to choose the right set of closely-related genomes plays a key role in the success of comparative genomics-based models for sRNA prediction, and usually depends on the research purposes and models employed. For example, to find the sRNA genes in the intergenic regions of Escherichia coli [46], Argaman et al. applied the BLAST program to compare potential sRNA regions against the genomes of Salmonella typhi, S. paratyphi and S. typhimurium and identified 24 putative sRNA genes. In addition, Rivas and Eddy applied the WUBLASTN program to compare 2367 intergenic sequences of E. coli against the complete genome of S. typhi [17]. The 11,509 generated alignments were scanned using the QRNA model and finally, 33 out of 115 known ncRNAs were identified. The E. coli genome was also used to test the performance of the sRNAPredict program. Using sequence conservation between E. coli intergenic regions and Shigella flexneri, Livny et al. identified 50 out of 55 known sRNAs [21]. Therefore, it is very difficult to provide a general rule for how many genomes and which genomes should be included in studies of comparative genomics-based sRNA prediction.

[Figure 1. The action mechanisms of cis-encoded and trans-encoded sRNAs. For cis-encoded sRNA-target mRNA interactions, there exists a perfect base pairing region and the genes overlap but are localized on different strands; the interaction GadY:gadX is provided to demonstrate such an interaction, in which blue represents the sRNA and red the target mRNA. For trans-encoded sRNA-target mRNA interactions, there exists an imperfect base pairing region; the genes are separate from each other and therefore do not overlap. The interaction MicC:ompC is shown as an example. See sRNATarBase for detailed information (http://ccb.bmi.ac.cn/srnatarbase/); the entry names for GadY:gadX and MicC:ompC are SRNAT00067 and SRNAT00015, respectively.]
The main steps for comparative genomics-based models to predict sRNA genes are as follows. The first step is to find genomes closely related to a given bacterial genome. The second step is to extract intergenic regions from the selected genomes and to apply the BLAST program to compare the intergenic regions pairwise. The pairwise BLAST hits are then gathered into clusters of two or more sequences, and these sequence clusters are aligned using ClustalW or ncDNAlign [48]. Finally, the resulting alignments are scored using RNAz [18] or EvoFold [19]. The third step is to carry out structural conservation analysis for the intergenic regions using the above alignment. Here structural conservation means that, for some positions in each sequence, even though there is no perfect conservation of nucleotides, the base pairing information is kept. The fourth step is to predict whether the conserved intergenic regions contain promoter signals, transcription factor binding sites or Rho-independent terminators. A minimal sketch of this workflow is given below.
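The following is a hedged sketch of steps two and three of this workflow, assuming the BLAST+ and RNAz command-line tools are installed and that intergenic regions have already been extracted to FASTA files; the file names and helper functions are illustrative, not part of any published pipeline.

```python
# A minimal sketch of the comparative-genomics workflow (assumptions: BLAST+
# and RNAz are on PATH; intergenic regions were pre-extracted to FASTA).
import subprocess

def blast_intergenic(query_fasta, subject_fasta, out_tsv):
    """Step 2: pairwise comparison of intergenic regions with blastn."""
    subprocess.run(
        ["blastn", "-query", query_fasta, "-subject", subject_fasta,
         "-outfmt", "6", "-out", out_tsv],
        check=True)

def score_alignment(alignment_file):
    """Step 3: structural conservation scoring of an aligned cluster,
    delegated here to the RNAz command-line tool."""
    result = subprocess.run(["RNAz", alignment_file],
                            capture_output=True, text=True, check=True)
    return result.stdout  # reports z-score and SVM RNA-class probability

blast_intergenic("ecoli_igr.fasta", "styphi_igr.fasta", "hits.tsv")
# ...cluster the hits and align each cluster (e.g., with ClustalW)...
print(score_alignment("cluster1.aln"))
```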
Based on some or all steps above, some programs, including QRNA [17], RNAz [18], EvoFold [19], SIPHT [20] and sRNAPredict [21], have been developed and successfully applied to finding bacterial sRNA genes. QRNA takes blast alignment of two sequences as the input, while RNAz and EvoFold take multiple sequence alignment as input, before structure analysis such as conservation and thermodynamic stability is performed to predict potential sRNA genes. Different from these tools, sRNAPredict and SIPHT only use information from blast alignment and Rho-independent terminator signal without considering structural information.
Four comparative genomics-based methods, QRNA, RNAz, sRNAPredict/SIPHT and NAPP (nucleic acid phylogenetic profiling) [22], were systematically compared using 10 sets of benchmark data in a recent evaluation paper [49]. The authors found that sRNAPredict provided the best performance when comprehensively considering multiple factors such as low false positive rates, the ability to identify the correct strand of sRNAs and speed of execution.
There are limitations to this class of methods. First, the aforementioned models are only applicable to the discovery of evolutionarily-conserved sRNA genes rather than genes unique to a given genome. Second, these models are of no use if no closely-related genomes are available for a given genome. Third, the conserved intergenic regions may contain other gene structures, such as transcription factor binding sites or untranslated regions of mRNAs, rather than sRNA genes. Therefore, comparative genomics-based models can identify only a subset of sRNA genes.
Machine learning-based models for sRNA prediction
The basic assumption of this class of models is that a given genome is composed of two parts, i.e., sRNA genes and the remaining part of the genome. If we take sRNA genes as signal, the remaining part of the genome is viewed as background. The first step in developing machine learning-based models is to construct a training dataset including positive and negative samples. Known sRNA genes are often used as positive samples, while randomly-selected DNA sequences from the given genome are taken as negative samples. The second step is to extract features describing the samples, which is a key step in developing models; only suitable features can improve model performance. In addition, feature selection is also important in machine learning-based model construction. For example, in Tran's model for sRNA prediction [26], the authors first constructed a training dataset including 936 non-redundant ncRNA sequences as the positive set and the shuffled sequences of those positive samples as the negative set. Then, they applied a t-test to find a set of features with statistical significance (P < 0.05) for neural network-based model construction. In fact, many feature selection methods have been applied in gene expression profile-based sample classification studies, such as the Tclass system developed by our laboratory [50]. All those feature selection methods can be applied to select proper feature sets for sRNA prediction. Third, machine learning methods such as neural networks and support vector machines are applied to develop the models. Fourth, the models developed are applied to genome-wide discovery of sRNA genes for experimental validation. If the number of predicted sRNA genes is very large, comparative genomics-based models can be further applied to reduce the number of candidates. The main challenge in developing machine learning-based models lies in constructing training samples and features. For example, in the neural network-based model presented by Carter [23], the genetic algorithm-based model presented by Saetrom [24] and the model presented by Wang [25], the number of positive samples was enlarged by incorporating tRNA and rRNA sequences into the training dataset.
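As a toy illustration of this workflow, the sketch below trains a support vector machine on 3-mer frequency features, assuming scikit-learn is available; the feature set, the randomly generated stand-in sequences and all names are illustrative assumptions, not the features used by the published models.

```python
# A minimal machine learning-based sRNA prediction sketch (assumptions:
# scikit-learn is installed; sequences here are random stand-ins for real
# positive/negative training sets).
import random
from itertools import product
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

def kmer_features(seq):
    """Describe a candidate region by its normalized 3-mer counts."""
    counts = [seq.count(k) for k in KMERS]
    total = max(sum(counts), 1)
    return [c / total for c in counts]

random.seed(0)
rand_seq = lambda n: "".join(random.choice("ACGT") for _ in range(n))
positives = [rand_seq(120) for _ in range(50)]  # stand-ins for known sRNAs
negatives = [rand_seq(120) for _ in range(50)]  # random genomic fragments

X = [kmer_features(s) for s in positives + negatives]
y = [1] * len(positives) + [0] * len(negatives)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```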
Compared to comparative genomics-based models, machine learning-based models for sRNA gene prediction have some advantages. For example, these models can be applied to find sRNA genes unique to a given genome. However, when we apply these models to genome-wide discovery of sRNA genes, we often divide the genome into fragments of a certain length and make predictions for each fragment separately. If the fragment is too short, it might not contain enough information for sRNA genes. Conversely, if the fragment is too large, it might contain noise. Therefore, it is very difficult to choose the optimal window size for machine learning-based models due to the length heterogeneity of sRNA genes. Because of this, Tran et al. constructed different models using different window sizes. This might be the reason why the positive predictive value of machine learning-based models is lower than that of comparative genomics-based models [26].
Prediction models for general RNA-RNA interactions
In essence, the sRNA-target mRNA interactions in bacteria fall into the class of RNA-RNA interactions. Therefore, the models for general RNA-RNA interaction prediction (RIP) can also be applied to investigate sRNA-target mRNA interaction.
The earliest methods for RIP find the hybridization structure with the minimum binding free energy for two RNA molecules, using the program RNAfold [53,54] or Mfold [66] to fold the two concatenated RNA sequences. Hybridization artifacts can arise from folding the concatenation of two RNA sequences. To prevent such artifacts, many programs such as RNAcofold [54], RNAhybrid [55,56] and RNAplex [57] were presented by extending the classical RNA secondary structure prediction algorithm to two sequences. For instance, RNAhybrid [55,56] was a modification of the classic RNA secondary structure prediction method that neglects intra-molecular base-pairings and multi-loops. This method was originally proposed for miRNA target prediction, but it was also applied to sRNA target prediction by Sharma et al. [67]. Compared to RNAhybrid, RNAplex [57] used a slightly different energy model to reduce computational time, performing 10-27 times faster than RNAhybrid [57].
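The concatenation idea can be tried directly with the ViennaRNA Python bindings, assuming the ViennaRNA package with Python support is installed; the sequences below are arbitrary examples, not a validated sRNA-mRNA pair.

```python
# RNAcofold-style dimer prediction via the ViennaRNA Python bindings
# (assumption: the "RNA" module from the ViennaRNA package is available).
import RNA

srna   = "GGCUACUUUUUGCCAUCGG"
target = "CCGAUGGCAAAAAGUAGCC"

# The two strands are joined with "&" and folded together; the returned
# energy is the minimum free energy (MFE) of the duplex.
structure, mfe = RNA.cofold(srna + "&" + target)
print(structure, mfe)

# Folding each molecule alone hints at the cost of opening pre-existing
# intra-molecular structure, the issue addressed by RNAup-like methods.
_, mfe_srna = RNA.fold(srna)
_, mfe_target = RNA.fold(target)
print(mfe - (mfe_srna + mfe_target))
```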
The methods mentioned above ignore the secondary structures of the two RNA molecules before they interact. To improve prediction performance, Muckstein et al. applied a dynamic programming algorithm to search for the minimum extended hybridization energy, defined as the sum of the hybridization energy and the energy needed to make the binding sites accessible [68].
Since pseudo-knots are not considered in either the classical RNA secondary structure prediction algorithms or their extensions, the aforementioned programs cannot find loop-loop interactions (kissing complexes) between two RNA molecules. To address this problem, Alkan et al. presented inteRNA [59] based on the joint structure of two RNA molecules. When applied to the CopA-CopT and OxyS-fhlA interactions, inteRNA detected the loop-loop interactions successfully. Thereafter, multiple programs such as piRNA [60], inRNA [61], rip [62], RactIP [63], ripalign [64] and PETcofold [65] have been presented based on the joint structure of two RNA molecules.
Although many programs for general RIP have been presented, most of them only provide the potential binding sites between two RNA molecules rather than determine whether two RNA sequences interact or not. In fact, even two randomly selected RNA sequences can present many potential binding sites, so the presence of such sites does not guarantee that the two sequences interact. These programs are only suitable for searching for binding sites given a known interaction between an sRNA and a target mRNA. Therefore, it is impractical to apply these models for genome-wide prediction of sRNA targets, and it is necessary to develop prediction models specific to sRNA targets.
Prediction models specifically designed for sRNA-target mRNA interactions
The first prediction model specific to sRNA-target mRNA interactions was presented by Zhang et al. [28]. They incorporated the following five features into the model: (1) Hfq-binding sites in both sRNA and target mRNA sequences; (2) the flanking sequence −35 to +15 nt around the translation initiation sites in target mRNA sequences; (3) Hfq-binding sRNA structures; (4) extension alignment based on the center of loop or bulge regions from sRNA secondary structure; and (5) conservation profiles of the sRNAs and their targets among 8 closely-related organisms of E. coli K-12. For a given sRNA, this model scores each potential sRNA-mRNA interaction based on a modified Smith-Waterman local sequence alignment algorithm (a reward for a match and a penalty for a mismatch) and takes the mRNAs with the top 10 or 50 scores as the potential targets. Among 10 experimentally-validated sRNA-target interactions, 7 pairs ranked in the top 50 scores. However, this model has not been applied widely for the following reasons. First, the model was designed specifically for the E. coli genome; for example, the conservation profile associated with E. coli was considered, which hinders people from applying the model to other organisms. Second, the model only considers the secondary structures of sRNAs rather than the joint structures of the two RNA sequences, which makes it less competitive in comparison with the models presented later. Third, no program was provided for sRNA biologists.

[Table 1 note: The main features and properties of the related models are provided in the column "Main features". For example, for QRNA, both sequence and secondary structure information are applied, and the model is suitable for two-sequence alignment.]

The second model, termed TargetRNA, was presented by Tjaden et al. [29,30]. TargetRNA included an individual base pair model and a stacked base pair model for calculating the hybridization score for sRNA-target interactions. The individual base pair model was based on a modified Smith-Waterman local sequence alignment algorithm, and the stacked base pair model was a straightforward extension of RNA folding approaches with intra-molecular base-pairing prohibited, which is very similar to the idea behind RNAhybrid [55,56]. TargetRNA was optimized on a training dataset containing 12 experimentally-verified sRNA-target mRNA interactions; the optimal translational initiation region was −30 to +20 nt and the seed length was 9 nt. For each potential sRNA-target mRNA interaction, the model calculates a hybridization score, which was assumed to follow an extreme value distribution. The extreme value distribution was obtained by considering a large number of randomly-generated sRNA-target mRNA interactions. Therefore, for a given sRNA, all potential sRNA-target mRNA interactions are considered and the interactions with the top 10 or 50 smallest P values are taken as the putative interactions. As a result, TargetRNA recovered 8 of the 12 interactions among the top 10 smallest P values.
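The extreme-value idea can be mimicked with a simple empirical null distribution, as in the sketch below; the toy complementarity score, the shuffling scheme and all names are invented for illustration and are not TargetRNA's actual scoring model.

```python
# Empirical significance of a toy interaction score against shuffled
# targets (assumption: the score function is a crude stand-in for a real
# hybridization score such as TargetRNA's).
import random

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}

def toy_score(srna, mrna):
    """Longest run of pairable positions over all alignment offsets."""
    best = 0
    for off in range(-len(srna) + 1, len(mrna)):
        run = 0
        for i, base in enumerate(srna):
            j = off + i
            if 0 <= j < len(mrna) and (base, mrna[j]) in PAIRS:
                run += 1
                best = max(best, run)
            else:
                run = 0
    return best

def empirical_p(srna, mrna, n_shuffles=500, seed=1):
    rng = random.Random(seed)
    observed = toy_score(srna, mrna)
    null = [toy_score(srna, "".join(rng.sample(mrna, len(mrna))))
            for _ in range(n_shuffles)]
    return (1 + sum(s >= observed for s in null)) / (n_shuffles + 1)
```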
Mandin et al. proposed a model for sRNA target prediction by searching for strong sRNA-mRNA duplexes [31]. Each sRNA-mRNA duplex was scored as a sum of both positive contributions and negative contributions, which correspond to pairing nucleotides and bulges/internal loops, respectively. The cost of bulges and internal loops was empirically gauged using four validated sRNA-mRNA interactions. The statistical significance of the duplex was used as the criterion for interaction, which was assessed by comparison to an ensemble of random sequences. During prediction, the flanking regions, −140 to +90 nt around the translation initiation sites and −60 to +90 nt around the translation stop sites in target mRNA sequences, were considered.
Obviously, all the aforementioned models simply take a certain number of top predictions (those with the largest comparison scores, smallest free energies or smallest P values) as potential targets. To determine clearly whether a given sRNA-mRNA pair interacts or not, our group first systematically collected 46 positive samples (true interactions) and 86 negative samples (no interaction) as the training dataset. Second, according to the positions of mRNA binding sites in the sRNA-target mRNA interactions validated at that time, sub-sequences located within −30 to +30 nt of the initial start codons of targets were selected as core binding regions. Based on the hypothesis that sequences flanking the core binding regions are also likely to influence the interactions, we also extracted these flanking sequences using sliding windows (see the sketch below). For each sub-sequence, 10 features were computed, including the percent composition of bases in interior loops, the minimum free energy (MFE) of hybridization, and the difference in the MFE values before and after hybridization. Each sRNA-target mRNA interaction was thus described by 10,000 features. Third, we applied the Tclass system [50] and support vector machines to construct the prediction models sRNATargetNB and sRNATargetSVM, respectively [33,34]. The main difference between sRNATargetNB and sRNATargetSVM is that the former only takes six features, selected from the 10,000 initial features using the Tclass system [50], to determine whether a given pair of sRNA and mRNA interacts, whereas the latter needs all 10,000 features; therefore, sRNATargetNB runs faster. Finally, the performance of the two models was evaluated on an independent test set containing 22 positive samples and 1700 randomly-generated negative samples; prediction accuracies were 93.03% and 80.55%, respectively.
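The sketch below illustrates the kind of core-plus-flanking window extraction just described; the window geometry and function names are illustrative assumptions rather than the exact sRNATarget implementation.

```python
# Toy extraction of a core binding region around a start codon plus
# flanking sliding windows (coordinates are relative to the first base of
# the start codon; all parameters are illustrative).
def candidate_regions(mrna, start_pos, core=(-30, 30), step=10, n_flanks=3):
    def window(a, b):
        return mrna[max(0, start_pos + a):max(0, start_pos + b)]
    regions = [window(*core)]  # the core region, e.g., -30..+30 nt
    for k in range(1, n_flanks + 1):
        regions.append(window(core[0] - k * step, core[1] - k * step))
        regions.append(window(core[0] + k * step, core[1] + k * step))
    return [r for r in regions if r]

mrna = "A" * 200  # placeholder target sequence
print([len(r) for r in candidate_regions(mrna, start_pos=100)])
```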
IntaRNA, presented by Busch et al. [32], incorporates the accessibility of the binding sites of the two RNA molecules and a user-definable seed. Similar to RNAup [53,58], IntaRNA searches for the optimal interaction with the minimum extended hybridization energy, defined as the sum of the hybridization energy and the energy needed to make the binding sites accessible. The difference between IntaRNA and RNAup is that MFE values for seed regions are also included in the calculation of the minimum extended hybridization energy in IntaRNA. Three factors make IntaRNA outperform simpler programs like RNAhybrid: (i) finding the optimal structure with the MFE; (ii) summing the energy for opening the original structures of the binding sites; and (iii) involving the MFE of seed regions. IntaRNA provides the binding sites of the two RNA molecules and the energy of the hybridization, rather than a judgment of interacting or not.
From these models, we can see that different potential binding regions are considered in different models. So, which regions are suitable for sRNA target prediction? To address this problem, we continued our efforts to collect sRNA targets from peer-reviewed papers and constructed the database sRNATarBase [5], which contains 138 sRNA-target interactions and 252 non-interaction entries. Using this database, we found that the binding regions of 95.79% of the targets (91 of 95 entries containing binding regions) are located in the region −150 to +100 nt around the initial start codon of the targets. We therefore proposed another method, termed sTarPicker, to improve the performance of sRNA target prediction [36].
The sTarPicker method was based on a two-step model for hybridization between an sRNA and an mRNA target. The model first selects stable duplexes after screening all possible duplexes between the sRNA and the potential mRNA target. Next, hybridization between the sRNA and the target is extended to span the entire binding site. Finally, quantitative predictions are produced with an ensemble classifier generated using the Tclass system, originally developed for gene expression profile-based sample classification by our laboratory [50]. In determining the hybridization energies of seed regions and binding regions, both thermodynamic stability and site accessibility of the sRNAs and targets were considered. The major difference between the hybridization model in sTarPicker and the one used in IntaRNA lies in the filtering of seed regions. IntaRNA does not filter any seed regions and instead, searches the optimal hybridization of two RNA molecules with the minimum extended hybridization energy in the whole length of two RNAs. sTarPicker first finds all possible seed regions, then removes the seed regions with high hybridization energy. Here we assume that only stable seed hybridization results in stable hybridization between two RNA molecules, which was verified by the real sRNA-target mRNA interactions from sRNATarBase [5].
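The two-step idea (keep only stable seeds, then extend) can be caricatured as follows; the all-positions-pairable stand-in for seed stability and all names are illustrative assumptions, not sTarPicker's thermodynamic model, and pairing here is positional rather than antiparallel for simplicity.

```python
# Step 1 of a sTarPicker-like search: enumerate seed placements and keep
# only "stable" ones (here, seeds where every position can pair; a real
# model would use hybridization energies and accessibility instead).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}

def stable_seeds(srna, mrna, seed_len=7):
    hits = []
    for i in range(len(srna) - seed_len + 1):
        for j in range(len(mrna) - seed_len + 1):
            window = zip(srna[i:i + seed_len], mrna[j:j + seed_len])
            if all(pair in PAIRS for pair in window):
                hits.append((i, j))
    return hits

# Step 2 (not sketched): extend each retained seed over the full binding
# site and score the extended duplex with a real energy model.
print(stable_seeds("GGCUACU", "UUCCGAUGAUU"))  # -> [(0, 2)]
```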
Compared to IntaRNA, sRNATarget and TargetRNA, sTarPicker performed best in both overall target prediction and the accuracy of the predicted binding sites on 17 non-redundant validated sRNA-target pairs [36].
Recently, Eggenhofer et al. developed a webserver termed RNApredator specifically for the prediction of sRNA targets [35]. RNApredator predicts sRNA targets using RNAplex [57]. To improve prediction specificity, RNApredator also takes into account the accessibility of the target. To enable fast computation, the accessibility is pre-computed using RNAplfold [69,70]. During prediction, the web server considers the regions −200 to +200 nt of both the 5′ and 3′ UTRs (default) as the potential binding regions and the top 100 predictions as the potential interactions.
Future thinking in developing bioinformatics models for bacterial sRNAs and their targets
Here we have briefly presented an overview of prediction models for bacterial sRNAs and their targets, and pointed out the advantages and disadvantages of each class of models. Although these models have provided much support for the experimental discovery of sRNAs and their targets, they are not perfect. Here we want to emphasize three future directions in developing bioinformatics models.
The first direction is to improve the existing prediction models. Compared to methods for open reading frame identification, the prediction accuracy for sRNAs is still very low. For example, sTarPicker has the highest positive predictive value on the independent test dataset [36]; however, a large number of false positive samples were still included in the prediction results. Therefore, developing better models for sRNAs and their targets remains necessary. From the perspective of statistics, we first need more samples. At present, some databases, such as sRNAMap [1] and Rfam [71] for sRNAs and sRNATarBase [5] for sRNA targets, have been developed. These databases provide a data source for model development.
The key point is to construct suitable features to describe the bacterial sRNA gene and sRNA-target mRNA interaction. To this end, before new features are explored, it might be better to comprehensively integrate all features currently available to describe sRNAs or sRNA-target mRNA interactions. Then, different strategies for feature selection in machine-learning based model construction can be applied to search suitable features or their combinations.
The considerations mentioned above can also be applied to the second direction, i.e., developing prediction models for sRNA target proteins. To our knowledge, there is no prediction model specifically for sRNA target proteins. Although the general prediction model for RNA-protein interaction can be applied here [72], we believe that models based on the sRNA-protein interaction in bacteria will provide better support for the discovery of sRNA target proteins. To this end, we have been collecting the validated sRNA-protein interactions in the database sRNATarBase [5]. However, the number of samples is so low that we are not able to develop a reliable model yet.
The third direction involves developing comprehensive bioinformatics pipelines for the discovery of sRNAs and sRNA-target interactions using high throughput sequencing technology (HTS). With the application of HTS, a large number of short reads will be generated. How to efficiently manage these short reads and to find potential sRNAs has become an important bioinformatics topic in HTS-based sRNA discovery. For example, in their recent paper [73], Pellin and his colleagues presented a bioinformatics pipeline for sRNA discovery in Mycobacterium tuberculosis using RNA-seq and conservation analysis, and a list of 1948 candidate sRNAs was found. Currently, HTS has been widely applied in molecular biology, resulting in the discovery of sRNA transcripts [74][75][76][77][78][79][80][81], identification of human miRNA-mRNA [82] or RNA-protein interactions [83][84][85] and determination of mRNA secondary structure [86][87][88]. However, HTS has not been applied to investigate the interactions of sRNA-protein and sRNA-mRNA in bacteria. We can predict that HTS will soon have a widespread application in sRNA biology.
A Shot Number Based Approach to Performance Analysis in Table Tennis
Abstract The current study proposes a novel approach that improves the conventional performance analysis in table tennis by introducing the concept of frequency, or the number of shots, of each shot number. The improvements over the conventional method are as follows: better accuracy in the evaluation of players' skills and tactics, additional insights into scoring and returning skills and ease of understanding the results with a single criterion. The performance analysis of matches played at the 2012 Summer Olympics in London was conducted using the proposed method. The results showed some effects of the shot number and gender differences in table tennis. Furthermore, comparisons were made between Chinese players and players from other countries, which shed light on the skills and tactics of the Chinese players. The present findings demonstrate that the proposed method provides useful information and has some advantages over the conventional method.
Previous research has indicated critical factors of the scoring process. The investigation of the structure or stochastic model of the scoring process is essentially important. However, the elements or input variables of such models are strictly limited, as some variables, for example mechanical parameters of stroke motions or balls, or physiological or psychological variables of players, can hardly be measured during a match. In addition, since this type of approach requires detailed information about each shot, the workload of data collection is always enormous. Many researchers and practical analysts have therefore employed a simpler approach, such as the investigation of the scoring rate, losing rate or usage rate (Hao et al., 2010; Hsu, 2010; Hsu et al., 2014; Zhang et al., 2013). They implicitly assumed that the outline of performance could be analysed by identifying the shot number (the shot number starts from the service: shot number one is the service, shot number two the service receive, and so on) that has a high probability of gaining or losing points, even if the details of the structure of table tennis are unknown. The main drawback of this approach is its inability to show the specific skills or tactics that cause scoring or losing. Despite this drawback, such statistics are known to be useful for practitioners (Zhang et al., 2013). In addition, this approach has some advantages in terms of practical application: the required data can be collected in a short span of time and the statistics generated are easy for table tennis practitioners to understand. The method proposed in this paper is categorised as such an approach that provides simple statistics and can be conducted in a short time.

The purpose of this study was to propose a novel approach for performance analysis in table tennis. The basic concept is similar to conventional research (Hao et al., 2010; Hsu, 2010; Hsu et al., 2014; Zhang et al., 2013). The most important and unique notion proposed in this study is to record the shot number of the scoring shot and compute the frequency, or the number of shots, of each shot number. By introducing the concept of frequency, the statistics become more accurate and easier to understand without any additional data collection. In this paper, the advantages of the proposed method are demonstrated through the performance analysis of some of the world's elite players. In addition, a statistical comparison between Chinese players, who dominate international competitions, and players from other countries was performed.
Analytical points
The two analytical points that were considered in the proposed method were a serving rally and a shot number. In table tennis, the tactical and technical challenges strongly vary considering these factors.
A serving rally is one of the most important points to consider in table tennis analysis. The tactical and technical challenges faced by servers and receivers are different due to the special nature of the service. In table tennis, a service has to rebound twice on the table, once on the server's side followed by once on the receiver's side. If players could serve directly into the opponent's court, as in lawn tennis, the server would have an enormous advantage. In addition, the server has to ensure the collision between the racket and the ball. These table tennis rules intentionally reduce the advantage of the server. However, since table tennis players have the ability to produce multiple types of services with identical motions, a service still has a great impact on a rally and its importance is often mentioned in tutorials (Geske and Muller, 2010; Molodzoff, 2008). Serving skills can influence one's scoring or losing tendency. In general, servers should take maximum advantage of the service in order to score, and this is regarded as their most important challenge. On the other hand, for receivers, it is important to minimize the effect of the service. Thus, as described above, tactical and technical challenges vary with the serving rally.
The other point of consideration in this research is the shot number. Again, owing to the special nature of the service in table tennis, technical and tactical challenges vary with how the service is received and with the subsequent shots, each of which is denoted by a shot number. A service has a strong impact on the scoring or losing tendency in the early phase of rallies. If a server has good serving skills and delivers a service with rotation or a travelling direction that is difficult to anticipate, the server will obtain high scoring rates at the first and third shots, and the opponent will consequently show high losing rates at the second and fourth shots. Although subsequent shots may still be affected by the service or shot number one, the effect of the service gradually diminishes as the number of shots increases. As the effect of services decreases, players need to face the challenge of scoring under conditions where neither player has an apparent advantage. Thus, as described above, it can be said that tactical and technical challenges vary with the shot number, and it is one of the most important analytical points in table tennis.
Inputs for the proposed method
The inputs required for the proposed method are two types of data: the server and the shot number of the scoring shot. These two types of data need to be recorded in the database for each rally. Although the scoring or losing player is not included in the inputs, they are uniquely determined from the server data and the shot number of the scoring shot, based on the table tennis rule that shots are hit alternately by the players.
Computation of the number of shots
At first, the number of shots for each shot number is computed. In the current study, the number of shots was defined as the 'number of shot opportunities', thus including the number of shots which missed the ball. Let us denote the number of the $i$-th shots as $n_i$, the server as $p_{srv}$, the target player under analysis as $p_{tgt}$, the opponent player as $p_{opp}$, the shot number of the scoring shot as $s$ and the function that counts the number of rallies that meet the criteria as $N(\cdot)$. Then, $n_i$ is computed by the following equations:

$$n_i = N(p_{srv} = p_{tgt} \;\wedge\; s \ge i-1), \quad \text{if } i \text{ is odd} \qquad (1)$$

$$n_i = N(p_{srv} = p_{opp} \;\wedge\; s \ge i-1), \quad \text{if } i \text{ is even} \qquad (2)$$

The computation of the number of shots is simple enough to perform using a spreadsheet program.
Computation of the scoring rate, losing rate and effectiveness
Next, we compute three performance indicators for each shot number: the scoring rate, the losing rate and the effectiveness. Let us denote the scoring player as $p_{sc}$, the number of the $i$-th shots scored as $n_i^{s}$, the number of the $i$-th shots lost as $n_i^{l}$, the scoring rate of the $i$-th shots as $SC_i$, the losing rate of the $i$-th shots as $LO_i$ and the effectiveness of the $i$-th shots as $E_i$. $n_i^{s}$ and $n_i^{l}$ are obtained by counting the rallies in which the target player scored with the $i$-th shot and those in which the opponent scored with the $(i-1)$-th shot, respectively:

$$n_i^{s} = N(p_{sc} = p_{tgt} \;\wedge\; s = i), \qquad n_i^{l} = N(p_{sc} = p_{opp} \;\wedge\; s = i-1)$$

$SC_i$, $LO_i$ and $E_i$ are then given by:

$$SC_i = n_i^{s} / n_i \qquad (3)$$

$$LO_i = n_i^{l} / n_i \qquad (4)$$

$$E_i = SC_i - LO_i \qquad (5)$$

The scoring rate represents how good the scoring skills or tactics are at the $i$-th shot. The losing rate represents how poor the stability or returning skills are at the $i$-th shot; if a player has good defensive skills or stability, $LO_i$ will be low. The effectiveness represents the scoring or losing tendency at the $i$-th shot. Even if a player has good offensive skills and a high $SC_i$ value, the value of $E_i$ can be low when the player's shot is liable to fail and has a high $LO_i$ value. $E_i$ can be regarded as the contribution of the $i$-th shot to winning a match. Thus, the statistics proposed in this research are simple, intuitive and easy to understand.
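Although a spreadsheet suffices, the counting in equations (1)-(5) can also be scripted in a few lines, as sketched below; the rally encoding (server, scoring shot number, scorer) and all names are illustrative assumptions.

```python
# Minimal sketch of the proposed indicators (equations (1)-(5)). A rally is
# encoded as (server, scoring_shot_number, scorer); the scorer is derivable
# from the other two fields but is kept explicit for readability.
def indicators(rallies, target, opponent, max_shot=8):
    stats = {}
    for i in range(1, max_shot + 1):
        # the target hits odd shots in serving rallies, even shots otherwise
        server = target if i % 2 == 1 else opponent
        n_i = sum(1 for srv, s, _ in rallies
                  if srv == server and s >= i - 1)            # eq. (1)/(2)
        n_s = sum(1 for srv, s, sc in rallies
                  if srv == server and s == i and sc == target)
        n_l = sum(1 for srv, s, sc in rallies
                  if srv == server and s == i - 1 and sc == opponent)
        if n_i >= 5:  # the study skips shot numbers with fewer than 5 shots
            stats[i] = {"SC": n_s / n_i, "LO": n_l / n_i,
                        "E": (n_s - n_l) / n_i}               # eqs. (3)-(5)
    return stats

rallies = [("A", 3, "A"), ("A", 2, "B"), ("B", 4, "A"), ("B", 1, "B"),
           ("A", 5, "A"), ("B", 2, "A"), ("A", 4, "B")] * 3
print(indicators(rallies, target="A", opponent="B"))
```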
Comparison with the conventional method
Conventionally, different types of statistics were used. The most commonly used statistics were a scoring rate, which is different from the one proposed in the current study, and a usage rate. Let us denote the number of points scored in a match as $p$, the number of points lost in a match as $q$, the conventional scoring rate of the $i$-th shot as $SC'_i$ and the usage rate of the $i$-th shot as $U_i$. $SC'_i$ and $U_i$ were computed by the following equations:

$$SC'_i = n_i^{s} / (n_i^{s} + n_i^{l}) \qquad (6)$$

$$U_i = (n_i^{s} + n_i^{l}) / (p + q) \qquad (7)$$

The conventional scoring rate represents the scoring tendency or bias at the $i$-th shot. If $SC'_i$ is high, the player's skills or tactics may be favourable for scoring a point at the $i$-th shot. However, this is not always the case. Consider the case where the target player hit a service 50 times and scored only once. If none of the services failed, $SC'_1$ takes the maximum value of 1.0, in spite of a rather low scoring probability. To avoid this type of misunderstanding, the conventional scoring rate was interpreted together with the usage rate $U_i$, which represents how often the player used the $i$-th shot. There is a method to integrate the scoring rate and the usage rate into the effectiveness (Zhang et al., 2013). The conventional effectiveness of the $i$-th shot, $E'_i$, was computed as their product:

$$E'_i = SC'_i \times U_i \qquad (8)$$
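Continuing the sketch above, the conventional indicators can be computed from the same counts; note that equation (8) is reconstructed here as the product of the conventional scoring rate and the usage rate, which is an assumption, as are the example numbers.

```python
# Conventional indicators (equations (6)-(8)); neutral shots play no role
# here, which is the limitation discussed in the text below. Equation (8)
# is assumed to be the product of the two rates.
def conventional(n_s, n_l, total_points):
    used = n_s + n_l
    sc_conv = n_s / used if used else 0.0   # eq. (6)
    usage = used / total_points             # eq. (7)
    return sc_conv, usage, sc_conv * usage  # eq. (8)

print(conventional(n_s=6, n_l=3, total_points=24))
```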
The most important advantage of the proposed method is the use of the number of shots. In table tennis, a shot results in any one of three outcomes: scoring a point, losing a point or a neutral outcome, which implies that nobody scored a point on that shot. The number of neutrals was not conventionally used to quantify the usage of the i-th shot (Equation 7), and the reason for such an approach may be related to the workload of data collection: if every number of i-th shots had to be counted manually during a match, the recording operation would become taxing and complicated, and inaccurate methods could be selected if they are easier to conduct and still useful. Considering this, the proposed method has few limitations. The inputs needed for the proposed method are identical to those needed for the conventional method. The player who scored and the shot number of the scoring shot are required for the conventional method, while the inputs for the proposed method are the server and the shot number of the scoring shot. In fact, the inputs for the proposed method can also be changed to the scoring player and the shot number of the scoring shot, as the server can be derived from the scoring player and the shot number based on the rules of table tennis. Technically speaking, the inputs needed for both methods are the same, which implies that the workload required for the proposed method is identical to that required for the conventional method. The difference between the two methods lies only in statistical accuracy. The impact of introducing the number of shots is discussed in the following sections of this paper.
Match samples
In the current study, 39 matches between male players (1 match played between Chinese players; 7 matches played between a Chinese player and a player from another country; and 31 matches played between players from other countries) as well as 31 matches between female players (1 match played between Chinese players; 6 matches played between a Chinese player and a player from another country; 27 matches played between players from other countries) were selected from the matches played at the 2012 Summer Olympics held in London. The selected matches were played by the top 50 players based on the world ranking in July 2012. The data was recorded by observing video recordings which were broadcasted on television or the internet. Written consent from the subjects was unnecessary as the matches were played in public.
Statistical analysis

1. Effectiveness, scoring rate and losing rate in table tennis
The effectiveness, scoring rate and losing rate of shot numbers, hereinafter referred to as the 'proposed indicators', were computed by the proposed method. The seventh and subsequent shots in a serving rally were unified into a group denoted by '≥#7' and the eighth and subsequent shots in a receiving rally were unified into a group denoted by '≥#8'.
When the number of shots of a particular shot number was less than five, none of the proposed indicators was computed, to avoid contamination by inaccurate statistics. This kind of process is required for all statistical analyses, not only for the proposed method. Kruskal-Wallis tests were used to assess the effect of the shot number on the proposed indicators. Wherever significant differences were observed between shot numbers, the Steel-Dwass test was used to compare the shot numbers. In addition, male players and female players were contrasted. The proposed indicators were compared with Welch's t-test as long as the observations within the two groups were normally distributed. When normality could not be assumed, the Mann-Whitney rank test was applied. Shapiro-Wilk tests were used to test the normality. Every statistical analysis was tested at a 95% confidence level.
2. Comparison between Chinese players and players from other countries
Chinese players were contrasted with players from other countries by the proposed method. The proposed indicators were compared with Welch's t-test as long as the observations within the two groups were normally distributed. When normality could not be assumed, the Mann-Whitney rank test was used. Shapiro-Wilk tests were performed to test for normality.

3. Comparison between the proposed method and the conventional method

The conventional effectiveness was computed and the linear regression coefficients between the conventional effectiveness and the effectiveness of the new method were determined. Pearson correlation coefficients were used to examine the relationship between the regression coefficient and the frequency of each shot number. The relationship between the conventional effectiveness and the proposed method might vary with the number of shots, since the conventional effectiveness does not consider frequency in its computation.

Results

Effectiveness

Figure 1a shows the distribution of effectiveness of male and female players as well as the differences between genders. Table 1a shows significant differences between different shot numbers for male and female players. The shot number had a significant influence on effectiveness in matches played by male (H = 147.7, p < .01) and female players (H = 86.2, p < .01). Significant differences between the genders were observed in the first shot (u = 2957.5, p < .05), the fourth shot (t = −2.1, p < .05), the sixth shot (u = 1661.5, p < .05) and the seventh and subsequent shots in the serving rally (u = 1582.5, p < .05).

Scoring rate

Figure 1b shows the distribution of the scoring rate of male and female players as well as the differences between genders. Table 1b presents significant differences between different shot numbers for male and female players. The shot number had a significant influence on the scoring rate in matches played by male (H = 91.4, p < .01) and female players (H = 54.3, p < .01). Significant differences between the genders were also observed in the first five shots (u = 3008.5, p < .05; t = 3.3, p < .01; t = 4.8, p < .01; t = 3.4, p < .01; u = 3138, p < .01).

Losing rate

Figure 1c shows the distribution of the losing rate of male and female players as well as the differences between genders. Table 1c presents significant differences between different shot numbers for male and female players. The shot number had a significant influence on the losing rate in matches played by male (H = 306.7, p < .01) and female players (H = 204.0, p < .01).
Furthermore, significant differences between the genders were observed from the second to the sixth shot (u = 3014, p < .05; u = 3188.5, p < .01; t = 5.52, p < .001; t = 5.26, p < .01; u = 3216, p < .01) and from the seventh and subsequent shots in serving rallies (u = 2835, p < .01).

Comparison between Chinese players and players from other countries

Figure 2 shows the distribution of effectiveness, scoring rate and losing rate of Chinese players and players from other countries. Chinese male players displayed higher effectiveness at the first shot (u = 299, p < .05), the second shot (u = 270, p < .01) and the fourth shot (t = −3.5, p < .01), while Chinese female players presented higher effectiveness at the third shot (u = 162.5, p < .01), the seventh and subsequent shots in serving rallies (u = 123, p < .01), as well as the eighth and subsequent shots in receiving rallies (t = −2.9, p < .01). Chinese male players displayed a higher scoring rate at the first shot (u = 313.5, p < .05), the second shot (t = −3.6, p < .01), the third shot (u = 285, p < .01), the fourth shot (t = −2.5, p < .05) and the sixth shot (u = 260, p < .05), while Chinese female players presented a higher scoring rate at the second shot (u = 182.5, p < .05), the third shot (t = −2.2, p < .05), the sixth shot (t = −2.4, p < .05) and the seventh and subsequent shots in serving rallies (u = 112, p < .01). No significant difference in the losing rate was observed for most shot numbers, except the fourth shot by male players (t = 2.2, p < .05).

Comparison between the proposed method and the conventional method

Figure 3 shows the relationship between the conventional effectiveness and the effectiveness of the new method. A significant correlation was observed between the two (r = 0.88, t = 60.2, p < .01). The dashed lines in the scatter plot and the table next to it in Figure 3 show the regression equations computed from the data within specific ranges of the number of shots. The correlation between the regression coefficient and the number of shots was significant (r = 0.69, t = 29.7, p < .01). Thus, it can be stated that the relationship between the conventional effectiveness and the effectiveness of the new method varied with the number of shots.
Discussion

Effectiveness, scoring rate and losing rate in table tennis
The results show the characteristics of shot numbers in table tennis rallies. The effectiveness of the first three shots was identical and higher than that of the other shot numbers. Effectiveness can be analysed on the basis of the scoring and losing rates. The common characteristic of the first three shots was a low losing rate. Although the first two shots had a low probability of scoring, their effectiveness was high due to the low losing rate. The third shot had a relatively low losing rate and, moreover, a high scoring rate, especially in matches played by male players. This suggests how table tennis players take advantage of the service: it is easier to score at the third shot with a service, even if one hardly scores directly with the service itself. However, the effect of the service, or the advantage of the server, does not persist for long in a rally. According to Table 1, few differences were found among shot numbers greater than three.
Male players hit the ball with a higher probability of scoring than female players. This result is intuitively understandable because, generally, male players can hit the ball with greater force than female players. Keeping this fact in mind, the scoring rate of the first shot can be considered remarkable, as the strength of male players hardly contributes to the velocity of services due to the rules of table tennis, i.e. services have to rebound twice. These results imply two possible explanations: male players have good serving skills, which female players rarely have, or male players offensively hit the ball at the second shot with a risk of failure. Although the current study cannot provide more details, some insights into table tennis can be gained through the proposed method.
Comparison between Chinese players and the players from other countries
The results suggest that male Chinese players had better skills in receiving rallies than players from other countries (Figure 2a). The differences in the second and the fourth shot, namely the first two shots in a receiving rally, were most noticeable. Let us assume 10 receiving rallies were performed and a player hit the ball 10 times at the second and fourth shots, which roughly corresponds to the receiving rallies in a game (a game is a part of a match in table tennis; a match consists of an odd number of games). In this case, the differences in the effectiveness of the second and fourth shots, 0.073 and 0.187, are equivalent to differences of 0.7 and 1.9 points, respectively. The fourth shot showed the biggest difference and this could determine the winner of a game. Although the difference in the second shot was smaller than that of the fourth shot, the effectiveness of the fourth shot can be strongly influenced by the skills and tactics used at the second shot. As a result, the topmost priority for players from other countries who want to win against Chinese players is to minimize the difference in receiving rallies. Since the differences in effectiveness were derived from both the scoring rate and the losing rate, both their scoring skills and their returning skills need to improve. Improving scoring skills at the first shot, or service, will be the next challenge.
The results suggest that female Chinese players had better scoring skills at the third shot and in long rallies (Figures 2d and 2e). These differences were mostly derived from the scoring rate (Figures 2e and 2f). The scoring rate of the seventh and subsequent shots, especially in serving rallies, was high and close to that of the third shot of male Chinese players. This result is noteworthy because, as stated earlier, high shot numbers in a rally correspond to conditions where neither player has any advantage. These results show that female Chinese players had outstanding scoring skills in long rallies, which might be related to their high scoring rate at the third shot.
Thus, it was shown that performances of table tennis players may be successfully compared on the basis of the proposed method.
Comparison between the proposed method and the conventional method
The proposed method is more suitable for analyses that aim to evaluate players' skills and tactics. A strong correlation between the effectiveness of the new method and conventional effectiveness was observed. However, the relationship between them varied considerably with the number of shots. That is, when a shot number with lower frequency was executed, lower conventional effectiveness was computed, even if the effectiveness of the new method remained the same. Considering the example shown in Table 2, the difference in conventional effectiveness was 0.043, which is a two-level difference in the conventional evaluation criteria (Zhang et al., 2013) consisting of four levels in total. However, the difference in the effectiveness of the new method was 0.001, which would have made a difference of only 0.1 points even if 100 rallies had been performed. This discrepancy occurs when we compare data sets whose numbers of shots are substantially different from each other. Conventional effectiveness becomes considerably low when the sum of the number of scoring and losing points is small. This characteristic is statistically undesirable for the evaluation of skills and tactics, because skills and tactics themselves do not depend on the sum of the number of scoring and losing points. This suggests that conventional effectiveness was designed to evaluate the actual volume of effect caused by each shot number in a match rather than to evaluate skills or tactics. When the purpose of the analysis is to evaluate players' skills or tactics, the proposed new method should be used instead of the conventional one. Even when the purpose of the analysis is to evaluate the actual effect caused by each shot number in a match, the effectiveness of the proposed method combined with the number of shots might be more accurate and suitable than indicators that do not include the number of shots.
The proposed method also allows the effectiveness to be decomposed. The scoring rate is related to scoring skills, and the losing rate is connected with stability or defensive skills. When we find high effectiveness for a certain shot number, we are therefore able to determine whether it was derived from a high scoring rate or a low losing rate. Such a decomposition cannot be performed with the conventional method.
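To illustrate, a minimal Python sketch of this decomposition is given below. The per-shot definitions (points won or lost at a given shot number divided by how often that shot number was played, with effectiveness as their difference) and the minimum-sample threshold are our assumptions for illustration; the paper's exact formulas may differ.

```python
from collections import Counter

def shot_statistics(rallies, min_shots=10):
    """Per-shot-number scoring rate, losing rate and effectiveness.

    rallies: iterable of (rally_length, ended_in_error) pairs, where
    ended_in_error is True if the final (rally_length-th) shot was an
    error by the player who hit it, and False if it scored.
    Returns {shot_number: (scoring_rate, losing_rate, effectiveness)}.
    """
    shots, scores, losses = Counter(), Counter(), Counter()
    for length, ended_in_error in rallies:
        for k in range(1, length + 1):
            shots[k] += 1                 # shot k was played in this rally
        if ended_in_error:
            losses[length] += 1           # final shot lost the point
        else:
            scores[length] += 1           # final shot won the point
    stats = {}
    for k, n in sorted(shots.items()):
        if n < min_shots:                 # skip unreliably small samples
            continue
        sr, lr = scores[k] / n, losses[k] / n
        stats[k] = (sr, lr, sr - lr)
    return stats

# Toy usage: three rallies; True marks a rally ending in an error.
print(shot_statistics([(3, False), (4, True), (3, False)], min_shots=1))
```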
The proposed method is simpler and easier to understand than the conventional one. The effectiveness proposed in the current study can be converted into a number of points by multiplying it by an assumed number of shots, as shown in an earlier subsection. In contrast, conventional effectiveness cannot be interpreted without comparing the distributions of multiple matches. Even when some difference between two data sets was seen, the impact of the difference on a match was hardly perceptible. In addition, it is essentially impossible to compare different shot numbers conventionally, since their numbers of shots are substantially different. The conventional effectiveness has to be interpreted with multiple criteria, one for each shot number. As a result of the above discussion, it can be said that the proposed method has many advantages over the conventional method and is useful for performance analysis in table tennis.
Limitations of the proposed method
The proposed method cannot determine the dominant factor behind a high or low scoring or losing rate, although it provides more specific information than the conventional method does. For example, a high scoring rate at the third shot seems to indicate that the player has good scoring skills at the third shot. However, there are other possibilities: for example, the first shot may have been good and created an opportunity to score at the third shot, or the opponent's skill at the second shot may have been poor. The proposed method provides no information to discriminate between these explanations.
To make the statistics reliable, a lower bound on the number of shots needs to be set, and this choice is necessarily arbitrary. When the number of shots is extremely small, the effectiveness, scoring rate or losing rate may be extremely high or low. Although this problem is common to many other statistical methods, it can complicate practical applications of the proposed method. Many additional trials may be required to find the optimal lower bound on the number of shots so that the proposed statistics can be calculated reliably. This is a limitation of this study and an issue to be considered in future research.
Conclusion
A novel method for performance analysis in table tennis was proposed. The proposed method improves on the conventional analysis by introducing the number of shots for each shot number, which is computed on the basis of the shot number of the scoring shot. The advantages of the proposed method over the conventional method may be summarised as follows: (1) Skills and tactics are evaluated more accurately than with the conventional method.
(2) Scoring skills and returning skills can be evaluated separately, whereas these insights cannot be gained conventionally.
(3) The results can be understood easily as they can be converted into a unit of points. (4) The results can be understood with a single criterion, whereas conventional analysis requires multiple criteria for each shot number.
The performance analysis of matches played at the 2012 London Olympics was conducted on the basis of the proposed method. The results of the analysis clarified the scoring and losing biases caused by shot numbers and gender differences in table tennis. Chinese players and players from other countries were then compared. The results showed that skills and tactics in receiving rallies differed significantly in matches played by male players, and that scoring skills and tactics in long rallies differed significantly in matches played by female players. The present findings demonstrate that the proposed method provides useful information and has advantages over the conventional method.
|
2018-04-03T05:27:21.244Z
|
2017-01-01T00:00:00.000
|
{
"year": 2017,
"sha1": "f636adfd45d5760276b561b2d2924ccb5a5a39ab",
"oa_license": "CCBYNCND",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/hukin/55/1/article-p7.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f636adfd45d5760276b561b2d2924ccb5a5a39ab",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14000366
|
pes2o/s2orc
|
v3-fos-license
|
Functional Alleles of Chicken BG Genes, Members of the Butyrophilin Gene Family, in Peripheral T Cells
γδ T cells recognize a wide variety of ligands in mammals, among them members of the butyrophilin (BTN) family. Nothing is known about γδ T cell ligands in chickens, despite there being many such cells in blood and lymphoid tissues, as well as in mucosal surfaces. The major histocompatibility complex (MHC) of chickens was discovered because of polymorphic BG genes, part of the BTN family. All but two BG genes are located in the BG region, oriented head-to-tail so that unequal crossing-over has led to copy number variation (CNV) as well as hybrid (chimeric) genes, making it difficult to identify true alleles. One approach is to examine BG genes expressed in particular cell types, which likely have the same functions in different BG haplotypes and thus can be considered “functional alleles.” We cloned nearly full-length BG transcripts from peripheral T cells of four haplotypes (B2, B15, B19, and B21), and compared them to the BG genes of the B12 haplotype that previously were studied in detail. A dominant BG gene was found in each haplotype, but with significant levels of subdominant transcripts in three haplotypes (B2, B15, and B19). For three haplotypes (B15, B19, and B21), most sequences are closely-related to BG8, BG9, and BG12 from the B12 haplotype. We found that variation in the extracellular immunoglobulin-variable-like (Ig-V) domain is mostly localized to the membrane distal loops but without evidence for selection. However, variation in the cytoplasmic tail composed of many amino acid heptad repeats does appear to be selected (although not obviously localized), consistent with an intriguing clustering of charged and polar residues in an apparent α-helical coiled-coil. By contrast, the dominantly-expressed BG gene in the B2 haplotype is identical to BG13 from the B12 haplotype, and most of the subdominant sequences are from the BG5-BG7-BG11 clade. Moreover, alternative splicing leading to intron read-through results in dramatically truncated cytoplasmic tails, particularly for the dominantly-expressed BG gene of the B2 haplotype. The approach of examining “functional alleles” has yielded interesting data for closely-related genes, but also thrown up unexpected findings for at least one haplotype.
Keywords: B-g, membrane protein, adaptive immunity, innate immunity, B7 family

Introduction

The chicken major histocompatibility complex (MHC) was first described as the B blood group, based on serological reactions mainly with the so-called BG antigen on erythrocytes. Later experiments showed that recombination events could separate most of the BG antigen reactivity in the BG region from the antigens encoded by classical class I and class II genes in the BF-BL region (1)(2)(3)(4)(5). The fact that BG molecules, like class I and class II molecules, are highly polymorphic cell surface antigens with wide tissue distributions and encoded in the MHC led one eminent researcher to refer to them as the class IV antigens, and to the early speculation that they might be the ligands of the newly discovered chicken γδ T cells, but various approaches to demonstrate this possibility failed (6,7). It is now clear that some homologs of the BG molecules, such as the butyrophilin (BTN) and butyrophilin-like (BTNL) molecules, may indeed be the ligands of mammalian γδ cells (8)(9)(10)(11). The discovery of myelin oligodendrocyte glycoprotein (MOG) in the nervous system of rodents and of BTN in lipid droplets of cow milk (12)(13)(14) eventually led to the description of the BTN gene family. This BTN family includes BTN, BTNL, skin T cell (SKINT), and BG genes, based mainly on the sequence relationships of the immunoglobulin-variable-like (Ig-V) extracellular domain, and is overall part of the larger B7 gene family (15). Certain BTN family members are known to be involved in immunological reactions, including some expressed on T cells reported to be involved in negative co-stimulation and some expressed as heterodimers on epithelial cells involved in recognition by T cells with certain restricted γδ TCRs (8,10,16,17).
There are similarities but also differences between the mammalian BTN family members and the chicken BG molecules. Both the BTN and BG genes are multigene families with wide tissue distribution, some members being expressed on hemopoietic cells and others on other cell types, particularly epithelial cells (8-11, 18, 19). Some BTN family genes are known to function as heterodimeric glycoproteins in recognition by mammalian γδ T cells (16,17); BG molecules have long been known to be disulfide-linked dimers, although without apparent glycosylation, and the presence of homo- versus hetero-dimers has not been resolved (20)(21)(22). However, there are various intron-exon and domain organizations within the mammalian BTN family (8)(9)(10)(11), none of which are identical to the BG genes (20,23,24). In particular, the cytoplasmic tails of mammalian BTN family members have only a few heptad repeats and typically end with a B30.2 (also called PRY-SPRY) domain; by comparison, the BG molecules all have long cytoplasmic tails composed of many heptad repeats. Moreover, high serologic polymorphism, copy number variation (CNV) and rapid evolution of BG genes in the BG region have been reported compared with the mammalian BTN family members (24).
At the moment, it is not clear whether the polymorphism of the BG genes is functionally important. Comparison of alleles of BG loci was easy for the two singleton genes: the nearly monomorphic BG0 gene on chromosome 2 and the polymorphic BG1 gene in the BF-BL region on chromosome 16 (25). All other known BG genes are located head-to-tail in the BG region on chromosome 16, which renders them targets for apparent gene conversion (meaning that the polymorphism might be due to drift rather than selection) and also subject to unequal crossing-over (meaning that the CNV makes it hard to unequivocally identify orthologous alleles in different BG haplotypes) (24,25).
To approach these problems, we have assumed that the genes from different haplotypes expressed in particular cell types could be considered alleles in a functional sense. If such "functional alleles" could be reliably identified, then the sequences could be compared for amount and location of variation, and assessed for selection at the protein level.
The BG genes of the B12 haplotype are the most intensely studied, and one of the simplest patterns was from peripheral T cells, in which the BG9 gene was strongly expressed and the BG12 gene was weakly expressed, as assessed by reverse-transcriptase polymerase chain reaction (RT-PCR) with SS-TM primers that amplified the signal sequence to transmembrane region, followed by cloning and sequencing (24). In this study, we developed "HU" primers from near the beginning of the 5′ untranslated region (5′UTR) of hemopoietic ("H") BG genes to near the end of the 3′ untranslated region (3′UTR) of all known (universal or "U") BG genes, and sequenced the nearly full-length amplicons from four chicken lines with other B haplotypes: line 61 (B2), line 15I (B15), line P2a (B19), and line N (B21). We expected to find a single or dominantly expressed BG gene in each haplotype that would be closely related to the BG9 gene found in the B12 haplotype, which would allow us to determine whether the sequence variation between haplotypes is localized and/or selected in the extracellular region, the cytoplasmic tail, both, or neither.
Materials and Methods

Chicken Lines and Haplotypes
Four lines of White Leghorn chickens were maintained under specific pathogen-free conditions at the Pirbright Institute (formerly the Institute for Animal Health) in Compton, UK: line N, line P2a, line 15I, and line 61, with the MHC haplotypes B21, B19, B15, and B2, respectively. The history of these lines has been described (26).
Isolation of Cells
Peripheral blood was taken from wing veins with heparin and washed twice with cold PBS by centrifugation at 300 g at 4°C for 5 min and resuspension in cold PBS. Cells were counted using a hemocytometer, and around 5 × 10⁷ lymphocytic cells in 2 ml were stained at 4°C in the dark for 1 h using T cell-specific antibodies [10 µl mouse anti-chicken CD4-FITC and 10 µl mouse anti-chicken CD8b-FITC for lines N and P2a; 10 µl mouse anti-chicken CD4-RPE and 10 µl mouse anti-chicken CD8-RPE for lines 15I and 61 (all antibodies from Southern Biotech)]. The cells were then washed 3-4 times with cold PBS and resuspended in 1 ml cold PBS for sorting, using magnetic-activated cell sorting (MACS) or fluorescence-activated cell sorting (FACS), as described for each line in the Results.

RNA Isolation, cDNA Synthesis, and PCR Amplification

Total RNA was extracted from roughly 1 × 10⁶ sorted T cells following the manufacturer's protocol for the NucleoSpin RNA II RNA extraction kit (Macherey-Nagel). First-strand cDNA was produced from 5 to 10 ng RNA following the manufacturer's protocol for the Maxima H Minus First Strand cDNA Synthesis Kit (ThermoFisher). Briefly, the RNA was mixed with oligo-(dT)18 primer and dNTP mixture, heated at 65°C for 5 min, and chilled on ice for 3 min; RT buffer and Maxima H Minus Enzyme Mix were then added, and the reaction mixture was incubated at 55°C for 45 min, followed by 85°C for 45 min to inactivate the enzyme.
Cloning and Sequencing
Several bands were generated by the HU-PCR reaction, as illustrated by 1% agarose gel electrophoresis of a representative example (Figure 1). Amplification with SS-TM primers in pilot experiments revealed that all bands from 1,500 to 3,000 bp contained BG cDNA sequences. For the final experiments, a single region was cut out of the gel after a shorter electrophoresis time (20 min at 100 V), so that sequences of all these sizes were treated in parallel. DNA was extracted using the ISOLATE II PCR and Gel Kit (Bioline).
DNA fragments were cloned into the pJET vector (CloneJET PCR cloning kit, ThermoFisher), 92-96 colonies were picked for colony PCR using HU primers, and DNA from the 40-60 positive clones was prepared by Miniprep (PureLink Quick Plasmid Miniprep Kit, Invitrogen) and sent for dideoxy chain termination sequencing (DNA Sequencing Facility, Department of Biochemistry, University of Cambridge). Sequencing primers were T7 (5′ TAATACGACTCACTATAGGG 3′), pJETR (5′ AAGAACATCGATTTTCCATGGCAG 3′), UC699 (5′ TTTTCTATGATCATCC 3′), UC700 (5′ TTTTCTATGATCATCC 3′), UC701 (5′ TGGCTCTGCACYTCCTCS 3′), and UC703 (5′ TGRACCTGGAGGTGTCAG 3′). Sequencing identified 35-45 BG clones, some of which were BG0 and BG1 and therefore were not analyzed further. Names were given to the sequences from the remaining clones according to the following convention: abbreviated line name, "T" for T cells, "BG", a letter representing the exon 2 sequence with "a" being the most frequently detected exon 2 sequence ("b" being the second most frequently detected exon 2 sequence, and so forth), then a dash and a number representing the alternative splicing variant with "1" being the most frequently detected clone ("2" the second most frequently detected clone, and so forth). Some of these clones were eventually found to be chimeras and were not considered further in the analyses, leading to 57 final sequences (Figure S1 in Supplementary Material).
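The naming convention can be made concrete with a small helper; this is purely illustrative (the function name and its inputs are ours, not part of the paper's pipeline):

```python
def clone_name(line_abbrev, exon2_rank, splice_rank):
    """Build a clone name such as 'NTBGa-1' from the convention above.

    line_abbrev : abbreviated line name, e.g. 'N', 'P2a', '15I'.
    exon2_rank  : 1-based rank of the exon 2 sequence by detection
                  frequency (1 -> 'a', 2 -> 'b', ...).
    splice_rank : 1-based rank of the alternative splicing variant.
    """
    letter = chr(ord('a') + exon2_rank - 1)
    return f"{line_abbrev}TBG{letter}-{splice_rank}"

assert clone_name("N", 1, 1) == "NTBGa-1"      # dominant exon 2, commonest splice
assert clone_name("15I", 2, 3) == "15ITBGb-3"  # second exon 2 sequence, third variant
```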
Sequence Analysis
Sequencing trace data were viewed, trimmed, and assembled in CLC DNA Workbench 5 (QIAGEN). Primary sequence alignments were carried out in CLC, and finished sequences were exported into MEGA7 for Clustal W alignment, from which the ".meg" file was generated and used for phylogenetic analysis by the Neighbor-Joining method in MEGA7 with 1000 bootstrap replicates as the phylogeny test. Sequence alignments were imported into BioEdit Sequence Alignment Editor, then exported as "rich text with current shaded view setting", opened in Word (Microsoft) and modified manually by adding annotations. Helical wheel analysis of the cytoplasmic tails was done using the online program DrawCoil 1.0, and Figure 8 and Figure S5 in Supplementary Material were modified from the diagrams generated by this program. The model of the Ig-V domain of BG8 in Figure 7 was built with Swiss-Model based on the template of the MOG molecule (PDB ID 3csp.1), which shares 40.35% identity in amino acid sequence. The structure was then viewed, edited, and annotated in PyMOL. All other figures were designed and manipulated in Word or Powerpoint (Microsoft).
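The MEGA7 workflow (Clustal W alignment, then a Neighbor-Joining tree with 1000 bootstrap replicates) can be approximated in an open-source scripting environment. The sketch below uses Biopython; the filename `bg_transcripts.aln` and the identity-based distance model are our assumptions, not the paper's settings:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Load a multiple sequence alignment produced elsewhere (e.g. Clustal W).
alignment = AlignIO.read("bg_transcripts.aln", "clustal")

# Neighbor-Joining tree from pairwise identity distances.
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, method="nj")
tree = constructor.build_tree(alignment)

# 1000 bootstrap replicates, summarized as a majority-rule consensus tree
# (MEGA7 instead reports bootstrap support on the NJ topology).
consensus = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)

Phylo.draw_ascii(consensus)
```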
Results
One Dominant and Several Other BG Genes Are Expressed in Peripheral T Cells of Each B Haplotype, With Most Being Part of the BG8-BG9-BG12-BG13 Clade

Peripheral T cells were isolated from the blood of four chickens from lines with different B haplotypes: line 61 (B2) and line 15I (B15) by FACS, and line P2a (B19) and line N (B21) by MACS, in both cases with a cocktail of monoclonal antibodies (mAb) to CD4 and CD8. Total RNA was converted to cDNA using reverse transcriptase and an oligo-dT primer, and then nearly full-length transcripts were amplified by PCR using HU primers, cloned and sequenced on both strands. Two independent PCR reactions were analysed, and for line 61 (B2) a third PCR reaction was carried out using SS-TM primers, which are expected to detect all BG transcripts.
For each chicken line, 26-84 BG cDNA clones (excluding BG0 and BG1 clones) were isolated and then sequenced with a variety of primers, with the reads assembled and analysed (Figure 2). Fifty-seven unique sequences were found (Figure S1 in Supplementary Material). Assuming that each unique sequence of the extracellular Ig-V domain (encoded by exon 2) corresponds to a gene, the 57 unique sequences originate from 16 genes, with 3-5 genes expressed in each haplotype, none of which were shared between any two of the four haplotypes (Figure 2). Comparison of the nearly full-length sequences within each gene, grouped by exon 2 sequence, revealed that almost all differences were due to alternative splicing events in the cytoplasmic tail, which will be described further in a later section of the Results. However, some exon 2 sequences within a line differ in only one nucleotide and were found only in a single PCR (see Figure S1 in Supplementary Material). It seems likely that some of these clones are due to nucleotide mis-incorporation during amplification, but they were considered as separate genes since there are examples of separate genes with single nucleotide differences within the B12 haplotype (24). Based on the number of clones with different exon 2 sequences, there is one gene expressed more than the others in all four samples, but only in line N (B21) was one gene overwhelmingly dominant, as found previously for the B12 haplotype (Figure 2).
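Operationally, assigning clones to putative genes and ranking them by abundance is a grouping-and-counting step; a minimal sketch (the data structure is a stand-in for the paper's clone records) could look like this:

```python
from collections import Counter

def rank_exon2_sequences(clones):
    """clones: iterable of (clone_id, exon2_sequence) pairs.

    Groups clones by their exon 2 sequence (one putative gene per unique
    sequence) and returns (sequence, clone_count) pairs, most abundant
    first; the top entry corresponds to the dominantly expressed gene.
    """
    counts = Counter(seq for _, seq in clones)
    return counts.most_common()

# Toy usage with made-up 6-nt "exon 2" sequences.
clones = [("c1", "ATGGCA"), ("c2", "ATGGCA"), ("c3", "ATGGCT"), ("c4", "ATGGCA")]
for seq, n in rank_exon2_sequences(clones):
    print(seq, n)   # ATGGCA 3 (dominant), then ATGGCT 1
```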
Based on the intron-exon structures of BG genes (Figure 1) from the well-characterized B12 haplotype, the cDNA sequences from the four haplotypes could be organized conceptually into transcript sequences without introns, which could be used for the first stage of analysis. By comparison with the 14 BG genes of the B12 haplotype (24), the conceptual transcript sequences from these cDNA clones were mainly from the phylogenetic clade of BG8-BG9-BG12-BG13 genes of the B12 haplotype (Figure 3). The sequences of this clade have a 5′UTR characteristic of hemopoietic BG genes (as expected for genes amplified with an H forward primer) with a cytoplasmic tail and 3′UTR characteristic of the so-called type 2 sequence, quite different from the 5′UTR sequences of tissue BG genes and from those genes with so-called type 1 cytoplasmic region and 3′UTR sequences (Figure 4). Some features of the BG genes from line 61 (B2) are different from those of the other three haplotypes. Throughout the length of the sequences, the dominantly expressed conceptual transcripts (as defined in the previous paragraph) from B15, B19, and B21 (BG8, BG9, and BG12 gene sequences from B12) are much more closely related to each other than to the dominantly expressed conceptual transcript from B2 (and the BG13 gene sequence) (Figures 3 and 5). The BG13 gene had already been seen to have an apparent gene conversion in exon 2 (25), but based on these latest data, we now consider the BG8-BG9-BG12 clade as having a type 2a cytoplasmic tail, with BG13 (and other sequences such as BG3, BG4, and BG6) having a type 2b cytoplasmic tail (but not for the 3′UTR, which is nearly identical in all these sequences). Another surprise was the fact that many of the cDNA sequences isolated from line 61 (B2) are in fact identical (or very nearly so) to BG genes from the B12 haplotype (Figure 3), an unexpected finding for us since the B haplotype was originally defined by serology predominantly of the BG region. In fact, the serological identity of B2 and B12 molecules on erythrocytes was noted long ago (5), and confirmed by two-dimensional gel analysis (27). The mystery deepens with the realization that the dominantly expressed BG gene from B12 T cells is BG9 (despite the presence of a BG13 gene), whereas the dominantly expressed BG transcript from B2 T cells is identical in sequence to BG13, with no BG9 sequence found (Figures 2 and 3).

Figure 2 | … from the four chicken lines; HU, hemopoietic forward and "universal" reverse primers to give nearly full-length sequences; SS-TM, signal sequence forward and transmembrane reverse primers to give SS, extracellular Ig-V domain and TM regions. Different colors indicate different exon 2 sequences, except those sequences that are only found in one PCR reaction. Names follow the convention: abbreviated line name, "T" for T cells, "BG" and a letter representing the exon 2 sequence with "a" being the most frequently detected exon 2 sequence (and "b" being the second most frequently detected exon 2 sequence, and so forth); numbers in parentheses indicate the number of clones found for a particular exon 2 sequence out of the total number for the particular PCR reaction. Bottom panel, the total results for four chicken lines from this paper and for the CB line (B12) from Ref. (24).
Finally, the subdominant cDNA sequences varied between haplotypes. Most of these subdominant sequences are also most closely related to the BG8, BG9, and BG12 sequences, but some are more closely related to the BG5-BG7-BG11 clade (Figures 3 and 4), which has 5′UTR sequences characteristic of hemopoietic BG genes but with a cytoplasmic tail and 3′UTR characteristic of the so-called type 1 sequence (24). In particular, of the four subdominant transcripts in line 61 (B2), one is identical and another nearly identical to BG7 while a third (for which only the V domain sequence is complete) is identical to BG11 (Figures 3 and 4).
The Dominantly Expressed BG Genes for Three Haplotypes Show Evidence for Clustering of Variation but Not Selection in the Extracellular Domain, Compared With Selection but Not Clear Clustering in the Cytoplasmic Tail

All BG genes can be divided up into the 5′UTR and signal sequence encoded by exon 1, the Ig-V extracellular domain encoded by exon 2, the transmembrane region encoded in exon 3, a cytoplasmic tail of heptad repeats mostly encoded by many 21 nucleotide exons, and the 3′UTR encoded within the final exon (Figure 1) (24). The phylogenetic relationships seen for exon 2 are true for the whole of the conceptual transcripts, except for the few that have a type 1 cytoplasmic tail and 3′UTR.

Figure 3 | Phylogenetic tree of nucleotide sequences for the "(nearly) full-length conceptual transcripts" (i.e., exons without introns) for the 16 genes from four chicken lines identified in this paper, and for the 14 BG genes of the B12 haplotype from Ref. (24). Names of the transcripts follow the convention: abbreviated line name, "T" for T cells, "BG" and the letter "a" representing the most frequently detected clone from the most frequently detected exon 2 sequence (and "b" representing the most frequently detected clone from the second most frequently detected exon 2 sequence, and so forth). Names of the genes follow the convention "BG" and the number of the gene locus from the B12 haplotype. Indicated by color are those clades with 5′ ends of hemopoietic (blue) and tissue (green), and by brackets for 3′ ends of type 1 and type 2. Branch lengths are scaled by genetic distance, and percentage bootstrap values are indicated at the nodes. 6TBGd is not present in this tree since it was only detected by the SS-TM amplification, and some sequences may be due to nucleotide mis-incorporation during amplification (for instance, NTBGa may have given rise to NTBGb and NTBGc, see Figure S1 in Supplementary Material).
It is of interest to gain insight into the features of the sequences at the nucleotide and amino acid level, including the location and potential clustering of the sequence variation as well as any evidence for selection. As mentioned above, the dominantly expressed conceptual transcript from line 61 (B2) is identical to the BG13 sequence of the B12 haplotype, so there is no allelic variation to consider (Figures 3 and 5). However, there is variation throughout the conceptual transcripts of the dominantly expressed cDNAs from the three other haplotypes, which can be compared with the BG genes of the B12 haplotype (Figure 5; Figure S3 in Supplementary Material).
The 5′UTR of the dominantly expressed BG sequences expressed in T cells, like all other hemopoietic BG genes, has a large indel compared with those BG genes of the B12 haplotype that are expressed primarily in tissues (Figure S3 in Supplementary Material). Only 15 positions out of 137 nucleotides in the 5′UTR (excluding the primer binding site) differ in one or another of the dominantly expressed sequences from the four haplotypes (including B2) as well as the BG8, BG9, BG12, and BG13 genes of the B12 haplotype, and this variation is of unknown significance.
In the portion of exon 1 encoding most of the signal sequence (Figure 6; Figure S4 in Supplementary Material), only 1-6 differences out of 99 nucleotides leading to 0-4 changes in 33 amino acids are found in the dominantly expressed sequences from the four haplotypes (including B2) as well as the BG8, BG9, BG12, and BG13 genes of the B12 haplotype. There is only one (silent) nucleotide change that fails to lead to an amino acid change, so the variation might appear to be selected. However, this variation does not change the overall hydrophobic sequence nor does it change the signal peptidase site of three small amino acids (the last codon of which is split, with the second and third positions located in exon 2).
Compared with the BG8 gene of the B12 haplotype, the variation in the part of exon 2 encoding the extracellular Ig-V domain ranges from 4 to 9 differences out of 342 nucleotides leading to 1-5 changes in 114 amino acids in the three haplotypes (B15, B19, and B21), and 22 nucleotides and 12 amino acids for line 61 (B2) and BG13 (B12) (Figure 7; Figure S4 in Supplementary Material). The location of the variation is not clustered along the sequence, but for the three haplotypes (and BG8, BG9, and BG12 of the B12 haplotype), a structural model (Figure 7) shows that nearly all the amino acid variation is located in the membrane-distal loops presumably pointing away from the cell surface, with one position in the β-strands and one in the loops underneath the Ig-V domain. For line 61 (B2) and BG13 (B12), there is more variation away from the distal loops. However, there was no change in the cysteines that form the intra-domain disulfide bond, or in the cysteine located in the equivalent of complementarity determining region 1 (CDR1) that forms a disulfide bond between the two chains of a BG dimer.
Comparison of the codons in the extracellular Ig-V domain for which there is nucleotide variation (Figure 6; Figure S4 in Supplementary Material) shows that for each sequence of the three haplotypes, the nucleotide changes that lead to no change in the amino acid (silent or synonymous changes) versus those that lead to a change in the amino acid (replacement or non-synonymous changes) range from two silent and one replacement change to three silent and five replacement changes. Comparison of the dominantly expressed BG from line 61 (B2) and BG13 (B12) with the other three haplotypes (and BG8, BG9, and BG12) shows 7 silent and 12 replacement changes. Given that random changes would be expected to lead to only twice as many replacements as silent changes, these data are not consistent with strong selection.

Figure 5 | Alignment of amino acid sequences from the "(nearly) full-length conceptual transcripts" (i.e., exons without introns) for the dominantly expressed genes from four chicken lines identified in this paper, and for the appropriate BG genes (BG8, BG9, BG12, and BG13) of the B12 haplotype from Ref. (24). Names of the transcripts follow the convention: abbreviated line name, "T" for T cells, "BG" and the letter "a" representing the most frequently detected clone from the most frequently detected exon 2 sequence. Names of the genes follow the convention "BG" and the number of the gene locus from the B12 haplotype. Regions of the amino acid sequence are indicated with colors (but with amino acids split between two exons indicated by both colors): signal sequence, darker green; Ig-V domain, bright green; transmembrane region, darker brown (with lysine/threonine dimorphism in gray-blue); heptad repeats based on 21 nucleotide exons, alternating orange and light brown (except for some repeated exons in gray, light green, purple, yellow, and light blue). Letters indicate amino acids by single letter code, dots indicate identities with the BG8 sequence, dashes indicate no sequence present (deletion). The cytoplasmic tail of 6TBGa is conceptual, as alternative splicing leads to intron read-through and an early stop codon.

There are two kinds of transmembrane regions described for BG genes, which are also found in the conceptual transcripts of the four haplotypes (Figure S3 in Supplementary Material). The dominantly expressed BG sequence for line 61 (B2) is identical to BG13, with the transmembrane region bearing a lysine in the otherwise hydrophobic region. The dominantly expressed BG sequences from the other three haplotypes (and BG8, BG9, and BG12) all have a threonine instead of the lysine, along with nine other amino acid differences compared to BG13. There is no variation between the transmembrane region sequences of the three haplotypes (and only one amino acid difference in BG12).

Figure 6 | Compared with BG8 of the B12 haplotype, the number of silent and replacement changes by codon position for the "(nearly) full-length conceptual transcripts" (i.e., exons without introns) of the dominantly expressed genes from four chicken lines identified in this paper, and of the other appropriate BG genes (BG9, BG12, and BG13) of the B12 haplotype from Ref. (24). Names of the transcripts follow the convention: abbreviated line name, "T" for T cells, "BG" and the letter "a" representing the most frequently detected clone from the most frequently detected exon 2 sequence. Names of the genes follow the convention "BG" and the number of the gene locus from the B12 haplotype. Values are based on the alignments in Figure S4 in Supplementary Material, and amino acids from split codons at the edges of the exons are assigned to the exon with two of the three nucleotides of the codon (for instance, the last amino acid of the signal sequence is assigned to the Ig-V domain, which in fact starts with glutamine in the mature protein).
By contrast, there are only three silent nucleotide changes out of 17 total in line 61 (B2) and BG13, and three codons have multiple nucleotide changes, again consistent with some selection between the BG8-BG9-BG12 sequences and the BG13 sequences (Figure 6; Figure S4 in Supplementary Material).
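The silent-versus-replacement tally used above is straightforward to reproduce for any pair of aligned coding sequences; the sketch below (our illustration, not the paper's script) compares codons and classifies each changed codon by whether its translation changes:

```python
from Bio.Seq import Seq

def silent_vs_replacement(ref_cds, alt_cds):
    """Count silent and replacement codon changes between two aligned,
    gap-free coding sequences of equal length (a multiple of 3)."""
    assert len(ref_cds) == len(alt_cds) and len(ref_cds) % 3 == 0
    silent = replacement = 0
    for i in range(0, len(ref_cds), 3):
        ref_codon, alt_codon = ref_cds[i:i+3], alt_cds[i:i+3]
        if ref_codon == alt_codon:
            continue
        if Seq(ref_codon).translate() == Seq(alt_codon).translate():
            silent += 1
        else:
            replacement += 1
    return silent, replacement

# GAA->GAG is silent (Glu->Glu); GAA->AAA is a replacement (Glu->Lys).
print(silent_vs_replacement("GAAGAA", "GAGAAA"))   # (1, 1)
```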
The cytoplasmic tail is composed of amino acid heptad repeats encoded by 21 nucleotide exons (with a few exons of 18 or 24 nucleotides), the numbers of which vary between BG genes.

The presence of amino acid heptad repeats encoded by 21 nucleotide exons strongly suggests that the two cytoplasmic tails of a BG dimer form an α-helical coiled-coil, similar to what is sometimes called a leucine zipper (28,29). In such coiled-coils, the first and fourth amino acids in a true heptad repeat (from here called the a and d positions) act as the interface between the two chains, with some contribution by the neighboring amino acids (the e and g positions) (30). To better understand the sequence features of the cytoplasmic tail, as well as the location of any variation, representations of helical wheels were inspected.
Figure 7 | Alignment of amino acid sequences for the Ig-V domains of the dominantly expressed genes from four chicken lines identified in this paper, and for the appropriate genes of the B12 haplotype from Ref. (24), along with structural models of the Ig-V domains with the location of variation compared with the BG8 sequence of the B12 haplotype indicated. Names of the transcripts follow the convention: abbreviated line name, "T" for T cells, "BG" and the letter "a" representing the most frequently detected exon 2 sequence. Names of the genes follow the convention "BG" and the number of the gene locus from the B12 haplotype. In the top panel, letters indicate amino acids by single letter code, dots indicate identities with the BG8 sequence, residues that differ from BG8 are boxed in blue for the three lines, and red for line 61 and BG13; yellow indicates the intra-domain cysteines. The β-strands of the V region are indicated by arrows in the top panel, and are colored dark green for one face of the domain and light green for the other face. The same color scheme is used for the three panels below, with the positions of residues for the three lines (and BG9 and BG12) that differ from BG8 colored blue in the middle panel, and positions of residues for line 61 and BG13 that differ from BG8 colored red in the right-hand panel.

It seems unlikely that the a and d amino acids forming the interface of the two chains in the coiled-coil would involve the first amino acid of each 21 nucleotide exon, since that amino acid is encoded by one nucleotide from the previous exon followed by two nucleotides from the exon under consideration, and thus the first amino acid encoded by this split codon would vary depending on the previous exon. In fact, the helical wheels of both BG8 and BG13 revealed a clear pattern (Figure 8): the amino acids from the fourth codon and the last codon of the 21 nucleotide repeat are mostly hydrophobic, presumably corresponding to the a and d amino acids of the true heptad repeat that would form a hydrophobic interface between the two chains. Moreover, there are many fewer hydrophobic amino acids at the other positions, with many of the amino acids from the first and third codons (corresponding to the e and g positions) charged, potentially allowing salt bridges between oppositely charged amino acids of the two chains (30). It is not immediately clear from the data whether the potential salt bridges might be for homodimers or for heterodimers with some of the subdominantly expressed chains. However, the charges in the five positions other than those forming the hydrophobic stripe between the chains are clustered into acidic, basic, and polar patches along the coiled-coil, with a particularly clear acidic patch at the C-terminus. Also striking is the presence of a cysteine residue in the same position of the cytoplasmic tail of the conceptual transcripts of all the dominantly expressed BG molecules.
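The mapping from 21-nucleotide exons to heptad positions described above can be expressed compactly. In the sketch below (our illustration), each 21-nt exon contributes a 7-residue block, the fourth and seventh residues of which are taken as the putative a and d interface positions, and a simple hydrophobicity check flags them; the hydrophobic residue class is a rough, assumed one:

```python
HYDROPHOBIC = set("AVLIMFWCY")   # a rough hydrophobicity class (assumption)

def interface_positions(tail_seq):
    """Split a cytoplasmic tail into 7-residue blocks (one per 21-nt exon)
    and report the residues at the 4th and 7th position of each block,
    the putative a and d interface positions of the coiled-coil."""
    blocks = [tail_seq[i:i+7] for i in range(0, len(tail_seq) - 6, 7)]
    report = []
    for b in blocks:
        a_res, d_res = b[3], b[6]
        report.append((a_res, a_res in HYDROPHOBIC, d_res, d_res in HYDROPHOBIC))
    return report

# Toy tail with leucines at the 4th and 7th positions of each block.
for row in interface_positions("EKQLEDLSRELSAL"):
    print(row)   # ('L', True, 'L', True) for both blocks
```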
There is no sequence variation in the cytoplasmic tail between the dominantly expressed BG conceptual transcripts for line 61 (B2) and BG13 from the B12 haplotype, but the variation among the three other haplotypes, BG8, BG9, and BG12 is scattered along the sequence (except for an apparent insertion in the B15 sequence from line 15I), with only one a and one d position variable out of 25 variable positions in total (Figure 8; Figures S2 and S3 in Supplementary Material). This variation is all di-allelic, most of it arguably conservative changes (A/T, M/T, E/D, Q/H, A/G, L/V, Y/S, A/I, and A/P) with only a few arguably radical changes (A/E, K/E, L/Q, K/N, Q/R, K/Q, and R/H). Decorating the coiled-coil representation of the cytoplasmic tail sequence revealed that much of the variation is located in two parts of the coil, heptads 11-18 and 23-26 of 33 (Figure 8), but whether this constitutes clustering is not yet clear. The cytoplasmic tail from the dominantly expressed conceptual transcript of line 61 (B2) and from the BG13 gene (B12) is shorter (27 heptads) than those of the dominantly expressed genes from B15, B19, and B21 and the BG8, BG9, and BG12 genes from the B12 haplotype (Figure 8). Interestingly, the actual cytoplasmic tails of the dominant and some subdominant sequences of line 61 (B2) are much shorter still (Figure 8; Figure S5 in Supplementary Material), as discussed below.
Unlike the protein coding regions including the cytoplasmic tail, the final exons (which include the 3′UTR) of the dominantly expressed BG genes of all four haplotypes as well as the BG8, BG9, BG12, and BG13 genes are co-linear (except for a 20 nucleotide insertion in BG9 that is shared with most BG genes not in the BG8-BG9-BG12-BG13 clade) and nearly identical in sequence (Figures S3 and S4 in Supplementary Material), including the 27 nucleotides that code for protein in BG9 and the dominantly expressed genes.

The analysis thus far has assumed that the RNA transcripts correspond to the exons as identified by their sequence features, without any introns that were present, a minimal length for the mRNA. However, many of the 57 unique sequences actually isolated include stretches of sequence that are clearly introns, based on comparison with known genes in the B12 haplotype (Figure 9; Figures S1 and S3 in Supplementary Material).
Almost all of the retained introns lead to in-frame stop codons, some of which occur long before the stop codon expected from the conceptual transcripts (Figure 9; Figures S1 and S3 in Supplementary Material). The dominantly expressed BG sequence from line 61 (B2) retains the intron directly after the first 21 nucleotide exon, which truncates the cytoplasmic tail after only 13 amino acids (Figure 8; Figure S1 in Supplementary Material). Thus, the dominantly expressed BG sequence from line 61 (B2) not only has a different sequence from the other three haplotypes but also lacks the long cytoplasmic tail. Some of the subdominant sequences also have truncated cytoplasmic tails, some with clusters of cysteines (Figures 8 and 9; Figure S5 in Supplementary Material).
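Detecting such intron read-through in a cDNA clone amounts to translating it in frame and asking whether a stop codon appears before the end expected from the conceptual (intron-free) transcript. A minimal sketch (our illustration; a real analysis would also align against the genomic sequence):

```python
from Bio.Seq import Seq

def first_stop_codon(cds):
    """Return the 0-based codon index of the first in-frame stop, or None."""
    protein = str(Seq(cds).translate())
    idx = protein.find("*")
    return None if idx == -1 else idx

def is_truncated(clone_cds, expected_codons):
    """True if the clone stops earlier than the conceptual transcript would."""
    stop = first_stop_codon(clone_cds)
    return stop is not None and stop < expected_codons

# Toy: TGA after 2 codons, against an expected 5-codon open reading frame.
print(is_truncated("ATGAAATGAGGGCCC", expected_codons=5))   # True
```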
Discussion
To overcome the difficulty of identifying truly orthologous alleles in the ever-shifting panoply of BG genes in the BG region, we adopted the approach of looking at the BG transcripts in single cell types to identify "functional alleles." Based on our limited examination of the transcripts in cells and tissues of the B12 haplotype (24), we began with peripheral T cells from chicken lines bearing four additional haplotypes. The overall results are summarized as a cartoon (Figure 10).
Based on our previous results, we expected a single dominantly expressed BG transcript (perhaps with another subdominant transcript at a low level) for each haplotype. We hoped that the transcripts from the five haplotypes would be similar enough that we could identify limited variation and determine whether such variation was clustered in regions of the protein with functional significance and/or under selective pressure. In particular, we wanted to ascertain whether such variation in the extracellular Ig-V domain and the cytoplasmic tail showed evidence for selected function, since there is no evidence that the serological polymorphism found in the extracellular Ig-V domain has functional significance while the two reported examples of function have been localized to the cytoplasmic tail. In fact, we found a series of surprises.
First, only line N (B21) cells had one really dominantly expressed BG gene, as we had found with the CB (B12) line. The other three lines with different haplotypes had one dominant gene expressed, but the subdominantly expressed genes were present in significant amounts. We were so concerned about this result that we carried out a third amplification of the cDNA from line 61 (B2) using SS-TM primers (the same used for the B12 experiments), which gave the same dominantly expressed BG gene but a completely different subdominant BG gene compared with the amplifications with HU primers. Therefore, we are not completely convinced that our amplifications are without bias. Unbiased approaches such as RNAseq or proteomics might be suitable for answering this question.
The significant levels of subdominant BG transcripts in three of the four haplotypes did lead us to wonder whether the dominantly expressed BG protein in some haplotypes might associate with subdominant BG proteins to make heterodimers. Another possibility is that some or all of these expressed proteins might associate as heterodimers with BG0 or BG1 chains, which were found in all cells at significant amounts. We were unable to see an obvious pattern from our helical wheel analyses. Careful analyses at the protein level of ex vivo cells as well as flow cytometry and biochemical analysis of cells transfected with one versus two BG genes might help answer this question.
Second, three of the four haplotypes have dominantly expressed BG genes with sequences close enough with each other (and to the gene strongly expressed in the B12 haplotype) to allow good comparisons for allelic variability, but one haplotype is rather different. The dominantly expressed BG gene from all four haplotypes came from one clade of BG genes in B12 chickens (BG8-BG9-BG12-BG13, hemopoietic 5′ UTR with type 2 cytoplasmic tail and 3′UTR). The T cells from line 15I (B15), line P2a (B19), and line N (B21) expressed BG genes that are very closely related to the BG9 gene (and also to BG8 and BG12). By contrast, the dominantly expressed BG gene from line 61 (B2) has many differences throughout the sequence (except in the 3′UTR). This B2 gene seems identical with the BG13 gene of the B12 haplotype (as presaged by serology of erythrocytes (5)), which is not expressed in B12 T cells (at least as assessed by amplification with the SS-TM primers (24)) despite being present in the B12 haplotype.
Among the subdominant BG genes expressed at significant levels, most are closely related to BG8, BG9, and BG12, with none that clustered with BG13. However, there were several subdominant BG genes whose overall sequences clustered with the BG5-BG7-BG11 clade (hemopoietic 5′ UTR with type 1 cytoplasmic tail and 3′UTR), three from line 61 (B2) and one from line P2a (B19). The potential significance of these different sequences is unclear.
Third, the comparison of the Ig-V domains from closely related BG genes showed clustering of the variation but no evidence for selection at the protein level. Only low levels of variation were found in the Ig-V domain of the dominantly expressed BG genes of the three haplotypes along with the closely related BG8, BG9, and BG12 genes of the B12 haplotype. This variation is mainly localized to the distal loops, which could suggest selection for functional interactions with other molecules, but there was no evidence for selection of variation based on non-synonymous (replacement) versus synonymous (silent) changes; perhaps data from additional haplotypes will help. By contrast, the dominantly expressed BG gene of line 61 (B2) is identical to the BG13 gene of the B12 haplotype, the differences with the other three haplotypes and three other genes of the B12 haplotype were scattered throughout the structure, and again there was no support for selection at the protein level.
Fourth, by contrast to the extracellular Ig-V domain, there was clear support for selection of variation in the cytoplasmic tail, which could be mapped to a conceptual model of an α-helical coiled-coil. The presence of exons with 21 nucleotide repeats encoding heptad amino acid repeats in a molecule known to be a dimer originally prompted the view that this portion of the molecule is a coiled-coil of two α-helices (20,24,31). This view was supported by the isolation of a soluble BG cytoplasmic tail as a molecule that displaces tropomyosin (32,33), an actin-myosin regulator composed of a coiled-coil (34).

Figure 10 | Cartoon summary of the findings, showing that there are no expressed sequences identical between the four chicken lines but that line 61 has sequences identical to genes of the B12 haplotype (which, however, are not expressed in T cells of the B12 haplotype), that most expressed sequences are from the BG8-BG9-BG12 clade although line 61 (B2) has a dominantly expressed BG sequence of the BG13 clade and two subdominant sequences from the BG5-BG7-BG11 clade, and that cytoplasmic tails are mostly type 2a and that the length varies due to alternative splicing (intron read-through). The cartoon shows BG proteins in cells of each haplotype, with the numbers of each protein reflecting the ratio of different sequences in that haplotype. Extracellular Ig-V domains are represented by shapes to indicate the relationship to clades of BG genes from the B12 haplotype (ovals, BG8-BG9-BG12 clade; pentagons, BG13 clade; and diamonds, BG5-BG7-BG11 clade) and by color (colors as in Figure 2, with those sequences found in only one PCR represented by ovals and diamonds not filled with color); cytoplasmic tails are indicated by boxes representing heptad repeats, with lengths correlated with the length of the tail taking into account alternative splicing, and with colors representing the clade (type 1, blue and green; type 2a, bright red and brown; type 2b, dark red and brown; and 6TBGd, white since no data are available).
Visualization of the repeats in the BG cytoplasmic tails by forms of helical wheels (30) identified the amino acid encoded by the fourth codon and the last codon of each 21 nucleotide exon as predominantly hydrophobic, and thus likely to be the first ("a") and fourth ("d") amino acids of the true heptad. The fact that the true heptad repeat spans two exons was unexpected, but perhaps obvious in retrospect, given that the first codon is split and thus depends on two exons. This finding is most easily interpreted as a hydrophobic stripe on one α-helix interacting with a hydrophobic stripe on the other α-helix, buried between the two chains, whether as a homo-or a hetero-dimer. The other residues would project into the cytoplasm and, in the dominantly expressed BG genes, are present as patches of highly charged residues, along with a highly conserved cysteine. In some of the subdominantly expressed BG genes, there are clusters of cysteine residues. It is very likely that these various patches interact with other molecules, which might be identified by proteomics. Another possibility for the cysteines is modification, for instance palmitoylation (35) which could bring the α-helical coiled-coil to the underside of the membrane.
The variation in the cytoplasmic tail for the conceptual transcripts from the three haplotypes with similar BG sequences (and the BG8, BG9, and BG12 genes from the B12 haplotype) is predominantly located in two stretches at the five positions that are not in the hydrophobic stripe between the two α-helices of the coiled-coil. The functional significance of this variation is not clear, but the evidence from silent versus replacement substitutions supports selection for this variation. The only two known examples of function for BG molecules, regulation of actin-myosin by "zipper protein" in intestinal epithelial cells and effect of the BG1 gene on viral disease (32,33,36), are both associated with the cytoplasmic tail rather than the extracellular Ig-V domain, which may fit with the notion that the cytoplasmic tail is under selection for variation.
Fifth, many of the real transcripts had intron read-through that shortened the cytoplasmic tail compared to what was expected from the conceptual gene sequence, most dramatically truncating nearly the whole cytoplasmic tail of the dominantly expressed BG gene of line 61 (B2). Such intron read-through, a form of alternative splicing, was first noticed long ago in the cytoplasmic tails encoded by BG cDNAs (20,31). Some intron read-through seems to have become fixed, for example in BG1 genes, in which an active immunoreceptor tyrosine-based inhibition motif (ITIM) is located in an exon bounded by two 21 nucleotide repeats (25,36). A bioinformatic analysis of the 14 BG genes of the B12 haplotype found many read-through introns that led to in-frame stop codons, but no additional signaling motifs were obvious in those introns that read through in-frame (24).
The large proportion of transcripts with intron read-through was unexpected. One possibility that cannot be ruled out from our data is that these RNAs are incompletely spliced nuclear RNAs, which would never be translocated to the cytoplasm or be translated. However, the RNA was primed with oligo-dT for the reverse transcription step and the amplicons are nearly full-length, so it seems most likely that these RNAs are polyadenylated. Ultimately, isolation and analysis of cytoplasmic or polysome RNA and/or analysis at the protein level is required to be sure that these transcripts encode real BG molecules.
Assuming that the intron read-through leads to truncated cytoplasmic tails in a real BG dimer, this kind of alternative splicing could be a way to regulate the interaction of the BG dimer with other molecules (i.e., the interactome) between different cell types. One possible interaction might be with orphan 30.2 (PRY-SPRY) domains (37), which would result in BG-30.2 complexes reminiscent of BTN and BTNL proteins. It is also possible that the truncated cytoplasmic tail of the dominantly expressed BG from line 61 (B2) serves to ensure that the type 2b tail is not present in T cells, if it is type 2a tails that are necessary.
In summary, the work described in this report provides a first basis from which additional experiments can clarify the nature of the BG molecules found on the surface of different cell types, with the ultimate aim of determining the function of different domains of the molecules and the selection pressure under which they evolve. The unexpected results lead to many questions, which eventually will be answered in our quest to understand the structure, function and evolution of the BG genes and molecules.
Data Availability Statement
Twenty-nine sequences generated in this work have been deposited in GenBank under accession numbers MH156615 to MH156643.
Ethics Statement
This study was carried out in accordance with the recommendations of Home Office guidelines. The protocol was approved by the Local Ethics committee of the Pirbright Institute.
|
2018-05-01T13:03:56.133Z
|
2018-05-01T00:00:00.000
|
{
"year": 2018,
"sha1": "4a8bb60893227937cebf41b0485ad30d19306dc9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2018.00930/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a8bb60893227937cebf41b0485ad30d19306dc9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
1229056
|
pes2o/s2orc
|
v3-fos-license
|
Kosterlitz-Thouless theory and lattice artifacts
The massive continuum limit of the 1+1 dimensional O(2) nonlinear $\sigma$-model (XY model) is studied using its equivalence to the Sine-Gordon model at its asymptotically free point. It is shown that leading lattice artifacts are universal but they vanish only as inverse powers of the logarithm of the correlation length. Such leading artifacts are calculated for the case of the scattering phase shifts and the correlation function of the Noether current using the bootstrap S-matrix and perturbation theory respectively.
Introduction
In this paper we study the properties of the two-dimensional O(2) nonlinear σ-model, better known as the XY model. This model has been the subject of extensive theoretical and numerical analysis, starting with the seminal papers of Kosterlitz and Thouless (KT) [1]. For a review of KT theory, see [2].
Analytical work is usually based on a series of mappings that starts at the original (lattice) XY model and arrives at the Sine-Gordon (SG) model or its fermionic equivalent, a (deformed) version of the chiral Gross-Neveu (CGN) model. This latter formulation is most useful if one wants to study questions related to the dynamically generated SU(2) symmetry of the model.
Most papers on the XY model study its properties interesting for Statistical Physics, in particular the peculiarities of the KT phase transition, which is of infinite order. In this paper we look at the XY model as an example of 1 + 1 dimensional relativistic Quantum Field Theory. More precisely, we study the massive continuum limit of the lattice theory, which, in the language of Statistical Physics, means that we approach the KT phase transition point from the high temperature phase.
Treating the XY model as the n = 2 member of the family of O(n) nonlinear σ-models gives additional insights, since a lot is known about the n ≥ 3 models [3]. More importantly, using the SG language, we show that the approach to the continuum limit in this model is much slower than in most other lattice models: lattice artifacts vanish not with the usual Symanzik-type behaviour [4] (i.e. integer powers of the lattice spacing) but only as inverse powers of the logarithm of the lattice spacing. On the other hand, we can show that the leading artifacts are universal and calculable. Our main result is Eqs. (82) and (84) in Section 4, which allow us to calculate leading lattice artifacts in terms of SG data.
We can make use of the fact that the SG model is exactly solvable and its bootstrap S-matrix is exactly known. We calculate the leading artifacts for the scattering phase shifts using the bootstrap results. An alternative method is perturbation theory (PT). Since the SG model is asymptotically free if we use suitable expansion parameters [5], the methods of renormalization group (RG) improved PT are thus available.
In Section 2 we review the relation of the XY model to the O(n) models with n ≥ 3 and describe the chain of mappings leading from the XY model to the SG model and its equivalent fermionic formulation.
In Section 3 we recall the analysis of the phase diagram of the model in the vicinity of the KT phase transition point. This is described in the SG language.
In Section 4 we explain how to calculate the lattice artifacts and apply this to the case of the scattering phase shifts.
Finally in Section 5 we calculate the lattice artifacts for the two-point correlation function of the Noether current corresponding to the O(2) symmetry. Here we use the method of RG improved PT. To calculate the value of a non-perturbative constant needed here, we also consider the system in the presence of an external field coupled to the Noether charge. (This calculation is analogous to the one used previously to determine the M/Λ ratio for the O(n) models [6].) We give here a parameter-free two-loop formula for the lattice artifacts.
A precision MC study of the massive continuum limit of the O(2) model will be described in a forthcoming paper [7].
From the XY model to the Sine-Gordon model and beyond
In this section we describe in some detail the chain of mappings starting with the XY model and ending at the SG model and its fermionic equivalent. We can treat the XY model as the n = 2 member of the family of O(n) nonlinear σ-models with Lagrangean (1). The n ≥ 3 models are known to be integrable: Polyakov [8] and Lüscher [9] have shown the existence of local and nonlocal higher spin conserved charges, respectively, whose existence implies quantum integrability. Assuming that the spectrum of the model consists of an O(n) vector multiplet of massive particles, the exact S-matrix of the n ≥ 3 models was found by bootstrap methods [3]: it can be written in the form (2), with building blocks (3)-(4) and the 'isospin 2' phase shift s^(2) given by (5). Much less is known about the O(2) model. A simple observation is that (3) and also (5) have a smooth n → 2 limit. It is natural to assume that the O(2) model is also integrable, that its spectrum consists of a single O(2) doublet of massive particles, and that their scattering is indeed described by the n → 2 limit of the S-matrix (2).
Although taking the formal n → 2 limit of the bootstrap results valid for n ≥ 3 is not convincing in itself, the conclusion turns out to be correct because as we will see it also follows from the Kosterlitz-Thouless theory [1] of the XY model.
Before turning to the KT theory we make a small digression to discuss the two-dimensional Sine-Gordon (SG) model. Its Lagrangean can be written as (6), where α is the dimensionless mass parameter, a is a constant of dimension mass⁻¹ and β is the SG coupling. It is also integrable, and its spectrum and S-matrix were also found in [3]. The spectrum depends on β in a complicated way, but it becomes simple in the range 8π > β² > 4π, where it is free of any bound states and consists of a single O(2) vector of massive particles whose S-matrix can again be written as (2), but now with (7), after introducing the parametrization (8); the 'isospin 2' phase shift for the SG model is given in terms of the function k̃(ω) of (10). Note that in the β² → 8π (ν → 0) limit the SG S-matrix coincides with the n → 2 limit of the O(n) S-matrix, in particular lim_{ν→0} k̃(ω) = lim_{n→2} K̃_n(ω). The identification of the XY model with the ν → 0 limit of the SG model is surprising, since in this limit the bootstrap S-matrix (7) becomes SU(2) symmetric, coinciding with the S-matrix of the SU(2) chiral Gross-Neveu (CGN) model [10]. It is not obvious where this enlarged symmetry comes from. The existence of a nontrivial XY model is even more surprising in the light of the fact that the beta-function of the coupling g² in (1) vanishes for n = 2, and by making the substitution S₁ = cos ϕ, S₂ = sin ϕ the Lagrangean (1) naively becomes free.
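For orientation, standard textbook forms of these two Lagrangeans are written below in LaTeX; the normalizations are our assumption and may differ from the paper's (1) and (6), but the α/a² factor is fixed by the statement above that α is dimensionless and a has dimension mass⁻¹.

    % Standard forms (a sketch in our normalization, not necessarily (1) and (6) verbatim):
    \mathcal{L}_{O(n)} = \frac{1}{2g^2}\,\partial_\mu \mathbf{S}\cdot\partial_\mu \mathbf{S},
    \qquad \mathbf{S}\cdot\mathbf{S} = 1,
    \qquad\qquad
    \mathcal{L}_{SG} = \frac{1}{2}\,(\partial_\mu\varphi)^2 - \frac{\alpha}{a^2}\,\cos(\beta\varphi).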
Kosterlitz and Thouless [1] argued that the fact that ϕ is a periodic (angular) variable plays an important role and therefore the model has nontrivial dynamics. They have shown that topologically nontrivial objects, vortices, are present in typical spin configurations and their interaction makes the theory nontrivial.
The standard lattice action of the XY model is given by (11). We denoted by K the inverse of the XY model coupling to avoid confusion with the SG coupling β. Assuming universality, not only the cosine function but any other 2π-periodic function W(ϕ) which has a local minimum at ϕ = 0 defines a possible XY model lattice action; the Villain model action [11] is characterized by (12). Kosterlitz and Thouless showed that typical spin configurations can be represented as a mixture of smooth, topologically trivial configurations (spin waves) and a gas of vortices (of integer topological charge). The KT vortices do not interact with the spin waves, but there is a logarithmic interaction potential between the vortices, which are therefore identical to a two-dimensional Coulomb gas. This spin wave + Coulomb gas (SWCG) picture is only approximate if we start from the standard action (11), but it is an exact duality transformation [12] for the Villain action corresponding to (12). That the XY model with standard action is in the same universality class as the Villain model was demonstrated using Monte Carlo renormalization group techniques [13]. On the other hand, it has been shown rigorously [14] that the Coulomb gas has a phase transition point at some finite critical coupling K_c. KT interpreted this phase transition as one of vortex condensation and by a (heuristic) energy-entropy consideration showed that in the vicinity of K_c only vortices of topological charge ±1 are important; higher vortices can be neglected. It is easy to see that this system (SWCG with unit charge vortices only) is exactly equivalent to the SG model. In ref. [5] it was shown that the extremal SG fixed point β* = √(8π), α* = 0 is appropriate to describe the KT phase transition. The renormalizability of the SG model around this point was explicitly demonstrated up to two-loop order in a simultaneous perturbative expansion in α and δ = (β² − 8π)/8π. Finally, there is a further transformation that explains the dynamical SU(2) symmetry of the XY model. The SG model can be exactly mapped [15] to a fermionic model formulated in terms of a two-component Dirac fermion ψ. The transformation is similar to the well-known one that relates the SG model to the massive Thirring model [16]. Here the fermionic model is a deformation of the chiral Gross-Neveu model with a four-fermion interaction, (13), where (14) is the fermionic SU(2) current. The relation between the SG couplings δ, α and the fermion couplings g₀, f₀ is given by (15), where the dots indicate that these relations receive higher order corrections in perturbation theory. In the fermionic formulation the KT fixed point is the Gaussian one, and for vanishing deformation parameter, f₀ = 0, the model is manifestly SU(2) symmetric. The corresponding relation in the SG language is α + 8δ = 0 at lowest order.
To summarize, the XY model in the vicinity of the Kosterlitz-Thouless transition point is believed to be described by the SG model with extremal coupling β = √ 8π. This is further equivalent to the two-component chiral Gross-Neveu model around its Gaussian point. We will use the SG language throughout this paper.
The SG description of the O(2) model
In this section we review the SG description of the KT theory closely following the approach of Amit et al. [5]. Without loss of generality we can adopt the somewhat unusual regularization scheme of the authors, since, as we will see, all important results are universal, i.e. independent of the regularization scheme. Nevertheless, it would be interesting to repeat all the calculations below using some of the more customary regularizations like the lattice or dimensional regularization.
Our starting point is the Euclidean Lagrangian [5], Eq. (16), where m₀ is an IR regulator mass and a is the UV cutoff (of dimension length). We have denoted the dimensionless SG couplings by β₀ and α₀ to emphasize that they are bare (unrenormalized). UV regularized correlation functions are calculated using as the φ propagator a function built from the modified Bessel function K₀. Our strategy is slightly different from [5], who really considered the renormalization of the massive SG model (16) of mass m₀. We treat m₀ as an IR regulator mass and consider IR stable physical quantities for which we can take the limit m₀ → 0 already at the UV regularized level (before UV renormalization). All renormalization constants are, for example, IR stable and independent of m₀. The SG coupling β₀ is close to its special value √(8π), and a simultaneous perturbative expansion in the mass parameter α₀ and the deviation δ₀ = (β₀² − 8π)/8π is defined; the parameters are renormalized multiplicatively. Here the Z-factors are functions of the renormalized couplings α and δ and of a logarithmic combination of the cutoff a and an arbitrary mass parameter µ (basically the normalization point). Similarly, a renormalization constant Z is necessary to make G, the spin-spin 2-point function, finite. For vanishing mass parameter α the Lagrangian (16) is trivial and Z_φ = 1, since there is no need to renormalize the SG coupling. The spin-spin correlation function (which is an exponential of the basic field φ) gets renormalized even at this point, but in this case its renormalization constant takes a simple explicit form. In addition, there is a symmetry α ↔ −α (which corresponds to a shift of the basic field by π/β₀).
Taking into account the above constraints, the perturbative expansion of the Z-factors is strongly restricted in form; Amit et al. [5] found the following results.
Furthermore h̃₁ = −1/256, but the number h₁ is not known at present. The above two-loop beta-function coefficients were also calculated by other methods; the original calculation has recently been reconsidered and the results (27) have been confirmed [17]. The spin-spin correlation function satisfies an RG equation, (28), where D is the renormalization group (RG) operator and the β- and γ-functions are given by (30)-(31). Now, as is well known, not all β-function coefficients are universal. For example, under a redefinition of the couplings the coefficients change according to (35); here (33)-(34) is the most general perturbative redefinition respecting the α₀ ↔ −α₀ symmetry together with the requirement that for α₀ = 0 the coupling δ₀ is not redefined. From (35) we see that in addition to the one-loop coefficients g₁ and f₁ there exist also two-loop invariants: they are g₃ and one further combination. Two important physical quantities are the correlation length ξ, defined in terms of the mass M of the physical particle, and the dimensionless susceptibility. From (28) it follows that the corresponding exponents satisfy a simple relation. It is useful to introduce the RG invariant quantity Q(α₀, δ₀), which satisfies DQ = 0. Introducing the inverse function k(δ₀, Q), we can define new β- and γ-functions. The advantage of using the variables δ₀ and Q is that the RG invariant Q can be treated in many respects as if it were a numerical constant and δ₀ were a single coupling constant. In particular, the functions Ψ₁ and Φ₁ introduced below can be determined from the new β- and γ-functions, respectively, and using (30) and (31) Q itself can be determined.
Using this in (42), we find the perturbative expansion of the Ψ function, while the Γ function to this order follows similarly. The constants appearing here are known both in closed form and numerically, but we will keep them explicit in the following to demonstrate that all our results are universal.
We know that a one-parameter renormalizable subspace in the δ₀–α₀ plane is equivalent to the SU(2) chiral Gross-Neveu model. (This is most evident in the fermionic formulation.) This subspace must correspond to the Q = 0 RG trajectory because we know that it goes through the point α₀ = δ₀ = 0. Moreover, it must be the δ₀ < 0 branch of the Q = 0 trajectory, since it is the one that is asymptotically free in perturbation theory. Following [5], the phase diagram of our model is represented in Figure 1, which will be referred to as PD for short; the CGN model corresponds to the separatrix S₂ on this plot. Region III corresponds to Q > 0, whereas Regions I and II correspond to Q < 0, δ₀ > 0 and Q < 0, δ₀ < 0, respectively.
[Figure 1: phase diagram of the model in the δ₀–α₀ plane, showing the separatrices S₁ and S₂ and Regions I-III.]
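To make the role of the invariant Q concrete, one can integrate the leading-order KT flow numerically. The sketch below is a minimal illustration using the standard lowest-order flow equations dx/dl = −y², dy/dl = −xy for generic couplings x, y (illustrative stand-ins, not the paper's two-loop β-functions); for this flow Q = x² − y² is exactly conserved, and the separatrices are the lines y = ±x, the analogue of the Q = 0 trajectory above.

    # Minimal sketch: leading-order Kosterlitz-Thouless RG flow for generic
    # couplings x, y. These are illustrative stand-ins for delta_0 and
    # alpha_0, NOT the paper's two-loop beta-functions.
    def kt_flow(x, y, dl=1e-4, lmax=5.0):
        """Euler-integrate dx/dl = -y^2, dy/dl = -x*y up to RG time lmax."""
        l = 0.0
        while l < lmax and abs(y) < 1.0:   # stop before couplings grow large
            x, y = x - y * y * dl, y - x * y * dl
            l += dl
        return x, y

    for x0, y0 in [(0.10, 0.05), (0.10, 0.10), (-0.10, 0.15)]:
        x1, y1 = kt_flow(x0, y0)
        q0, q1 = x0**2 - y0**2, x1**2 - y1**2
        print(f"Q = {q0:+.4f} -> {q1:+.4f}  (conserved up to integration error)")

Each trajectory is labelled by its value of Q, which the integration leaves (approximately) unchanged, mirroring the exact statement DQ = 0 above.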
In the neighbourhood of S₂, i.e. close to the CGN case, we have a corresponding small-Q expansion. It is a crucial observation that near the CGN line the correlation length exponent Ψ₁ is a smooth analytic function of Q; its expansion involves a nonperturbative constant Ψ₀, whose value we will calculate in Section 5. Now we can integrate (46) using the perturbative expansion (49), and for Q > 0 we obtain (59). Here the dots stand for higher order terms in the perturbative expansion (these are, in principle, calculable) and also for an unknown (nonperturbative) function of Q only. For small Q, (59) becomes (60), where the dots stand for terms of higher order in Q. They come from the higher perturbative terms of (59) and also from the nonperturbative function mentioned above. The point is that there are no terms singular in Q coming from either of these two sources. This is obvious for the perturbative terms, but it must also be true for the nonperturbative contributions, since otherwise Ψ₁ would be singular on the S₂ line. Requiring it to be nonsingular on S₂, we force Ψ₁ to diverge on S₁, which is therefore part of the critical surface of the phase diagram PD. Another line on which we know the correlation length must diverge is the δ₀ axis α₀ = 0, because this corresponds to a free, massless model. For Q < 0, it is convenient to parametrize Q in terms of the δ₀ value at which the RG trajectory intersects this axis. In other words, we have to express Q in terms of the value d that solves (61). (Note that in this parametrization |δ₀| ≥ |d|, because α₀² ≥ 0.) The perturbative solution of (61) is (62). Using this parametrization, the perturbative solution for Q < 0 is (63), where for d < 0 (64) is obtained by matching (63) to (56) for small Q, while (65) is formally true, since this is the only way to achieve that the correlation length diverges on the positive half of the δ₀ axis. (Without the infinite constant, ξ actually vanishes there.) Now we can discuss the phase diagram of our model. The entire Region I is critical. The massive phase consists of Regions II and III, and the critical surface bordering them is S₁ plus the negative part of the δ₀ axis. They are smoothly connected across S₂, which is the (bare) CGN model. The O(2) NLS model corresponds to the dashed curve of PD. In a MC experiment we approach the critical point c from the massive phase (Region III). We will denote the δ₀ coordinate of c by d₀. Because the RG trajectories run basically parallel to S₁, it is physically irrelevant at which point the critical surface is reached, and therefore the parameter d₀ is irrelevant. The continuum model will be the same for all points on S₁, including the origin. But the origin is the point where (coming along S₂) the continuum CGN model is defined! So our continuum theory is inevitably identical to the (massive part of the) SU(2)-invariant CGN model.
If we start from somewhere in the middle of Region II, we can define a massive continuum limit by approaching the negative half of the δ₀ axis. The intercept d is then relevant, and the continuum theory is the SG model with coupling related to d by (66). Returning to the O(2) model, the dashed trajectory can be parametrized in terms of the reduced coupling τ, and we have assumed that physical quantities are analytic in K (K_c = 1.1197(5) [13]). Then Q is also analytic in τ. From (60) we see that along the O(2) curve the correlation length acquires corrections whose dots stand for terms analytic in Q (and vanishing for Q = 0). In other words, for the O(2) model we obtain the famous KT formula (71), ξ ≃ C exp(b/√τ), where the constants C and b are not universal [1]; it shows that the phase transition is of infinite order in the reduced temperature. It is more important for us that (71) can be rewritten as the expansion (73), whose coefficients are given (if d₀ is sufficiently small) perturbatively in terms of a parameter u. Note that the leading 1/(log ξ)² term in (73) is universal, and only the subleading terms (depending on the value of u) are model-dependent. The susceptibility exponent Φ₁ can be studied similarly and can be written in an analogous form. The crucial observation is again that, for small Q, the (calculable) perturbative terms are analytic in Q, while the (not calculable) purely Q-dependent terms in Φ̃₁ must also be analytic in Q, since otherwise these singularities would also turn up on the CGN line S₂, where they must not. From (78) it follows that there are no (multiplicative) log corrections in the scaling relation for the susceptibility. The possibility of such multiplicative logarithmic corrections is discussed in [18].
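To see how slow this approach to the continuum is, the sketch below compares a 1/(ln ξ)² artifact with a conventional Symanzik-type 1/ξ² artifact; the unit coefficients are illustrative only, not the paper's constants.

    # Minimal sketch: logarithmic vs. power-like decay of lattice artifacts.
    # Unit coefficients are illustrative; they are not the paper's constants.
    import math

    for xi in (10.0, 100.0, 1000.0, 10000.0):
        log_art = 1.0 / math.log(xi) ** 2   # KT-type artifact, as in (73)
        pow_art = 1.0 / xi ** 2             # usual Symanzik-type artifact
        print(f"xi = {xi:7.0f}:  1/(ln xi)^2 = {log_art:.3e},  1/xi^2 = {pow_art:.3e}")

Even at ξ = 10⁴ the logarithmic artifact is still at the percent level, whereas the power-like artifact is already of order 10⁻⁸.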
Determination of the lattice artifacts
Recall that the RG invariant Q has a completely different meaning in Region III (which contains the massive phase of the XY model) and in Region II (where the usual massive SG model with β² < 8π can be defined). Indeed, in the positive-δ₀ part of Region III, close to S₁, the (positive) parameter Q merely measures the distance from the XY critical surface on which it vanishes, whereas in Region II Q is a relevant (negative) parameter related to the SG coupling β by (61) and (66). Our main assumption is that, in spite of this difference, physical quantities depend smoothly on Q in the vicinity of the separatrix S₂ connecting the two regions. More precisely, we will assume that close to S₂ any physical quantity U has the form U(Q) = U₀ + U₁Q + O(Q²). Here U₀ = U(0) is its value for the CGN model (and thus also in the continuum limit of the XY model). The first correction coefficient U₁ can be calculated from the SG model as follows. Using the identification (61) and its perturbative solution (62), together with (66) and (8), Q can be expressed in terms of the SG parameter ν. This means that if, in the SG model close to the CGN point ν = 0, a quantity has a small-ν expansion, then its first coefficient determines U₁; this is the content of (82). Translated into the language of lattice artifacts by (73), we arrive at (84): lattice artifacts typically go away very slowly, only as 1/(log ξ)².
On the other hand, the leading artifacts are universal and calculable. We apply this method first to the scattering phase shifts. Recall the SG model S-matrix (2) with (7). The three distinct S-matrix eigenvalues are expressed through the function k̃(ω) given by (10). We now expand the phase shifts as δ^(i) = δ^(i)₀ + ν δ^(i)₁ + · · · for i = 0, 1 and 2. Here δ^(i)₀ are the CGN phase shifts which, as remarked before, coincide with the n → 2 limit of the O(n) phase shifts. The first correction coefficients δ^(i)₁ can be obtained by a simple calculation, and the result can be used to obtain the leading lattice artifacts in the XY model through the relation (84).
Current-current 2-point function and free energy
Consider the 2-point function of the Noether current. Its Fourier transform I(p) is defined by

⟨J_µ(x) J_ν(y)⟩ = ∫ d²p/(2π)² e^{−ip(x−y)} [(p_µ p_ν − p² δ_µν)/p²] I(p).
where κ = Ω₀ − Ψ₀ (102) and the coupling λ is the solution of the corresponding implicit equation. The asymptotic expansions (100) and (101) are valid for λ ≪ 1, i.e. for p → ∞, but Q must also be small enough that the expansion (99) makes sense. It is by now standard how the nonperturbative constant κ can be calculated: for this it is necessary to consider the free energy in an external field, and we now turn to this calculation. We follow [19] and start from the modified Lagrangian, which corresponds to adding a term ihJ₂ to the Lagrangian density. The modified ground state energy must take a scaling form involving a dimensionless function F(h). In perturbation theory we obtain an expansion in which Ω̃(h, a) = f₁ (ln ha + Ω̃₀) (108) appears. For Q = 0, therefore, F(h) has a corresponding asymptotic expansion.
The burden of bites and stings management: Experience of an academic hospital in the Kingdom of Saudi Arabia
Purpose The main aim of this study is to estimate the economic burden and prevalence of bite and sting injuries in Saudi Arabia. Methods A retrospective, cross-sectional study was conducted at King Saud University Medical City (KSUMC) for all bite and sting cases presented to the Emergency Department (ED) between June 2015 and May 2019. Results A total of 1328 bite and sting cases were treated in the ED at KSUMC. There were 886 insect bite and sting cases, 376 animal bites, 22 human bites, 34 scorpion stings, and ten snakebites. Most cases were reported in April-June. Females accounted for 62% of the reported cases, and the mean age was 24 years. The total management cost of bite and sting cases during the study period was 3.4 million Saudi Riyal (SR). Spending was highest for the management of animal bites, at 1,681,920.76 SR, followed by the management of insect bites and stings, at 1,228,623.68 SR. Conclusion Bites and stings place a considerable health care burden on our society. Although the vast majority of cases were not associated with a severe life-threatening condition, many patients visited the ED, and their management was associated with high medical costs. Increased awareness of the hazards of animal-related injuries, especially during spring and summer when most cases take place, may lower their incidence and decrease ED visits.
Introduction
Bites and stings are a common source of injuries seen in the emergency department (ED). Substantial trauma, tissue damage, infection, allergies, rabies exposure, disability, psychological effects, and rarely death may result from bites and stings (Langley, 2009;Sinclair and Zhou, 1995;Christian et al., 2009).
Bites and stings continue to pose a major public health challenge, and their clinical sequelae can extend far beyond simple wound management (Moosavy et al., 2016). Identification of people bitten and stung remains challenging and incomplete (Patronek and Slavinski, 2009). Previous data indicate that most animal bites to humans in Saudi Arabia have involved snakes, dogs, cats, rodents, and foxes (Organization, 1992). Rabies is endemic in some animals in the Arabian Peninsula, and feral dogs were the primary cause of human rabies (Memish et al., 2015).
Insect bites and stings are common injuries, and many patients visit the ED complaining of allergic reactions to them. Insect stings are also an important cause of anaphylaxis (Moffitt, 2003). The samsum ant, Brachyponera (Pachycondyla) sennaarensis, found in various provinces of Saudi Arabia, poses a medical and social nuisance through its poisonous and severely painful sting, which causes anaphylactic shock in some cases. B. sennaarensis is a harmful insect in human settlements. Infestations of B. sennaarensis are immense in the spring and summer seasons, when the ants construct nests in moist places and in cracks of cemented structures, but infestation declines in winter (Al-Khalifa et al., 2015).
Bites and stings may lead to costly healthcare utilization, ranging from ED visits to hospitalization and death (Al-Sadoon and Jarrar, 2003; Abrahamian and Goldstein, 2011). Current epidemiological data on bite and sting injuries in Riyadh province are highly needed to inform public health interventions for their prevention.
Few studies have reviewed the epidemiology of bites and stings in different regions of Saudi Arabia, and these often focused on snake and scorpion cases (Jarrar and Al-Rowaily, 2008; Malik, 1995; Neale, 1990). Identification of the common stings and bites in our region, as well as the cost associated with them, is crucial for planning management and reducing the strain on health resources. However, to the best of our knowledge, no study has reviewed the epidemiology along with the total expenditure on the management of these injuries in the region. The primary objective of this study was to estimate the prevalence of bites and stings that lead to ED visits and hospitalization in Riyadh, Saudi Arabia. The secondary objective was to estimate the direct medical cost and health care resources associated with the management of bite and sting injuries. This study will help in understanding the epidemiology of bite and sting injuries and the associated costs.
Material and methods
A retrospective, cross-sectional, prevalence-based cost study was conducted at King Saud University Medical City (KSUMC) for all bite and sting cases presented to its ED between June 2015 and May 2019. KSUMC is a large tertiary care center with more than 1800 beds that provides health care services to a large population in Riyadh, the capital and largest city of Saudi Arabia, with a population of over 7 million. The patient population is composed mainly of Saudi citizens who are predominantly residents of Riyadh; the center also serves as a national referral center.
Data sources
All bite and sting cases were retrospectively identified from the electronic health care records (EHR) for the period 2015-2019 at KSUMC. The retrieved data included demographic information such as the patient's age and gender, and the type of bite or sting.
Severity was assessed based on the Poisoning Severity Score (PSS), a scoring system developed by the European Association of Poisons Centres and Clinical Toxicologists (EAPCCT) (Persson et al., 1998). The score ranges from 0 to 4 as follows: 0 for an asymptomatic patient, 1 for mild symptoms, 2 for prolonged symptoms, 3 for severe or life-threatening symptoms, and 4 for death (Persson et al., 1998). Additionally, all direct cost data related to each case were collected and calculated from the payer's perspective, and therefore only direct medical costs were included. The cost analysis was based on the patient's length of stay at the hospital plus the cost of the interventions provided to the patient, including medications and vaccinations. Costs were estimated based on all resources required for the management of bites and stings, including the ED visit, laboratory tests, medications, and admission. The estimated cost per day was extracted from the Business Center at KSUMC according to the patient's location (ED, ICU, or ward admission). Costs are presented in Saudi Riyals; however, in order to make international comparisons, we also converted the costs to USD.
The average total cost per patient was calculated for each type of bite and sting separately. Total direct medical cost per patient = (average number of visits × visit fees) + (average number of tests × fees for each test) + (number of medications prescribed in the course of treatment × fees of every unit of medicine) + (average number of hospitalization days × fees for each day of admission) + (average number of diagnostic services × fees for each diagnostic service) + (average number of other services × fees for each course of service). The average number of health services (visits, tests, diagnostic services) was calculated as the total number of services divided by the total number of patients receiving those services.
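As a minimal sketch, the per-patient formula above can be expressed as a small function; all fee values in the example are hypothetical illustrations, not figures from the KSUMC billing data.

    # Minimal sketch of the per-patient direct medical cost formula above.
    # All fee values below are hypothetical illustrations in Saudi Riyals.
    def total_direct_cost(avg_visits, visit_fee, avg_tests, test_fee,
                          n_medications, med_unit_fee, avg_admission_days,
                          day_fee, avg_diagnostics, diagnostic_fee,
                          avg_other, other_fee):
        return (avg_visits * visit_fee
                + avg_tests * test_fee
                + n_medications * med_unit_fee
                + avg_admission_days * day_fee
                + avg_diagnostics * diagnostic_fee
                + avg_other * other_fee)

    # Hypothetical example: one ED visit, two tests, three medications,
    # half a day of admission on average, one diagnostic service.
    cost = total_direct_cost(1, 500, 2, 150, 3, 80, 0.5, 1200, 1, 300, 0, 0)
    print(f"Estimated direct medical cost per patient: {cost:,.0f} SR")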
Study population
The study included all bite and sting cases presented to the KSUMC ED from 1 June 2015 to 31 May 2019. The types of bites and stings included in this study were: animal, human, snake, insect, and scorpion.
Statistical analysis
Descriptive statistics (frequencies and percentages) were used to summarize the categorical variables (sex, marital status, nationality, bite and sting types, severity score) and all costs. Means and standard deviations were calculated for continuous variables (age). All statistical analyses were conducted using the Statistical Analysis Software, version 9.2 (SAS® 9.2).
Ethical consideration
Before commencing the study, approval was granted by the Institutional Review Board (IRB) of the King Saud University Health Colleges as project E-17-2351. All the research activities were carried out in compliance with fundamental ethical principles and policies of the IRB. Confidentiality and privacy of all data were maintained throughout the study period.
Results
From June 2015 to May 2019, there were 1328 reported cases treated in the ED at KSUMC for injuries related to bites and stings. Table 1 shows the main characteristics of the cases, the frequency and percentage distributions of bites and stings, arrival time to the ED, and outcomes. Analysis by sex reveals that 823 (62%) of the reported cases were females. The mean age was 24 years, and most of the cases (48%) arrived at the ED in less than six hours. There were 886 insect bite and sting cases, which accounted for the highest number of cases in our study, followed by 376 animal bites and 34 scorpion stings. Moreover, there were a total of ten snakebites reported, and only one of the ten patients died. The severity of the cases was measured based on the PSS; the majority (94%) of bite and sting cases were mild, and only about 5% were considered severe. Snake and scorpion cases were among the most severe, with severe cases accounting for 20% and 7% of their respective totals (see Table 2).
The most common site of bites was the upper extremity (48.2%), followed by the lower extremity (41%) and the head (7.4%). Almost 96% of the cases were discharged after receiving appropriate treatment in the ED, and only 32 cases (2.4%) with severe complications were admitted to the hospital. Forty percent of snakebite cases were admitted to hospital, while only 3% of insect bite cases required hospitalization. The duration of hospitalization ranged from 1 to 23 days and averaged six days. The yearly incidence of bite and sting cases indicated an increase in the number of cases each year (Fig. 1). The majority (78%) of injuries occurred during the night, 16% occurred during the day, and 6% were not reported.
The seasonal percentage and incidence of cases demonstrated considerable variation throughout the year. Most cases took place during summertime (May-July) and again during the winter (Oct-Dec). The seasonal distribution of the cases is represented in Fig. 2. Most cases were reported in May (165 cases), followed by April (150 cases) and then June (144 cases), while February (52 cases) had the fewest reported cases.
The average cost per case was highest for snake bites (26,460 SR), followed by human bites (5,638 SR) and scorpion stings (4,615 SR), while insect stings were the lowest at 1,387 SR per case (Table 3). Costs were significantly higher for severe cases: severe snake bite cases were associated with the highest cost (46,143.00 SR per case), followed by severe animal bite cases (23,832.16 SR) and severe scorpion cases, which cost around 17,638.19 SR per case. Diagnostic tests and medications accounted for the highest resource utilization in bite and sting patients.
The total cost of managing bite and sting cases during the four years was 3,429,629 SR (903,063 USD): 1,681,921 SR for the management of animal bites, 1,228,624 SR for insect stings, and 238,140 SR for snake bites. The cost analysis is presented in detail in Table 3.
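As a consistency check on these figures, dividing the reported totals by the case counts reproduces the reported insect average of 1,387 SR per case; the minimal sketch below uses only numbers stated above, and the resulting snake average differs from the 26,460 SR quoted earlier, suggesting a different basis of calculation in Table 3.

    # Minimal sketch: per-case averages from the totals and counts reported
    # in the text above (all figures in Saudi Riyals).
    totals_sr = {"animal": 1_681_921, "insect": 1_228_624, "snake": 238_140}
    n_cases   = {"animal": 376,       "insect": 886,       "snake": 10}

    for kind, total in totals_sr.items():
        print(f"{kind}: {total / n_cases[kind]:,.0f} SR per case")

    # Implied exchange rate from the reported grand totals:
    print(f"implied rate: {3_429_629 / 903_063:.2f} SR per USD")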
Discussion
Bite and sting cases indicate a high burden of disease, with a total of 1,328 cases occurring during the study period. The mean age was 24, which is similar to what has been reported in other studies (Al-Sadoon et al., 2017; Jarrar and Al-Rowaily, 2008; Al-Sadoon and Jarrar, 2003). Overall, in this study, 62% of the patients were females. Previous international studies demonstrated that males were more likely to be victims, as they are more likely to be outdoors (Anil et al., 2010; Graham and McCurdy, 2004). However, this difference is probably due to the high number of insect bite and sting cases reported in this study, which occurred mostly in females. These were superficial stings that caused moderate to severe itching and skin manifestations, prompting patients to seek medical reassurance. Saudi culture embraces socializing while sitting on the floor and in yards; although the insects were usually not witnessed, they were most likely ants.
Bites and stings were commonly encountered during the night. This is probably because many patients stay up late at night, and sleeping outdoors in the desert is a common activity in Saudi Arabia; also, snakes, scorpions, and insects are more active at night due to the hot climate (Agrawal et al., 2001). On the other hand, the majority of cases in this study presented to the ED within six hours of the injury. Few cases presented after 12 h, which can worsen the condition and necessitate more intensive and costly treatment. This may be due to a lack of awareness about the need for early management; further study is needed to understand such behaviour. The prevalence of insect bites in this study was very high (66.6%) compared to the other types of bites and stings. This finding is in agreement with studies conducted in the US, which reported that insect bites were the most common animal-related injuries (Nogalski et al., 2007). The majority of insect cases involved complaints of allergies. Ant allergy is a rare clinical problem involving reactions ranging from local to systemic, including life-threatening anaphylaxis. Black (samsum) ants are considered a health hazard in many parts of Saudi Arabia and the world, and cases with a history of recurrent anaphylaxis following black (samsum) ant stings in Saudi Arabia have been reported in the literature (Al-Shahwan et al., 2006). The morphology and ultrastructure of the venom gland of queens and workers of the samsum ant Brachyponera sennaarensis, which is known for its very painful sting, have also been described (Billen and Al-Khalifa, 2018).
Furthermore, in our study the occurrence of cat bites (281 cases, 75% of animal bites) was much higher than that of dog bites (41 cases, 11%). This finding is inconsistent with other reports, which have shown dog bite incidence of up to 90% of all animal bite cases. This can be attributed to the cultural avoidance of keeping dogs, leaving cats as the most popular choice of home pet. However, cat bites can be severe and involve infection more often than dog bites, which underlines how necessary early treatment is to reduce the likelihood of infection spreading to deeper structures such as bone and joints (Griego et al., 1995; Benson et al., 2006; Quirk, 2012). Regarding seasonal variation, bite and sting injuries were more prevalent during the spring and summer months; similar findings have been reported in other studies (Kama et al., 2019). This is because people tend to go outdoors for camping and picnicking in the mountains and desert during this period (Mosbech and Bay-Nielsen, 1991).
The majority of bite and sting cases in this study were mild, and patients were treated and released from the ED. Although most cases were mild, patients still went to the ED and sought medical help, which was associated with high medical costs. This is consistent with the literature, where animal-related injuries required hospitalization in only 1.8-2.7% of cases (Nogalski et al., 2007; Langley et al., 2014). Hospital stay ranged from 1 to 23 days; this is lower than reported in other studies, where the duration of hospitalization ranged from 2 to 68 days (Nogalski et al., 2007; Kama et al., 2019). This is mainly because most of the cases were mild and patients arrived at the hospital early.
There are more than 50 species of snakes in Saudi Arabia, of which about 20% are venomous (Al-Sadoon, 2015). In this study, the majority of snakebite patients recovered, with only one patient dying of complications. This is mainly because these patients arrived within a reasonable time frame and received the appropriate medical intervention and supportive care, in addition to the availability of polyvalent antivenom targeting the local venomous snakes. This is in agreement with several studies that demonstrate a low mortality rate (0.3%) from snake bites in Saudi Arabia (Al-Sadoon, 2015; Al Durihim et al., 2010). Due to the nature of this study, we do not have detailed information about the snake species involved in each case, so it is hard to conclude the actual cause of death. Still, we noted that the patient who died arrived more than 6 h after the bite, which is relatively late compared to the other cases.
The majority of bite and sting injuries occurred in the upper and lower extremities, which is similar to other studies that found most cases were on the hands or arms, with approximately 27.5% on the legs (Langley et al., 2014). As these areas are often left uncovered, increased awareness of the importance of wearing long-sleeved shirts, full-length pants, and shoes or boots could prevent many bite and sting cases. Applying insect repellent may also prevent many insect bites.
Bite and sting cases impose a high economic burden: the total cost of managing them during the study period was 3,429,628 SR. Snake bite cases were associated with the highest costs, ranging from 3,000 SR per case for mild cases to 46,143.00 SR per case for severe cases, followed by severe animal bites (23,832.16 SR) and severe scorpion cases, which cost around 17,638.19 SR per case. A US-based study estimated that the cost of treating snake bites ranged from $242 to $1,813,253, with an average of $86,333 per case. On the other hand, the cost associated with the management of scorpion stings ranged from $905 to $253,511, with an average of $31,322 per case. For wasps and bees, the cost ranged from $117 to $746,799, with an average of $20,598 (Forrester et al., 2018). Another study, conducted in Iran, demonstrated that the cost of treating snake bites was $2,104 (7,890 SR) per case, while scorpion stings cost $1,192 (4,470 SR) (Mashhadi et al., 2017). There are additional expenses for bite and sting injuries that were not included in this study, such as non-medical and indirect medical costs like lost earnings due to work absences, as we did not have the necessary data to calculate such costs. In the USA, it has been estimated that indirect medical costs in the form of lost productivity amount to $5,674,230,000 annually (Langley et al., 2014). However, as many of the cases in our study were not severe, the impact of these costs could be low.
Our study indicates that diagnostic tests and medications accounted for a large share of the health care cost of treating bites and stings. This result is in line with the literature, which demonstrates that the treatment of snake bites and scorpion stings is effective but expensive. Therefore, considering the value of all medical resources used in the treatment and diagnosis of bites and stings when developing a treatment protocol would result in a more appropriate and efficient cost-saving strategy. This cost can be further reduced by establishing awareness programs about bites and stings; emphasizing the importance of wearing shoes and sleeping indoors would be effective preventive steps. The findings of this study would be of use for decision making in planning specific interventions to mitigate such cases. National epidemiological, social, and economic data are required to explore the burden of bites and stings on the health care system in Saudi Arabia, which will help decision-makers prioritize health resources.
Limitations
This was a retrospective study, which is associated with some inherent biases in such a study design. The data were collected from only one academic institution, covering those who arrived at the ED with bite and sting injuries. Findings of this study may not reflect treatment patterns and resource utilization at other institutions, which may use different guidelines in treating bite and sting cases.
In this study, we did not include cases that went untreated or were self-treated, which may underestimate the healthcare burden associated with bites and stings. A national survey would provide a better understanding of the problem and could help generate a more thorough discussion of these issues.
Another limitation is that the majority of insect bite cases were classified as due to an unspecified insect; in many cases patients were bitten but did not see the insect, or this may reflect limited documentation of the source of insect bites in our data. This information is essential, especially in cases of anaphylaxis. Despite these limitations, this study provides a better understanding of the magnitude and economic burden of bites and stings.
Conclusion
Bites and stings represent a common burden in Saudi Arabia. Although the majority of cases were not serious, some required admission to hospital wards and intensive care units. Findings from this study may guide decision-makers on the importance of providing public education campaigns about how to deal with animal-related injuries and when to seek emergency care, focused on the people most frequently injured, and on developing policies to mitigate such injuries. This would reduce the number of injuries treated in EDs and accordingly reduce the hospital costs of treating and managing injuries resulting from bites or stings.
Integrated Development Plans without Development Indicators: Results from Capricorn District Municipalities in South Africa
The use of development indicators in the Integrated Development Planning (IDP) process of municipalities is not only a legislative requirement but also ensures that municipalities effectively assess the impact of their development programmes and projects on the objectives of sustainable development. Central to the constitutional mandate is the Municipal Systems Act of 2000, which details the general matters pertaining to IDPs. Development indicators provide municipalities, as the main implementing agencies for government policies and programmes, with a framework to present aggregated data on human development and provide evidence-based pointers to the evolution of society. A properly constructed set of indicators may not only suggest the planning measures which should be employed but also throw light on a better formulation of targets, goals and objectives of planning. The study combined secondary research with interviews with gatekeepers and a questionnaire administered to other stakeholders. This paper explores the limited use of these development indicators in the IDPs of local municipalities. It shows that municipalities have little understanding of development indicators and of how they can help in addressing challenges experienced by communities. The development of a compendium of development indicators by local government, and the inclusion of those indicators in the strategic documents of municipalities, must be mandatory. The major contribution of this article is the model for data collection and processing for municipalities which it posits, consisting of a step-by-step procedure from data collection, through the generation of single and composite indicators, to the assessment and interpretation of data and results, including monitoring and evaluation. The efficiency of the use of development indicators for planning may not be fully realized because of the manner in which municipal planners have been found to employ indicators for their own purposes.
Introduction
Service delivery in South Africa is of crucial importance because of the central role it can play in poverty alleviation. According to Krige (1998), South African local authorities were historically not economically viable, and the level of service delivery, particularly in townships, was inexcusable. The introduction of integrated development planning by the Department of Cooperative Governance, Human Settlements and Traditional Affairs (CoGHSTA) was an attempt to improve the planning process and enhance service delivery at the municipal level. The Integrated Development Plan (IDP) is at the core of South Africa's post-apartheid municipal planning system and is regarded as a key instrument in an evolving framework of intergovernmental planning and coordination. The introduction of the IDP was a response to challenges facing the post-apartheid government and the need to speed up service delivery for a better life for all. It is primarily a plan concerned with directing and coordinating the activities of an elected municipal authority.
The 1998 White Paper on Local Government (hereafter, White Paper) identified the IDP as a key tool of local government which is concerned with promoting the economic and social development of communities. Linked to the IDP is a broader package of instruments which include performance management tools, participatory processes and propositions on service delivery partnerships. The White Paper emphasizes the role of the IDP in providing a long-term vision for a municipality, setting out the priorities of an elected council, linking and coordinating sectoral plans and strategies, aligning financial and human resources with implementation needs, strengthening the focus on environmental sustainability and providing the basis for annual and medium-term budgeting. The purpose of Integrated Development Planning is to provide a framework within which municipalities can be coordinated, based on the understanding of their own situation (RSA, 1998).
Municipalities across the country prepare IDPs which are supposed to improve the living conditions of the people. However, the planning process does not identify measurable indicators, thus making it difficult if not impossible to determine the success levels of an IDP and its effectiveness. It is therefore against this background that this article investigates measurable indicators which should exist within the IDP as baseline and benchmark indicators in order to assess progress being made by the municipality and obviate service delivery protests, which have since become the order of the day. In the context of this study, development indicators refer to information that allows organisations to measure progress in an effort to eradicate poverty.
The aim of this article is to provide information on the role of development indicators in ensuring the effectiveness of IDPs, with the overarching objective of investigating the existence of, and gaps within, current development indicators in four local municipalities in the Capricorn District Municipality, namely Aganang, Molemole, Polokwane and Lepelle-Nkumpi, in Limpopo, South Africa.
Literature Review
Krige (1998) highlighted that the conditions and the level of service delivery in townships were degrading and municipalities did not have adequate financial muscle. CoGHSTA introduced the integrated development planning process in its endeavour to improve planning at the local level. The discourse on governance and planning internationally is centred on integration, performance management and participation (RSA, 1998).
Local government is the key agent in transforming and democratizing development in South Africa (Parnell et al., 2002). As Rauch (2003) showed, integrated development planning should be used as the vehicle to mandate grassroots development and public participation. Physically, local government is the closest sphere to the community; therefore it is expected that opportunities to facilitate development and engage directly with local people should be created by municipalities rather than by other spheres of government (Ceasar and Theron, 1999; Chapter 7 of the Constitution of the Republic of South Africa, 1996). Chapter 5 of the Municipal Systems Act (MSA) (2000) consists of four parts, detailing the general matters pertaining to IDPs; the contents of IDPs; the process for planning, drafting, adopting and reviewing IDPs; and the miscellaneous features of IDPs, which include effecting the IDP and the status of the IDP. Importantly, part two of Chapter 5 of the MSA (2000) outlines several core components of IDPs which provide a yardstick for municipal integrated planning.
South African local authorities use integrated development planning as a method to plan future development in their areas, principally because rural areas were left undeveloped and largely un-serviced. The IDP was a contextual response to challenges facing the post-apartheid government, in particular the need to get a new system of local government working. The National Department of Cooperative Governance (DCoG) adopted good practices from other countries to improve governance at the local government level. Municipal planning for service rendering in South Africa is a compulsory process for all municipalities in terms of section 25 of the Municipal Systems Act, 2000. The main objective of this planning initiative is to ensure that current service delivery challenges are met by examining relevant modern systems and joint venture approaches, so that municipalities perform their functions diligently and in a way that is developmental and fiscally responsive.
According to the IDP Guide-Pack (DPLG, 1999/2004), integrated development planning provides a process through which municipalities prepare strategic development plans for a five-year period. It is the principal strategic planning instrument which guides and informs all planning, budgeting, management and decision-making in a municipality. Furthermore, it is a legislative requirement that the IDP be seen as the primary blanket plan which takes precedence over all other plans guiding municipal development (Naude, 2002). Harrison (2003) pointed out that the IDP was a contextual response to challenges facing the post-apartheid government, in particular the need to get a new system of local government working, but the nature and form of the IDP were strongly circumscribed by the international discourse and practice which prevailed at the time of its introduction and early development. Phago (2006) argues that although developing an IDP is a legislative requirement as well as standard practice for municipalities in South Africa, prescriptions on its content or specifications are not provided. Each municipality is responsible and accountable for its own planning process (Craythorne, 2003).
As such, a municipal IDP should be a clear manifestation of prioritized communal needs that require urgent attention from local government. As a style of strategic planning that departs from the master planning models of the past, the preparation of IDPs represents a more flexible model for responding to the many challenges that face local authorities. IDPs are local planning processes intended to give strategic direction to the work of municipalities (their programmes, projects and budgets) and to activities undertaken by provincial and national government departments operating in their areas. It is an approach to planning that involves the entire municipality and its citizens in finding the best solutions to achieve good long-term development. It represents an opportunity to forge a stronger relationship between planning and implementation which, it can be argued, planners have generally been weak at achieving in the past.
As stated in the IDP for the Capricorn District Municipality (2011), the IDP is a comprehensive, integrated and multifaceted plan that:
• links, integrates and co-ordinates the functions and strategies of a municipality;
• aligns the resources of a municipality with the agreed-upon objectives and outcomes;
• forms the overall strategic plan for the municipality; and
• is a mechanism for participation and democratization of local government.
The core components of development planning are in compliance with the constitutional mandates of local government, which are to ensure the provision of services to communities, to strengthen democratic values at local municipality level and to encourage the involvement of communities, including marginalized groups (RSA, 1996). The IDP as a strategic tool guides the formulation, implementation and execution of strategies, including the effective use of scarce resources, by focusing on the most important needs of local communities to speed up service delivery to the poorest of the poor. It should also ensure the empowerment of local communities in local economic development, as outlined in section 152(1) of the Constitution of the Republic of South Africa, 1996 (RSA, 1996). Todaro et al. (2009) posit that the planning process can be described as an exercise in which a government first chooses social objectives, then sets various targets and finally organizes a framework for implementing, coordinating and monitoring a development plan. They rightly argued that the economic value of a development plan depends to a great extent on the quality and reliability of the statistical data on which it is based. When these data are weak, unreliable or non-existent, the accuracy and internal consistency of economy-wide quantitative plans are greatly diminished. Thus, IDPs are designed to act as a vehicle to facilitate development within municipal areas. This has also created expectations from local communities that the state has to provide employment. This lack of understanding of what the IDP seeks to achieve has not allowed communities to exploit the available resources to create self-employment and become self-reliant. Marais, Human and Botes (2008) argued that IDPs fail to achieve their objectives precisely because they do not use development indicators as a basis for the strategic decisions made when monitoring their programmes or projects. Measurable indicators are not identified during the IDP process, thus making it difficult if not impossible to determine the level of success of the IDPs. Their absence, and the impact this has on the effectiveness of the IDPs, needs thorough analysis. Secondly, Chapter 4 of the MSA of 2000 requires that municipalities, in preparing and reviewing their IDPs, should encourage participation, and it further indicates clearly how the development of community participation should unfold. A municipality must develop a culture of municipal governance that complements formal representative government with a system of participatory governance. The constitutional mandate of local government is to ensure the involvement of communities in the IDP process (RSA, 1996). Marais et al. (2008) pointed out that the use of development indicators has shifted from focusing on economic indicators to indicators attempting to measure sustainable development. It was during the 1990s that the United Nations Development Programme (UNDP) developed the Human Development Index (HDI) to measure the average achievements of a country in
three basic dimensions of human development: a long and healthy life, as measured by life expectancy at birth; knowledge, as measured by the adult literacy rate and the combined primary, secondary and tertiary gross enrolment ratio; and a decent standard of living, as measured by GDP per capita (PPP US$) (Todaro et al., 2009). Other indicators were also developed: the Gender Equality Index (GEI), which measures the inequalities in human development attainments between females and males; the Human Poverty Index (HPI), which captures deprivation in three dimensions of human development, namely economic, education and health; GDP, which directly captures economic attainments and hence the level of well-being of individuals; and the Gini coefficient, which is normally used to measure inequality in wealth. The high value of the Gini coefficient (0.69) in South Africa illustrates the skewed development the country finds itself in.
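To make the Gini coefficient concrete, the sketch below computes it from a list of incomes using the standard sorted-values formula. The sample incomes are hypothetical, and Python is used purely for illustration; the paper itself contains no code.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the mean-absolute-difference definition,
    computed with the equivalent closed form over sorted values:
    G = (2 * sum_i i*x_i) / (n * sum_i x_i) - (n + 1) / n, i = 1..n."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float((2.0 * np.sum(i * x)) / (n * x.sum()) - (n + 1.0) / n)

# Hypothetical household incomes for one ward (illustrative only)
incomes = [500, 800, 1200, 3000, 15000, 42000]
print(f"Gini = {gini(incomes):.2f}")  # higher values indicate greater inequality
```

Applied to actual household survey data, a value near 0.69 would reproduce the inequality level cited above for South Africa.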
The use of development indicators in planning is not only required by legislation (in terms of the MSA, 2000, among others), but also ensures accountability by decision-makers when development indicators are measured (Mukherjee, 1981; Parnell and Poyser, 2002; Marais et al., 2008). The level of development in municipalities is informed by development indicators, which guide municipalities on where scarce resources should be allocated. However, it has to be stressed that any indicator is only as good as the data upon which it is built. The strengths and weaknesses of indicators lie in their selection, which facilitates decision-making but also opens the door to data manipulation. Data sets can be of poor or good quality, and there may be gaps. Indicators are needed to increase focus on development and to assist decision-makers at all levels to adopt sound national sustainable development policies.
The Performance Monitoring and Evaluation (PME) unit (RSA, 2007) points out that development indicators provide a framework to present aggregate data on human development and provide evidence-based pointers to the evolution of society. Indicators are seen as tools for guiding public policy and programmes towards the development goals of the society, and at the same time they provide criteria to evaluate the process of social change. The PME unit (RSA, 2007) considers indicators as markers that help define the milestones in a journey of social change. An indicator can be compared to a road sign which shows whether you are on the right road, how far you have travelled and how far you have to travel to reach your destination.
Municipalities in South Africa are the main implementation agencies for government policies and programmes in the country, and their developmental obligations have been clearly spelt out in various policy documents (SA-DPLG 2001a). CoGTA has provided municipalities with extensive prescriptions and guidelines to implement performance assessments as part of their statutory obligations regarding integrated development planning in the local government sphere (DPLG 2001a, 2001b, 2003). CoGTA's Municipal Planning and Performance Management Regulations (2001) require that a "municipality must set key performance indicators, including input, output and impact indicators, in respect of each of the development priorities and objectives" that it must specify in terms of the Local Government's Municipal Systems Act, 2000 (SA-DPLG 2001c).
According to Marais et al. (2008), baseline indicators are needed to develop targets in order to measure the progress made towards achieving the set goals. Development indicators have three functions: to monitor change; to measure social, economic and environmental welfare; and, lastly, to provide comparisons based on targets, benchmarks or performance in the past (Schwabe, 2002). Indicators measure the impact of development and intervention programmes as well as the performance of implementing agencies and of government. In terms of monitoring progress, indicators perform two major functions. Firstly, they describe a baseline situation of the various components of the development process. Secondly, they measure change in the baseline information or data within a temporal and spatial context (Statistics South Africa, Framework for Policy, Information and Planning, 2006). It is necessary to pre-define change in the desired outcomes by setting goals for a specific period of time as a way to increase indicator effectiveness (Achkbache et al., 2001).
Challenges and Problems with Regard to Development Indicators
There are serious challenges with regard to development indicators which relate to their quality assurance as well as their validity, relevance, measurability, efficiency, simplicity, availability and representativeness. Indicators are critical for policy and decision-making, and their relevance is important (IISD, 1998; OECD, 1999). Indicators are mostly dependent on quantitative data, and since collecting information is costly, the collection of relevant data is essential: the analysis of unreliable or biased data could result in seriously distorted analytical and policy conclusions (Parnell and Poyser, 2002). The coherence of statistical information, reflecting the degree to which it can be successfully brought together with other statistical information within a broad analytical framework, can be a daunting task for a developing country like South Africa. The use of standardized concepts, classifications and target populations promotes coherence, as does the use of a common methodology across surveys. Cross-checking and validating data on the same information from different statistical agencies will minimize the challenges of bias.
Complications with the use of indicators include the fact that it is difficult to develop indicators that are applicable to diverse contexts, that there are many ideological and other normative obstacles to be overcome, and that the validity of indicators to measure exactly what is intended to be measured is not always accepted across different schools of thought on this matter (UNDP 1997; UN-CSD 2001; World Bank 2001). Cloete (2004) argued that there is a general lack of sufficient, reliable data to use for measurement purposes, as well as a lack of appropriate information management and assessment systems to record, manipulate and convert the required data sets into the desired indicator formats. In general, there is also a high risk of unnecessary duplication when determining measuring instruments because of high degrees of ignorance about what is already available and what is not.
Materials and Method
The research design used for this article was qualitative in nature; a structured questionnaire was administered as the data-collection tool. IDP documents of the sampled municipalities were also obtained and analysed.
The population of the study was the four local municipalities and the Capricorn District Municipality, each of which has its own IDP. The local municipalities comprise rural and semi-urban municipalities. A sample is a group of elements drawn from the population, considered to be representative of the population, which is studied in order to acquire some knowledge about the entire population; sampling is therefore the technique by which a sample is drawn from the population (Bless et al., 2000). The study used a non-probability sampling method in the form of purposive sampling. De Vos et al. (2004) indicate that this type of sample is based entirely on the judgement of the researcher, because the sample is composed of the elements that contain the most characteristic, representative or typical attributes of a population. A sample of four municipalities was obtained, and within them the interviews focused on the mayor or the speaker of the selected municipality; the IDP manager and the municipal IDP official within the selected municipalities, who are directly involved with the development of the IDP process; a traditional authority representative from the community within the municipality; a group of ten (10) youths, who might have a different perspective; as well as groups of ten (10) people from Community-Based Organizations (CBOs) such as business and women's groups. Qualitative analysis is a non-numerical examination and interpretation of observation, for the purpose of discovering underlying meanings and patterns of relationships (Babbie & Mouton, 2006).
Data were collected, validated and analysed using a thematic approach: different data were grouped together according to the issues picked up during data collection.
Results and Discussion
Findings from the assessment report of municipal IDPs: The Member of the Executive Committee's (MEC's) 2011/12 IDP assessment report revealed that all municipalities within Capricorn district demonstrated compliance with the legal framework as prescribed by government by adopting their IDPs for the 2010/11 financial year. The credibility of these IDP documents was then rated by the MEC for CoGHSTA after assessment, and the results showed that only two local municipalities (Aganang and Lepelle-Nkumpi) within Capricorn district received a high rating, implying that their IDPs were credible. This constitutes only 40% of the entire district, excluding the district municipality. Capricorn District Municipality also received a high rating. The three remaining municipalities received medium ratings.
Existence of development indicators: The use of development indicators is mandatory in the IDP, as they ensure the use of measurement for policy and decision-making. Indicators ensure that the deployment of scarce resources is channelled to areas of dire need, as alluded to in the literature. The analysis of the IDP documents of the Capricorn district family of municipalities reveals that some indicators do exist, albeit only single indicators. Single indicators consist of a single variable. It emerged from the interviews, however, that officials were using these indicators without any clear understanding of how they should address issues of development in their municipalities. There are certain drawbacks attached to the use of these indicators, and these should be taken into account when considering which ones to use. The quality of the data produced by different agencies differs due to varying methods of data collection and different concepts and standards, and a single indicator cannot capture all the important aspects of development. No matter how difficult this may be, it is imperative that focus be maintained on measuring social progress or improvements in the quality of life, for the purpose of proper planning (IISD, 1998; OECD, 1999).
Findings from the analysis of municipal IDPs: Planning is not complete without the effective utilization of indicators. The use of indicators in the context of planning induces a source of social change. It has, however, been observed that data were presented in a fragmented manner across all the IDP documents of the municipalities under study. All the IDP documents of the five local municipalities and the one district municipality were analysed regarding which indicators they use and also their horizontal and vertical alignment with neighbouring municipalities, provincial government indicators, the Compendium of National Indicators and the Millennium Development Goals (MDGs) indicators. Municipalities do not have a compendium of development indicators common across all municipalities in the district. A total of 72 indicators were identified across the municipalities within Capricorn District, such as the proportion of people in a household without a ventilated pit latrine or flush toilet, people living in a shack, people aged 15-65 who are unemployed, and so forth.
In terms of indicator usage, Capricorn District Municipality (CDM) used more indicators than its local municipalities. Molemole Municipality (50%) used exactly half of the total indicators and is the highest among the local municipalities in terms of indicator usage within the Capricorn District, followed by Polokwane Municipality with 47%. Blouberg used the fewest (22%) of these indicators. Most of the indicators in the IDPs are target indicators without baseline indicators, making it difficult to measure whether there was indeed progress. Access to a range of different data sets, as alluded to by Boyne (2003), makes it possible to compare performance change, as measured by these indicators, and services delivered. Whether these indicators, either singly or in aggregate, give a reliable measure of improvement is of course a different matter.
Gaps within the current indicators: All municipalities have challenges in using target indicators without baseline indicators. The development of targets depends solely on the availability and existence of baseline indicators, yet some municipalities provided target indicators without baseline indicators in place. For example, with regard to HIV infection rates, some municipalities only indicated that the infection rate should be reduced by 20%, a target which is neither measurable nor attainable in the absence of baseline information. All indicators presented in the IDP documents were single indicators, with no attempt being made to include composite indicators such as the Human Poverty Index (HPI). It should be acknowledged that a single indicator cannot cover all the important aspects of development. Composite indicators can also be too abstract and pose a serious challenge when drawing comparisons between individuals or households, and they have limitations in revealing adverse gender or race disparities in social progress (May et al., 2000). A case in point is the Gini coefficient, which measures inequality in a population regarding a specific value but does not measure disparities in asset distribution and ownership; it also assesses income only and does not show gender. Development indicators form the basis of integrated development planning, and their inadequate usage in the IDPs has serious implications for development planning.
Time-series data: The other observation was that not all of the municipalities under study were consistently using more than one data point. There are instances where planners in these municipalities used two to three variables. This could be a deliberate effort by the IDP managers to conceal those areas in which they are not performing well. A properly constructed set of indicators may not only suggest the planning measures which should be employed but also throw light on a better formulation of targets, goals and even the objective of planning (Mukherjee, 1981). The efficiency of the use of indicators for planning may not be fully realized because of the manner in which IDP managers are often found to employ indicators to serve their own machinations.
Benchmarking municipal performance: Benchmarking is the process of identifying "best practice" in relation to both products and the processes by which those products are created and delivered. It involves looking outward to examine how others achieve their performance levels and to understand the processes they use. The sampled municipalities in Capricorn district generally do not benchmark their performance: Blouberg had at least benchmarked on one variable, the demographic profile, while the other four municipalities never attempted to benchmark at all. This could be seen as an acknowledgement of complacency by municipalities, since no effort was being made to be competitive in the way they conduct their business.
Alignment of municipal IDPs with the Provincial Strategy: Firstly, Limpopo's five-year strategy (2010 to 2015), the Limpopo Employment, Growth and Development Plan (LEGDP), identified mining, agriculture, tourism and manufacturing as key drivers of the economy in the province, but none of these sectors finds resonance in any of the strategic plans of these municipalities. Municipalities such as Aganang, Molemole and Blouberg have vast areas for farming and could at least have identified agriculture as a key driver of their economy and included some indicators related to the sector. Secondly, given the rural and semi-urban nature of these municipalities, they are without any doubt affected by brown environmental problems such as smelling pit latrines, exposure to polluted air caused by the paraffin stoves they use for cooking, and the burning of disposed refuse; but ironically, no indicators and strategies were developed to deal with these challenges.
Findings from the interviews on development indicators used in planning:
The development of an IDP document is the responsibility of the IDP managers and other officials in the IDP office as well as some councillors within a municipality. Although the municipal manager is accountable, the IDP manager remains the custodian of the document. The CBOs, traditional authorities and the youth were not very clear about the legislative frameworks; they only knew that they have to form, at some point, part of the IDP process. How indicators should be used in the process was foreign to them.
All municipalities in Capricorn district comply with the legislative framework in terms of adopting their IDPs and involving communities in the IDP process. It has also been established that some municipalities are experiencing challenges with regard to development planning. Firstly, there is limited knowledge and skill amongst officials, both in utilizing development indicators in planning and in analysing the information. Secondly, there is a lack or unavailability of recent disaggregated, lower-level data to populate municipal indicator matrices. Municipalities' reliance on free census data, which have a five-year cycle, could be an impediment to their indicator usage, since indicators from other data sources or statistical agencies come at an exorbitant price. National statistics from sector departments such as health, education, agriculture, and so forth, are available at local level but are often ignored. Economic indicators such as the Gross Geographic Product (GGP) can only be disaggregated at the level of a province by Stats SA, and therefore municipalities acquire those indicators from other statistical agencies at a cost.
Benchmark indicators measure project progress towards development objectives and result in more meaningful project monitoring and evaluation by municipalities. There was no attempt by municipalities to benchmark their achievements horizontally with those of neighbouring municipalities or municipalities in the same category on almost all indicators, or vertically with provincial and national averages on some indicators. Kiregyera (2005) argues that the consequences of this include poor issue identification, policy analysis and design; uninformed and occasionally costly decision-making; and the inability to properly monitor the implementation of policies, projects and programmes, as well as the inability to evaluate their success. Little use was made of baseline indicators to measure the effectiveness and efficiency of programmes or projects.
There is a general lack of sufficient, reliable data to use for measurement purposes, as well as a lack of appropriate information management and assessment systems to record, manipulate and convert the required data sets into the desired indicator formats. Where data exist, their integrity is frequently suspect. The government should begin to promote the synchronization of data across sectors and levels of government; as Kiregyera (2005) argues, this will assist the government to focus on performance and on reporting the achievement of outputs, outcomes and impact, using information to improve decision-making and steer country-led development processes towards clearly defined development goals.
Case by Case Analysis of Service Provision by Municipality
Aganang Municipality: This municipality is 100% rural without a proper revenue base. During the interviews, the different respondents within the Aganang municipality almost all answered in the same way with regard to the provision of basic services such as access to water, electricity, sanitation, refuse removal and free basic services. They expressed satisfaction with a number of services provided by the municipality. The youth could not respond adequately to questions on the number of people who have access to social grants, but they agreed in principle on their existence. Traditional authorities hold profiles of household members and are as a result closely involved in activities within their authority.
When the IDP manager was asked what should be done to ensure the effectiveness of the IDP in the provision of services, he opined, "the municipality should establish a centre for development programmes. It is this centre where capacity building will take place". This echoed the sentiments of the traditional authority representative as well as the youth, who argued that there should be funds set aside every five years to capacitate community members. The IDP manager, a municipal official dealing with IDP documentation and the mayor suggested that indicators should be in line with Key Performance Areas (KPAs); in other words, there should be indicators for every KPA, and these should find expression in the Service Delivery and Budget Implementation Plan (SDBIP). Most respondents, particularly community members (the traditional authority and the youth), blamed the inadequacy of resources for hindering development in their areas. The mayor spoke emphatically about the monitoring and evaluation systems within the municipality as not being good enough to keep a close watch on the projects and programmes being implemented.
Responses from the community members indicate their level of satisfaction with the provision of basic services in the Aganang Municipality. Many of the respondents were ambivalent (neither satisfied nor dissatisfied) about the provision of water, compared to a few who were satisfied. Only the IDP manager and the mayor were satisfied with the services, reflecting an element of bias, as they are the bona fide custodians of the IDP. The mayor acknowledged that there is still a lot to be done when it comes to the provision of water. The communities were very satisfied with the provision of electricity and, to some extent, with social grants. However, the mayor, the IDP manager, a municipal official and the youths were not satisfied with social grants. The CBO group expressed much dissatisfaction with refuse removal, citing pollution as a major concern. The respondents affirmed that although access to electricity was not reliable, especially during thunderstorms, the time it takes for the municipality to attend to faults has improved.
Molemole Municipality: This is a rural municipality with little tax revenue, relying heavily on the small town of Mogwadi, which has a tax base. All the respondents from the Molemole municipality were "singing from the same hymn book" when it comes to the delivery of services. The IDP manager, the municipal official, the mayor, the CBO representatives and the youth representatives were very satisfied with household access to water, whereas only the traditional authority representative was dissatisfied.
Blouberg Municipality: The rural nature of the municipality poses challenges with regard to the provision of services, a problem that can be traced back to the apartheid era. With regard to access to social grants, the youths and the CBOs were generally dissatisfied. This shows that service delivery still has to overcome a number of challenges to fully realize its aims and objectives of serving the entire population within the given jurisdiction. The respondents showed a high satisfaction level with regard to the provision of electricity.
Lepelle-Nkumpi Municipality: The community members of the Lepelle-Nkumpi municipality were not satisfied with refuse removal and access to social grants, claiming that services were only provided to communities in townships. There were also concerns about the provision of sanitation by the municipality. Again, this could be the result of the rural nature of the municipality, particularly the many rural villages located within its jurisdiction.
Polokwane Municipality: Polokwane municipality comprises the town of Polokwane with a number of suburbs, simplexes and complexes, more than a handful of townships, and rural villages. It is the hub of business in Limpopo and has a larger revenue base than any of the other four local municipalities within the Capricorn District Municipality. The provision of services was high on its agenda, and there was a general level of satisfaction with basic services among communities within Polokwane municipality. This was encouraging except with regard to social grants, as the mayor, the traditional authority representative, the CBO group and the youth representatives were dissatisfied with the manner in which the South African Social Security Agency (SASSA) deals with applications, as the aged have to apply over and over again because their application forms get lost. The respondents showed a high satisfaction level with regard to the provision of electricity.
Conclusion
The aim of the study was to investigate the effectiveness of the IDP document in the family of municipalities within the Capricorn District Municipality in the Limpopo Province. The study focused on the use of development indicators as performance areas that affect service delivery and can cause community disillusionment. Planners should not be daunted by the intricacy of development indicators; they should focus only on those provided by government for use in planning. Given the wide range of development indicators, only a handful is presented in the municipalities' IDPs. Even in cases where those limited indicators were provided, the focus was more on social services and infrastructure. The agricultural sector is supposed to be the driver of the economy, particularly in Blouberg, Molemole and Aganang Municipalities, but the conspicuous absence of indicators for this sector is of great concern.
It is quite apparent from the above analysis that the use of development indicators in the IDPs is too limited to guide policy intervention. Their usage should be supported by the appointment of qualified employees with the requisite skill to implement the legislative requirements. Political interference in the appointment of employees who do not possess the requisite skill to do data analysis only compounds the problem which municipalities are having and promotes institutional mediocrity.
Recommendations
Firstly, IDP officials at municipalities, who generally lack the skill to analyse data, should be provided with training in the analysis of development indicators. A critical challenge for municipalities, raised by the CBOs and the youths and common to all municipalities, is how the appointment of staff can be insulated from political interference. A rudimentary, non-statistical grounding in terms such as mean/average, standard deviation, standard errors and graphical displays might assist municipalities; the next step would involve the interpretation and assessment of actual results.
Secondly, municipalities should ensure the effectiveness and efficiency of training and capacity-building programmes by addressing their weaknesses or constraining factors. One of the main reasons for the failure of IDPs is the lack of commitment and project management skills among municipal officials, as was evident in their lack of forward planning with respect to the Municipal Infrastructure Grant (MIG).
Thirdly, for municipalities to be able to address challenges with regard to baseline information, they should either conduct a local study, use administrative records or commission a study. The use of Community Development Workers (CDWs) to profile their wards could go a long way in addressing this problem, since municipalities are at times bound to purchase data from independent statistical agencies when census data are not available or not recent. The establishment of research units or data management units with the requisite skills could also assist municipalities in producing annual projections.
Fourthly, the Department of Cooperative Governance, Human Settlement and Traditional Affairs (CoGHSTA) and/or the South African Local Government Association (SALGA) should collectively develop a compendium of municipal development indicators and then ensure that these indicators appear in every municipality's IDP document in the province.
Lastly, it is critical that the IDP is aligned to the Service Delivery and Budget Implementation Plan (SDBIP), or that the SDBIP finds resonance in the IDP. This will ensure that the monthly revenue and expenditure projections, quarterly service delivery targets and performance indicators of the SDBIP are compared and adapted (if necessary) to ensure alignment with the strategic thrusts, key performance activities, key performance indicators, and actual programmes and projects of the IDP.
This model suggests that, for effective use of indicators and available data, municipalities must follow the procedures and processes enshrined in the model below (see Fig. 1). The first step is the collection of primary and secondary data, depending on the data imperatives and capacity of the municipality. From these data, single indicators are generated. From single indicators, aggregate/composite indicators are then formulated. It is at this stage that a compendium of agreed-upon (baseline) indicators is selected for monitoring, through stakeholder consultation and consensus; baselines are established at this stage. Trends and patterns are then established using time-series analysis. Comparisons are made against established benchmarks from municipalities of equivalent size, or against national or provincial benchmarks. Finally, the assessment and interpretation of the data and results feed into the monitoring and evaluation system. A minimal sketch of these stages follows Fig. 1 below.
Figure 1: Model for data collection and processing for municipalities
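As a rough illustration of the staged model in Fig. 1, the following Python sketch walks through the steps from raw records to a baseline, a trend and a benchmark comparison. All field names, weights and benchmark values are hypothetical placeholders standing in for whatever data a municipality actually collects.

```python
# A minimal sketch of the staged model in Fig. 1; every field name, weight
# and benchmark value here is a hypothetical placeholder.

records = [  # step 1: collected primary/secondary data, one dict per household
    {"no_flush_toilet": True,  "unemployed": True,  "in_shack": True},
    {"no_flush_toilet": False, "unemployed": False, "in_shack": False},
    {"no_flush_toilet": True,  "unemployed": False, "in_shack": False},
]

def single_indicator(records, field):
    """Step 2: a single indicator = share of households with one attribute."""
    return sum(r[field] for r in records) / len(records)

def composite_indicator(records, fields, weights):
    """Step 3: a weighted aggregate of single indicators."""
    return sum(w * single_indicator(records, f) for f, w in zip(fields, weights))

# Step 4: stakeholders agree on a compendium and fix the baseline value.
baseline = composite_indicator(records, ["in_shack", "no_flush_toilet"], [0.5, 0.5])

# Step 5: time-series analysis over successive reviews reveals the trend.
history = [0.45, 0.42, 0.40, baseline]
trend = history[-1] - history[0]

# Step 6: comparison against an equivalent-size or provincial benchmark.
provincial_benchmark = 0.30
print(f"baseline={baseline:.2f}, trend={trend:+.2f}, "
      f"gap to benchmark={baseline - provincial_benchmark:+.2f}")
```

The point of the sketch is the ordering of the stages: single indicators are derived before composites, baselines before targets, and benchmarks only once a time series exists.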
A New Method of Improving the Azimuth in Mountainous Terrain by Skyline Matching
Augmented reality (AR) applications have a serious problem with the accuracy of the azimuth angle provided by mobile devices. The fusion of the digital magnetic compass (DMC), accelerometer and gyroscope gives the translation and rotation of the observer in 3D space. However, the precision is not always appropriate, since the DMC is prone to interference when used near metal objects or electric currents. The silhouette of ridges separates the sky from the terrain and forms the skyline or horizon line in mountainous scenery. This salient feature can be used for orientation. With the camera of the device and a digital elevation model (DEM), the correct azimuth angle can be determined. This study proposes an effective method to adjust the azimuth by identifying the skyline in an image and matching it with the skyline of the DEM. This approach does not require manual interaction. The algorithm has also been validated in a real-world environment.
Introduction
Humans can interpret the environment by processing information that is contained in visible light radiated, reflected, or transmitted by the surrounding objects. Computer vision algorithms try to perceive images coming from sensors. Due to bigger and higher-resolution screens, smart devices have become suitable for navigation, since they are equipped with the necessary sensors, such as a global navigation satellite system (GNSS) receiver, DMC, accelerometer, and gyroscope. Although GNSS and the earth's magnetic field can be used to obtain a rough estimate of the position and orientation of the observer, the precision of mobile sensors is not high enough for AR applications. The compass can be biased by metal and electric instruments nearby despite frequent calibration, so measuring magnetic north is not reliable. Several studies, for example (Blum et al. 2013; Hölzl et al. 2013), have examined sensor reliability in real-world tests and showed that the error of the DMC could be as high as 10°-30°. The errors of the gyroscope and accelerometer also increase with elapsed time, and the inaccuracy of GNSS can amount to several meters, but these are not that critical from the perspective of this research.
Visual localization is a six-dimensional problem of finding the position (longitude, latitude, elevation) and orientation (pan, tilt, roll) from a single geotagged photo. Visual orientation from an image requires that the position of the observer is at least roughly given, that the photo is taken not far from the ground, and that the camera is approximately horizontal. That means the problem can be reduced to a one-dimensional instance in which the pan angle, or in other words the azimuth, needs to be determined. Computer vision can help to improve the precision of the sensors by capturing visual clues whose real-world positions are accurately known. This study proposes a method that can extract the skyline from an image and match it with the panoramic or synthetic skyline extracted from a rendered DEM in real-time. Thus, the orientation of the observer can be improved, which is critical in AR applications.
In this paper, the focus is on mobile mountaineering apps that annotate mountain photos by matching images with 3D terrain models and geographic data. Nowadays, the ideal hiking app should have the following key features: rendered 3D terrain models, highly detailed spatial data, and an AR mode with automatic orientation. Popular AR apps such as PeakVisor and PeakFinder AR have a well-developed mountain identification function. Some can render the digital terrain model and label the names of nearby peaks with additional information; in some cases, uploaded images can be annotated as well. However, the horizontal orientation is usually imprecise; thus, fine-tuning by the user is required for a perfect result. One of the few applications that employs sophisticated artificial intelligence algorithms is PeakLens, but it focuses solely on this function. The forthcoming, fully panoramic 360° version of this app by La Salandra et al. (2019) can be used with Virtual Reality (VR) devices too. Lütjens et al. (2019) give a good example of how VR can offer intuitive 3D terrain visualization of geographical data.
The main contribution of this study is a novel edge-based procedure for automatic skyline extraction and a real-time method that increases the accuracy of the azimuth for a future AR application, whose operation is demonstrated in Fig. 1. An original photo taken by the camera is shown in Fig. 1a; Fig. 1b introduces the DEM with the pertinent geographical data; the fusion of the image and the information of interest can be seen in Fig. 1c. There are three main steps in the present approach:
1. Panoramic skyline determination from the DEM.
2. Skyline extraction from the image.
3. Matching the two skylines.
The rest of the paper is organized as follows: Sect. 2 overviews relevant works in this field; Sect. 3 describes the proposed method; Sect. 4 presents the experimental results. Finally, conclusions and outlook are drawn in Sect. 5.
Related Work
In recent years, there has been considerable interest in the challenging task of visual localization in mountainous terrain. In natural scenarios, vegetation changes rapidly, as do lighting and weather conditions. Since the most stable and informative feature is the contour of the mountains, i.e., the skyline, it can be used for orientation.
Many experts examine the so-called drop-off problem, in which the observer or an Unmanned Aerial Vehicle (UAV) is dropped off into an unfamiliar environment and tries to locate its position. Preliminary work by Stein and Medioni (1995) focuses primarily on pre-computed panoramic skyline matching with manually extracted skylines. Tzeng et al. (2013) investigate a user-aided visual localization method in the desert using a DEM. Once the user marks the skyline in the query image manually, this feature is looked up in a database of panoramic skylines rendered from the DEM. Camera pose and orientation estimation from an image and a DEM were studied by Naval et al. (1997). This non-real-time approach classifies sky and non-sky pixels by a previously trained neural network. Peaks and peak-like protrusions are used as feature points in the matching phase, where pre-calculated synthetic skylines are stored in a database, which is not favourable in a real-time AR app due to the computation and storage needs. Fedorov et al. (2016) propose a framework for an outdoor AR application for mountain peak detection called SnowWatch, and describe its data management approach. Sensor inaccuracy and position alignment are partially discussed in their paper. In contrast to the present study, they take the device orientation as input as well, and they reached a slightly higher peak position error (1.32°) on their manually annotated dataset. SwissPeaks is another AR app that overlays peaks, presented by Karpischek et al. (2009). The main limitation of the app is that the correct azimuth has to be set manually, since visual feature extraction and matching were not implemented. Lie et al. (2005) examine skyline extraction by a dynamic programming algorithm that looks for the shortest path on the edge map, based on the assumption that the shortest path between the image boundaries is the skyline. A similar solution is investigated by Hung et al. (2013), where a support vector machine is trained to classify skyline and non-skyline edge segments. A comparison of four autonomous skyline segmentation techniques that use machine learning is reviewed by Ahmad et al. (2017). The above-mentioned studies focus only on skyline extraction, and their outcomes are hard to compare with the results of this paper.
A non-real-time procedure for visual localization is suggested by Saurer et al. (2016). They introduce an approach for large-scale visual localization by extracting the skyline from query images and using a collection of pre-generated, vector-quantized panoramic skylines determined at regular grid positions. For sky segmentation they use dynamic programming, but their solution requires manual interaction by the operator for challenging pictures, which amounted to 40% of the samples. An early attempt was made by Behringer (1999) to use computer vision methods for improving orientation precision; due to its computational complexity, this solution was tested in non-real-time. Baboud et al. (2011) also present an automatic, but non-real-time, solution for visual orientation with the aim of annotating and augmenting mountain pictures. From the geographical coordinates and camera FOV, their system automatically determines the pose of the camera relative to the terrain model by using contours extracted from the 3D model. They use an edge-based algorithm for skyline detection, and they propose a novel metric for fine-matching based on the feasible topology of silhouette maps. However, as the algorithm is computationally demanding, it is not suitable for real-time AR applications. An unsupervised method for peak identification in geotagged photos is examined by Fedorov et al. (2014). They extract the panoramic skyline by edge detection from the rendered DEM, but they do not address exactly how to obtain the skyline from an image.
It is worth noting that infrared cameras have also been applied to localization in mountainous areas, see e.g., Woo et al. (2007), who designed a procedure for UAV navigation based on peak extraction. Special sensors that are sensitive in the IR range could work better under bad weather or weak light conditions. Unfortunately, a real-world test is not presented in their study.
Visual localization in an urban environment is a related problem. Several studies have been carried out on visually-aided localization and navigation in cities, where the sky region is more homogeneous than other parts of the image. For instance, Ramalingam et al. (2010) employ the skyline and 3D city models for geolocalization in GNSS-challenged urban canyons. Zhu et al. (2012) match the panoramic skyline extracted from a 3D city model with a partial skyline from an image.
Method
The proposed method consists of three main stages. The first stage is to determine the panoramic skyline from the DEM by a geometric transformation suggested by Zhu et al. (2012). After that, the skyline has to be extracted from the image. Finally, the matching is carried out by maximizing the correlation between the two skyline vectors. C++ and OpenSceneGraph were used for panoramic skyline determination. The image processing task and the matching were carried out in MATLAB (Image Processing Toolbox). Finally, georeferencing was done with the help of Google Earth Pro and QGIS.
Panoramic Skyline Determination
The panoramic skyline is a vector obtained from the 3D model of the terrain. In this research, the publicly available DEMs SRTM and ASTER were used, sampled at a spatial resolution between 30 m and 90 m. Depending on the distance of the viewpoint from the target and the properties of the terrain in the corresponding geographical area this can be a bit coarse, but in most cases this resolution was satisfactory. Figure 2a shows a rendered DEM, where the black triangle is the position of the camera, which was determined by GNSS. The 360° panoramic skyline was calculated from this point by a coordinate transformation, as Fig. 2b shows, with the elevation angle φ = arccos(d′/d), where d′ is the distance between C and D′ (the horizontal projection of D) and d is the distance between C and D. A 3D-to-2D transformation was applied, since the height information, or the radial distance, is no longer required. The azimuth angle θ and the elevation angle φ describe any point D in the DEM. Finally, the greatest φ determines the demanded point of the skyline for each θ. Figure 2c illustrates the panoramic skyline projected on a satellite image. The sharp edges in the left corner indicate the border of the DEMs, because the skyline was calculated only up to a reasonable distance. Figure 2d shows the panoramic skyline vector that will be used in the matching stage.
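A compact sketch of this transformation is given below, in Python rather than the C++/OpenSceneGraph used in the paper. The north-up grid orientation, the bin count and the arctan form of the elevation angle are assumptions, not the paper's implementation.

```python
import numpy as np

def panoramic_skyline(dem, cam_rc, cam_z, cell=30.0, n_bins=3600):
    """For every DEM cell D, compute its azimuth theta and elevation angle
    phi, then keep the maximum phi per azimuth bin. phi is computed as
    arctan((z_D - z_C) / |CD'|), which is equivalent to the arccos(d'/d)
    form in the text. Assumes a north-up grid; `cell` is metres per cell,
    and `cam_rc` is the camera's (row, col) cell index."""
    rows, cols = dem.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    east = (xx - cam_rc[1]) * cell
    north = (cam_rc[0] - yy) * cell            # row index grows southward
    horiz = np.hypot(east, north)              # |CD'|, horizontal distance
    horiz[horiz == 0] = np.inf                 # ignore the camera's own cell
    theta = np.degrees(np.arctan2(east, north)) % 360.0   # 0 deg = north
    phi = np.degrees(np.arctan2(dem - cam_z, horiz))      # elevation angle

    skyline = np.full(n_bins, -90.0)
    bins = (theta.ravel() / 360.0 * n_bins).astype(int) % n_bins
    np.maximum.at(skyline, bins, phi.ravel())  # max elevation per azimuth
    return skyline
```

At 3600 bins, each index of the returned vector covers 0.1° of azimuth, with index 0 pointing north as in the matching stage described below.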
Skyline Extraction
The skyline sharply demarcates terrain from the sky on a landscape photo. An automatic edge-based method is presented in this study for skyline extraction. The idea is based on the experience that large and wide connected components in the upper region of the image usually belong to the skyline (Fig. 3).
In the feature extraction step, connected components labeling was used, which is a well-known algorithm for finding blobs in a binary image and assigning a unique label to all pixels of each connected component. Figure 4a shows an input binary image with disjoint edge segments that are coloured in different shades of grey in the output, see Fig. 4b. A flood-fill algorithm was applied to find 8-connected components, i.e., pixels with touching edges or corners; a sketch of such a labeling pass is given below. A detailed review of connected components labeling is found in He et al. (2017). It is not necessary to detect the whole skyline since, in most cases, recognizing only an essential part of it is enough for matching. On the other hand, it is crucial to extract a piece of the real skyline and not a false edge.
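The following is a minimal, pure-Python sketch of such an 8-connected flood-fill labeling pass over a binary edge map; it is illustrative only, not the paper's MATLAB implementation.

```python
from collections import deque

def label_components(edges):
    """8-connected components labeling of a binary edge map by flood fill.
    `edges` is a list of lists of 0/1; returns a same-shaped label image."""
    h, w = len(edges), len(edges[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if edges[sy][sx] and not labels[sy][sx]:
                next_label += 1                  # new blob found at (sy, sx)
                queue = deque([(sy, sx)])
                labels[sy][sx] = next_label
                while queue:                     # flood fill from the seed
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and edges[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = next_label
                                queue.append((ny, nx))
    return labels
```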
In the preprocessing step, morphological operations were carried out to enhance the greyscale image and remove noise. Morphological closing (dilation followed by erosion) eliminates small holes, while morphological opening (erosion followed by dilation) removes small foreground objects that are smaller than the structuring element. A disk-shaped structuring element was used for both closing and opening, but with different radii (5 and 10 pixels). Details on morphology can be found in Szeliski (2011).
The algorithm selects the skyline from the skyline candidates in multiple steps. The candidates C were sorted by the function S(C) = n(C) + 2w(C), where n(C) measures the number of pixels in the candidate and w(C) is the span of the candidate, i.e., the difference between the rightmost and the leftmost pixel coordinates in image space. Based on the experiments, this function, which takes the size of C into account and its span with double weight, proved to be the most efficient; larger and broader skyline candidates are therefore preferred.
The main steps of the approach are listed below and also illustrated in Fig. 3.
1. Preprocessing
(a) The first step is to resize the original image to 640 × 480 pixels and adjust the contrast (Fig. 3a).
(b) Based on the observations, the sky is in the sharpest contrast to the terrain in the blue colour channel of the RGB colour space, thus the blue channel was used as the greyscale picture.
(c) Morphological closing and opening operations are applied for smoothing the outlines, reducing noise, and thereby ignoring useless details, e.g., the edges of tree branches or rocks (Fig. 3b).
(d) Edge detection is carried out by the Canny edge detector, resulting in a bitmap that contains the most distinctive edges of the image (Fig. 3c).
2. Connected components labeling detects the connected pixels on the edge map, determining the skyline candidates. The top three skyline candidates were chosen by the evaluating function S (Fig. 3d).
3. A top-down search selects the first edge pixel of the most probable candidates in each column, because the skyline should be in the upper region of the image (Fig. 3e).
4. Since the previous step might leave a hole in the real skyline, a bridge operation fills the one-pixel gaps.
5. A second connected component analysis eliminates the left-over pieces from the edge map and selects the largest one as the presumed skyline (Fig. 3f).
6. Finally, the skyline was vectorized in order to make the matching more effective (Fig. 3g).
A condensed sketch of this pipeline is given below.
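The sketch condenses steps 1-6 into Python with OpenCV rather than the MATLAB toolbox used in the paper. The Canny thresholds and the use of the bounding-box width as the span w(C) are assumptions, and the bridge and vectorization steps are omitted for brevity.

```python
import cv2
import numpy as np

def extract_skyline(img_bgr):
    """Edge-based skyline extraction, roughly following steps 1-3 and 5."""
    img = cv2.resize(img_bgr, (640, 480))
    blue = img[:, :, 0]                                  # blue channel (BGR)
    close_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))  # r = 5
    open_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (21, 21))   # r = 10
    smooth = cv2.morphologyEx(blue, cv2.MORPH_CLOSE, close_k)
    smooth = cv2.morphologyEx(smooth, cv2.MORPH_OPEN, open_k)
    edges = cv2.Canny(smooth, 50, 150)                   # thresholds assumed

    n, lbl, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    # Rank candidates by S(C) = n(C) + 2*w(C): pixel count plus twice the span.
    scores = [(stats[i, cv2.CC_STAT_AREA] + 2 * stats[i, cv2.CC_STAT_WIDTH], i)
              for i in range(1, n)]
    top3 = [i for _, i in sorted(scores, reverse=True)[:3]]

    # Top-down search: the first candidate pixel per column is the skyline.
    skyline = np.full(640, -1, dtype=int)
    for col in range(640):
        rows = np.where(np.isin(lbl[:, col], top3))[0]
        if rows.size:
            skyline[col] = rows[0]
    return skyline                                       # row index per column
```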
Skyline Matching
The last stage of the proposed method is matching the panoramic skyline and the recognized fragment of the skyline from the image. The point where the two skyline vectors interlock is looked for, i.e., where the image skyline fits into the panoramic skyline, from which the azimuth can be obtained. For a proper comparison, the Horizontal Field of View (HFOV) of the camera and the panoramic skyline need to be synchronized via the sampling rate of the two signals.
For the sake of simplicity, the first index of the panoramic skyline vector corresponds to 0° (north) as a reference point.
In the case of a partially extracted image skyline, the gap also has to be considered in accordance with the HFOV, i.e., the total width of the skyline is estimated. Then, the normalized cross-correlation (a ⋆ b) was used, which is often applied in signal processing tasks as a measure of similarity between a vector a (the panoramic skyline) and shifted (lagged) copies of a vector b (the extracted skyline) as a function of the lag k. After calculating the cross-correlation between the two vectors, the maximum of the cross-correlation function indicates the point K = argmax_k (a ⋆ b)(k) where the signals are best aligned. From K the azimuth angle α can be determined, and the estimated horizontal orientation can be acquired. As mentioned above, the camera is supposed to be approximately horizontal when the picture is taken, though the skyline could be slightly slanted. However, cross-correlation proved to be insensitive to this kind of inaccuracy, thus this approach is appropriate for matching the skylines. An example of matching the two skylines is presented in Fig. 5.
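A brute-force sketch of this matching step is shown below; the circular wrap-around, the per-window normalization and the conversion from the best lag K to a centre azimuth are assumptions about details the text leaves implicit.

```python
import numpy as np

def match_azimuth(pano, frag, hfov_deg=53.5):
    """Slide the extracted fragment along the panoramic skyline and pick the
    lag with the highest normalized cross-correlation; index 0 of `pano`
    corresponds to 0 deg (north). Resampling `frag` to the panorama's
    angular rate is assumed to have been done already. Returns the
    estimated azimuth of the image centre and the best correlation."""
    pano = np.asarray(pano, float)
    frag = np.asarray(frag, float)
    n, m = pano.size, frag.size
    f = (frag - frag.mean()) / (frag.std() + 1e-12)
    best_k, best_c = 0, -np.inf
    for k in range(n):                       # wrap around the 360 deg panorama
        win = pano[(np.arange(m) + k) % n]
        w = (win - win.mean()) / (win.std() + 1e-12)
        c = float(np.dot(w, f)) / m          # normalized cross-correlation
        if c > best_c:
            best_k, best_c = k, c
    deg_per_sample = 360.0 / n
    azimuth = (best_k * deg_per_sample + hfov_deg / 2.0) % 360.0
    return azimuth, best_c
```

For example, with 0.1° sampling (n = 3600) and a 53.5° HFOV, a best lag of K = 912 would place the image centre at roughly 118°.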
Experimental Results
The goal of this study was to develop a procedure that can determine the exact orientation of the observer in a mountainous environment from a geotagged camera picture and a DEM. The main contribution of this paper is an edge-based skyline extraction method, thus the first part of this section demonstrates the results on sample images. The second part is about calculating the azimuth and comparing the results with the ground-truth azimuth angles (α̂) determined by traditional cartographic methods using reference objects in the image.
Fig. 6: Various successful examples of automatic skyline extraction: (a) shows a craggy mountain ridge with clouds and rocks that could mislead an edge detector; in (b) the snowy hills blend into the cloudy sky, which makes skyline detection difficult; (c) is taken from behind a blurry window, where raindrops and occluding tree branches could impede the operation of an algorithm; (d) demonstrates a hard-contrast image with a clear skyline, where clouds might nevertheless induce false skyline edges.
Skyline Extraction
Skyline extraction is a crucial task in this method. The whole pattern is not necessarily needed for correct alignment; in most cases, only a characteristic part of the skyline is enough for orientation. The algorithm was tested on a sample set that contains mountain photos from various locations and seasons, under different weather and light conditions.
The goal was to extract the skyline feature as precisely as possible and classify the outputs. The pictures were taken by the author or downloaded from Flickr under the appropriate Creative Commons license. The collection consists of 150 images with 640 × 480 pixel resolution and 24-bit colour depth. Experiments showed that this resolution provides suitable results considering computation performance as well. Figure 6 illustrates the extraction steps on four different instances. For details on the steps, see Sect. 3.
The outputs were grouped into four classes according to the quality (%) of the result. The evaluation was done manually, because type I and type II errors can also occur and an objective measure is difficult to create.
- Perfect: [95-100%]; the whole skyline is detected, and no interfering fragments are found.
Table 1 shows that the extracted skylines were assigned to the perfect or good classes in more than 89% of the samples. In these cases, the extracted features are suitable for matching in the next phase of the algorithm. It is noteworthy that the rate of poor outcomes is 8% and that of bad outcomes is less than 3%. When the algorithm fails, the difficulties usually arise from occlusion, foggy weather, or low-light conditions. Sometimes, in hard-contrast pictures with plenty of edges, e.g., deceptive clouds or rocks, the largest connected component does not necessarily belong to the skyline, and it is difficult to find the horizon line even with the naked eye.
Field Tests
Unfortunately, it was not possible to compare the results directly with those obtained by the other algorithms discussed earlier, due to the different problems they addressed. Therefore, field tests were conducted by the author to measure the performance of the method. The experiments aimed to determine the orientation using only a geotagged photo and the DEM. A Microsoft Surface 3 tablet was employed, which has an in-built GNSS sensor and an 8 MP camera with a 53.5° HFOV. Various pictures were collected in the mountains with clearly identifiable targets, e.g., churches or transmission towers, which were aligned to the centre of the image with the help of an overlying grid. The EXIF data contain the position, so the ground-truth azimuth (α̂) of the recognizable target with respect to the viewpoint could be manually referenced for the 10 sample images. The low sample size is due to the difficult task of manually orientating test points and the lack of a publicly available image data set with georeferenced objects. Figure 7 and Table 2 present examples and the experimental results of the field tests. Only good or perfect skylines were accepted for this test, and the correlation was almost 95% on average. The mean of the absolute differences between α̂ and α was 1.04°, which is auspicious and could be improved with a higher-resolution DEM. As mentioned in Sect. 1, the error of the DMC can be 10-30°. Measuring the inaccuracy of the compass sensor was beyond the scope of this study; nevertheless, this problem was experienced during the field tests. The benefit of the proposed algorithm is the more accurate orientation obtained from the camera picture and a DEM instead of the unreliable DMC. The purpose of the field tests was to demonstrate the precision that can be achieved with this method. In the tests, the main reasons for the average 1.04° error were the coarse resolution of the DEMs and the vegetation, as can be seen in the examples of Fig. 7a-d. Since cross-correlation proved to be less sensitive to this kind of inaccuracy, it is applied in the matching phase.
Conclusions and Outlook
This study proposed an automatic, computer vision-based method for improving the azimuth measured by the unreliable DMC sensor in mountainous terrain. The aim was to develop an algorithm for an outdoor AR app that overlays useful information about the environment from a Geographic Information System (GIS), e.g., peak name, height, and distance. The main contribution of this work is the robust skyline extraction procedure based on connected components labeling. The skyline was extracted successfully in more than 89% of the sample set containing various mountain pictures. Furthermore, field tests were also carried out to verify the skyline matching. The deviation between the azimuth angle provided by the algorithm and the ground-truth azimuth was examined, and an average accuracy of 1.04° was reached. Performance issues were beyond the scope of this study; nevertheless, the algorithm is time- and storage-efficient, the results are promising, and they show that the proposed method can be applied as an autonomous, highly accurate orientation module in a real-time AR application that is under development. With suitable data and some adaptation, the system could also be used for visual localization in GNSS-challenged urban environments.
Hemolytic anemia caused by non-D minor blood incompatibilities in a newborn
Hyperbilirubinemia is one of the most common causes of neonatal morbidity. Besides ABO and Rh isoimmunization, minor blood incompatibilities have also been identified as causes of severe newborn jaundice. We report a newborn with indirect hyperbilirubinemia caused by minor blood group incompatibilities (P1, M, N, s and Duffy) whose hemolysis was successfully managed with intravenous immunoglobulin therapy. A preterm male baby born at thirty-two gestational weeks became severely icteric on postnatal day 11, with a total bilirubin level of 14.66 mg/dl. Antibody screening tests revealed incompatibility in several minor groups (P1, M, N, s and Duffy (Fya and Fyb)). On postnatal day thirteen, the bilirubin level increased to 20.66 mg/dl although the baby was under intensive phototherapy. After the administration of intravenous immunoglobulin and a red blood cell transfusion, hemoglobin and total bilirubin levels stabilised. Minor blood incompatibilities should be kept in mind in the differential diagnosis of hemolytic anemia of the newborn. They share the same treatment algorithm with the other types of hemolytic anemia. New studies have shown promising results for intravenous immunoglobulin treatment in hemolytic anemia, and it should be seriously considered for the treatment of minor blood incompatibilities.
Introduction
Hyperbilirubinemia is one of the most common causes of neonatal morbidity. Hemolytic disease of the newborn (HDN) is an important cause of hyperbilirubinemia. It is defined as an incompatibility between maternal and infant blood groups, which results in the destruction of fetal red blood cells, leading to high bilirubin levels [1]. ABO and Rh incompatibility are the most common causes of severe indirect hyperbilirubinemia. Besides ABO and Rh isoimmunization, minor blood incompatibilities (MBI) such as anti-Kell, anti-C, anti-E, anti-MNS, Duffy, Kidd, P, Lutheran and Lewis have also been identified as causes of severe newborn jaundice [2]. Hemolysis due to MBI presents with clinical and laboratory findings ranging from mild anemia, reticulocytosis and neonatal hyperbilirubinemia to marked fetal anemia and hydropic changes [3]. Intravenous immunoglobulin (IVIG) has been used as an alternative treatment modality for HDN, as it has been shown to decrease the need for red blood cell transfusion [4]. Here we report a newborn with indirect hyperbilirubinemia caused by minor blood group incompatibility (P1, M, N, s and Duffy) whose hemolysis was successfully managed with IVIG therapy.
Patient and observation
A preterm male baby born at 32 weeks of gestation, weighing 1815 g, was delivered by cesarean section to a 32-year-old mother and was transferred to the neonatal intensive care unit. The mother's blood group was O Rh(+), and the baby's blood group was found to be O Rh(−). Prenatal history was … It has been reported that the most severe hemolytic picture is produced by anti-c antibodies [5]. Several strategies have been developed to prevent D immunization, leading to a substantial decrease of D immunization in many countries [6].
Consequently, alloantibodies other than anti-D emerged as an important cause of severe HDN after prevention of D immunization.
Many minor blood group systems have been identified since 1927, but the diagnosis and treatment of minor blood incompatibilities are usually delayed. During the diagnostic work-up of HDN, most clinicians do not give priority to minor blood groups, because hemolysis caused by MBI has no specific treatment modality beyond that of other incompatibilities [6]. The P minor blood group was first identified by Landsteiner in 1927; this system has since been renamed and is now considered part of the P1PK blood group system [7]. The MNS blood antigens were also defined in 1927. About 30% of the population is negative for antigen M and capable of producing anti-M when exposed to the antigen; however, severe HDN due to anti-M antibodies is rarely reported. De Young-Owens A et al. found no cases of hemolytic disease of the newborn, mild or severe, in data collected from a total of 115 pregnancies [8].
Antigen Fya belongs to the Duffy system. It may promote the development of severe hemolysis in newborns because of its strong capacity to elicit antibodies. In contrast, antigen Fyb, which is also classified in the Duffy system, is rarely reported to cause hemolysis [9]. Anti-S and anti-s are suspected to cause hemolysis during the postnatal period, but there are not enough reports supporting this claim. We evaluated the hemolytic process of our patient on postnatal day 11. After the differential diagnosis work-up, incompatibilities in minor blood groups were identified (P1, M, N, s, Duffy (Fya and Fyb)). We argued that all the incompatibilities identified in the minor blood group investigation might have promoted the hemolytic anemia in our patient.
However, most reports in the literature on hemolytic anemia due to MBI refer to a single antibody-antigen incompatibility. Minor blood group incompatibilities share the same treatment modality as other blood group incompatibilities. Jaundice caused by hemolysis is treated with phototherapy, and hemolytic anemia with hydrops may require transfusion with subgroup-matched RBCs. A few reports are available on the use of IVIG in hemolytic anemia caused by minor blood incompatibilities [9]. IVIG treatment decreases the need for RBC transfusion in hemolytic anemia, but the exact mechanism has not yet been clearly identified [10]. We administered 1 g/kg IVIG and an RBC transfusion. After IVIG treatment, hemoglobin levels gradually started to increase, accompanied by a progressive reduction in bilirubin levels.
No exchange transfusion was required.
Conclusion
MBI should be kept in mind in the differential diagnosis of hemolytic anemia of the newborn. They have the same treatment algorithm as other types of hemolytic anemia, which includes phototherapy, IVIG and transfusion. Recent studies have reported promising results with IVIG treatment in hemolytic anemia, and it should be seriously considered for the treatment of MBI.
|
2019-08-23T02:03:44.577Z
|
2019-07-29T00:00:00.000
|
{
"year": 2019,
"sha1": "b8e4de3455bc78397e1d7d57dae56d00440068cf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.11604/pamj.2019.33.262.19324",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4135ebabeffa5f2635e829fb544b02510b8175fc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16185452
|
pes2o/s2orc
|
v3-fos-license
|
Stathmin Protein Level, a Potential Predictive Marker for Taxane Treatment Response in Endometrial Cancer
Stathmin is a prognostic marker in many cancers, including endometrial cancer. Preclinical studies, predominantly in breast cancer, have suggested that stathmin may additionally be a predictive marker for response to paclitaxel. We first evaluated the response to paclitaxel in endometrial cancer cell lines before and after stathmin knock-down. Subsequently, we investigated the clinical response to paclitaxel containing chemotherapy in metastatic endometrial cancer in relation to the stathmin protein level in tumors. Stathmin level was also determined in metastatic lesions, analyzing changes in biomarker status on disease progression. Knock-down of stathmin improved sensitivity to paclitaxel in endometrial carcinoma cell lines with both naturally higher and lower sensitivity to paclitaxel. In clinical samples, a high stathmin level was associated with poor response to paclitaxel containing chemotherapy and with reduced disease specific survival only in patients treated with such a combination. Stathmin level increased significantly from primary to metastatic lesions. This study suggests, supported by both preclinical and clinical data, that stathmin could be a predictive biomarker for response to paclitaxel treatment in endometrial cancer. Re-assessment of stathmin level in metastatic lesions prior to treatment start may be relevant, and validation in a randomized clinical trial will be important.
Introduction
Stathmin1 (STMN1; hereafter 'stathmin') is an 18 kD cytosolic phosphoprotein known to play an important role in the cell cycle. Stathmin is expressed in all tissues. It is a critical regulator of microtubule dynamics through its microtubule destabilizing properties, including both prevention of polymerization and active promotion of microtubule depolymerization [1-4]. Phosphorylation of stathmin on four serine residues at the beginning of the mitotic phase attenuates its destabilizing activities, allowing cells to form a mitotic spindle; dephosphorylation then takes place prior to exit from mitosis [1,4]. Stathmin is also involved in intracellular transport, cell motility, polarity, maintenance of cell shape and regulation of apoptosis [1].
A biomarker is defined as a 'characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes or pharmacologic responses to a specified therapeutic intervention' [5]. Biomarkers can be divided into various types, such as prognostic biomarkers, linked to the prognosis of a patient independent of treatment, and predictive biomarkers, which identify the patient subpopulations most likely to (not) respond to a treatment [5]. Thus, reliable predictive biomarkers are of paramount importance for improved and individualized treatment.
Presently, few predictive markers are known in human cancers, and even fewer are clinically applied. In endometrial cancer, no clinically validated predictive markers are yet available [17]. Both targeted therapies and conventional chemotherapeutic agents are effective only in a subset of patients [18,19]; there is therefore an urgent need to identify clinically useful predictive markers. Examples incorporated in the clinic include KRAS mutational status indicating response to cetuximab and panitumumab in colorectal cancer [18,20,21], ALK rearrangement in non-small cell lung cancer predicting response to crizotinib [18,20,22], and HER2/Neu amplification or overexpression in breast cancer for eligibility for trastuzumab treatment [18,20,23].
Taxanes are a group of chemotherapeutic agents frequently used in the treatment of endometrial carcinoma. Preclinical studies in breast cancer, prostate cancer and retinoblastoma [24-28] indicate that stathmin may be a predictive marker for response to taxanes in these cancer types. High levels of stathmin decreased the sensitivity of breast cancer cell lines to paclitaxel and vincristine [24], and knock-down of stathmin by siRNA increased the sensitivity to paclitaxel in both breast [25] and prostate cells [27]. This impact of stathmin protein level on treatment response was limited to anti-microtubule agents. Unfortunately, none of these studies have taken this knowledge to the next level by integrating the results with clinical data. In endometrial cancer, to our knowledge, no studies, preclinical or clinical, have explored an association between stathmin level and response to paclitaxel containing chemotherapy. In this report, we demonstrate in endometrial carcinoma cell lines that reduction of stathmin levels by knock-down results in improved response to paclitaxel. We also show, for the first time to the best of our knowledge, that stathmin protein level is associated with response to paclitaxel containing therapy in clinical samples from patients with metastatic endometrial carcinoma.
Cell lines
Two endometrial cancer cell lines were selected due to the difference in their sensitivity profiles to paclitaxel: Ishikawa (Sigma, sensitive) and Hec1B (American Type Culture Collection, less sensitive). The Cancer Cell Line Encyclopedia (CCLE) data confirm the difference in sensitivity [29]. The lines were obtained in 2009, and authenticity was verified by short tandem repeat (STR) profiling in 2012 [30,31]. The cell lines were maintained under the conditions recommended by the suppliers.
Drugs
Paclitaxel and carboplatin were purchased from Sigma.
Cell line experiments
The cell lines were treated with paclitaxel in increasing concentrations (range 1-500 nM) for 24 h. As taxanes are often clinically combined with platinum derivatives in endometrial cancer, we also treated cells with a combination of paclitaxel (increasing concentrations, range 1-500 nM) and carboplatin (fixed concentration, 200 mM) for 24 h to look for synergistic treatment effects. Cells were subsequently either fixed in 2% formaldehyde for microscopic evaluation of apoptosis, used in a proliferation assay (MTS), or processed for immunoblotting. Experiments were performed at least in triplicate.
For assessment of apoptosis, at least 150 cells were counted in three different areas of 96-well plates. Proliferation assays were performed in triplicate in 96-well plates, using the CellTiter 96 AQueous One Solution Cell Proliferation Assay (Promega) following the manufacturer's instructions. The absorbance was recorded at 490 nm using an ELISA plate reader (TECAN Magellan Sunrise).
Patient series
Patients diagnosed with and treated for endometrial cancer at Haukeland University Hospital, Bergen, Norway, have, after signing informed consent, been prospectively and consecutively included in a database (population based setting) from 2001 onwards, preventing selection bias and ensuring optimal data collection for all patients, as previously reported [14]. Patients were, however, treated following routine guidelines, and the clinical samples investigated therefore consist of prospectively collected archival tissue. Clinicopathological data collected include, among others, FIGO 2009 stage, histological subtype, grade, primary and adjuvant treatment, and follow-up including treatment for metastatic disease. For the purpose of this study, patients who received paclitaxel containing chemotherapy (as a routine in our hospital, a combination of paclitaxel and carboplatin) after surgical treatment for either residual disease or metastasis before April 2011 were studied for treatment response according to RECIST criteria [32], with the last follow-up entry in July 2013. Of a total of 607 patients in the database, 121 had systemic, i.e. recurrent or residual, disease; 57 of these had response data according to RECIST criteria available, 33 of whom were treated with paclitaxel containing chemotherapy. We defined good response as complete or partial response (RECIST criteria), and poor response as stable disease or disease progression (RECIST criteria). In addition, we looked at disease specific survival in relation to stathmin level for all patients with endometrial cancer and specifically for patients treated for metastatic disease. The mean follow-up in our cohort was 34 months (range 0-105 months).
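For illustration, the dichotomized endpoint can be written as a simple mapping; the RECIST category codes (CR/PR/SD/PD) are standard, the good/poor grouping follows the definition above, and the function name is hypothetical.

```python
# Good response = complete/partial response; poor = stable/progressive disease.
RESPONSE_GROUP = {"CR": "good", "PR": "good", "SD": "poor", "PD": "poor"}

def response_group(recist_code: str) -> str:
    return RESPONSE_GROUP[recist_code.strip().upper()]

assert response_group("pr") == "good"
assert response_group("SD") == "poor"
```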
Tissue microarray (TMA) construction
TMAs were generated as previously described and validated in several studies [33]. The area of highest tumor aggressiveness was identified on all hematoxylin/eosin slides to ensure tumor representativity, and three (primary tumor) or one (metastasis) tissue cylinders (0.6 mm diameter each) were mounted in a recipient block using a custom-made precision instrument (Beecher Instruments, Silver Spring, MD, USA). Formalin fixed paraffin embedded (FFPE) primary tumor tissue was available in TMAs from 603 patients for evaluation of stathmin level. From 77 patients with metastases, additional metastatic tissue was available in TMAs for investigation of stathmin level compared to the corresponding primary tumor. Too few cases (n = 3) had additional evaluable metastatic lesions, obtained prior to the paclitaxel containing chemotherapy, with stathmin level evaluable, response data available according to the RECIST criteria and a similar prior treatment profile, to allow meaningful statistical analyses of response in relation to biomarker status in metastatic lesions.
Staining evaluation
Blinded for patient characteristics and outcome, slides were scored by two authors (HMJW and JT) using standard light microscopy as previously described [34,35]. The kappa value, as a measure of reproducibility, was 0.73 in a separate set of 68 slides scored individually by HMJW and JT. High protein level was defined as the upper quartile, score 9, in line with previous publications [15]. In case of multiple metastases with variation in stathmin level, the lesion with highest level defined the final score for metastatic lesions.
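As a rough sketch of this cutoff definition (the authors used SPSS for their analyses, and the score values below are purely illustrative; in the paper the upper quartile corresponded to staining index 9):

```python
import pandas as pd

# Hypothetical staining indices for a handful of tumors.
staining_index = pd.Series([0, 2, 3, 4, 6, 6, 9, 9])

# "High" stathmin level = upper quartile of the score distribution.
cutoff = staining_index.quantile(0.75)
stathmin_high = staining_index >= cutoff
```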
Statistics
Statistical analyses were performed using PASW 18 Statistics (Predictive Analysis SoftWare, SPSS Inc., Chicago, USA). Categorical variables were evaluated using the Pearson χ²-test or Fisher's exact test where applicable. Two-sided P-values of <0.05 were considered significant. Univariate analyses of time from primary treatment to death due to endometrial carcinoma (disease specific survival) were carried out using the Kaplan-Meier method. The Cox proportional hazards method was used for multivariate survival analysis (the proportionality assumption was checked by log-minus-log plot).
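The same workflow can be sketched in Python for readers without SPSS; the data file, column names and grouping variable below are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter

df = pd.read_csv("endometrial_cohort.csv")  # hypothetical file and columns

# Categorical association, e.g., stathmin level (normal/high) vs. response.
table = pd.crosstab(df["stathmin_high"], df["good_response"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")

# Univariate disease-specific survival by stathmin level (Kaplan-Meier).
km = KaplanMeierFitter()
ax = None
for level, grp in df.groupby("stathmin_high"):
    km.fit(grp["months_to_event"], event_observed=grp["dss_event"],
           label=f"stathmin_high={level}")
    ax = km.plot_survival_function(ax=ax)
```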
Ethics statement
All patients have signed informed consent prior to inclusion in the study. The study has been approved by the Norwegian Data Inspectorate (961478-2), the Norwegian Social Science Data Services (15501) and the local Institutional Review Board (Regional Committees for Medical and Health Research Ethics; REKIII nr 052.01).
Response to paclitaxel in endometrial cancer cell lines
Response to paclitaxel varies between endometrial cancer cell lines [29,36,37]. We show that Ishikawa cells are sensitive to paclitaxel treatment, with a high percentage of apoptotic cells after 24 h of treatment (microscopic counting and proliferation assay), as opposed to Hec1B cells (Fig. 1a and 1b). Combination treatment with carboplatin and paclitaxel did not result in a synergistic treatment effect (not shown).
Stathmin knock-down by viral transfection
Fluorescence microscopy showed a transfection rate of 70-80% at the start of the experiments, with markedly reduced stathmin levels in the stathmin knock-down cell lines compared to the control knock-down and wild-type cell lines (Fig. 2a and 3a).
In both stathmin knock-down cell lines (Ishikawa and Hec1B), improved response to paclitaxel treatment was observed (Fig. 2b and 3b). Hec1B cells showed a statistically significant increase in apoptotic rate after stathmin knock-down. Possibly due to the intrinsically higher sensitivity of Ishikawa cells to paclitaxel, knock-down did not result in a similarly large increase in cell death. However, we noted a clearly increased fragmentation rate in the treated stathmin knock-down Ishikawa cells compared with the control cells, which may be regarded as a sign of further activation of the apoptotic pathway (insert, Fig. 2b). Using immunoblotting, we further validated this enhanced apoptotic pathway activation by demonstrating PARP cleavage at a lower paclitaxel concentration in Ishikawa cells after stathmin knock-down compared to controls (Fig. 2c). Microscopic pictures of Ishikawa and Hec1B wild-type and stathmin knock-down cells after 24 h of paclitaxel treatment with 0 nM (control) and 500 nM are shown in Figures 2d and 3c. We tested the effect of stathmin knock-down on the sensitivity to carboplatin monotherapy and to paclitaxel-carboplatin combination treatment without observing increased sensitivity or synergistic effects (not shown).
High stathmin level predicts poor response to paclitaxel in clinical samples
We then investigated patient tumor samples to see whether a similar association between stathmin level and treatment response could be observed. Stathmin staining was predominantly cytoplasmic, as reported in the literature [15,38]. Representative pictures from immunohistochemistry with weak (normal) and strong (high) stathmin staining are shown in Figure 4a. Excluding metastatic patients receiving anti-hormonal treatment only, patients with metastatic disease receiving paclitaxel containing chemotherapy had similar clinicopathological characteristics to patients treated differently. Including the patients treated with anti-hormonal drugs only, predominantly frail elderly patients, clinicopathological characteristics still remained similar, except that this subgroup was significantly older (Table 1). Patients with normal stathmin level clearly responded much better (RECIST criteria) to treatment than patients with high stathmin level (Fig. 4b).
Stathmin level did not predict response to other chemotherapy regimens or treatment modalities.
Approaching from a different angle, patients with high stathmin level in general showed reduced disease specific survival, in line with stathmin's role as a prognostic biomarker (Fig. 5a). However, within the subgroup of patients with metastatic disease treated with paclitaxel containing chemotherapy, disease specific survival was significantly poorer in patients with high compared to normal stathmin (p = 0.03, Fig. 5b). In patients who received other treatments for metastatic disease, prognosis was unrelated to stathmin level (p = 0.76, Fig. 5c). To rule out confounding by known important clinicopathological prognostic variables, we performed a multivariate survival analysis for both subgroups to examine the effect of stathmin level on survival after treatment for metastatic disease, corrected for FIGO stage and histological subtype. Stathmin protein level remained an independent predictor of disease specific survival in the subgroup of patients who received paclitaxel containing chemotherapy (n = 38, HR 2.3, CI 1.1-5.2), adjusted for FIGO stage and histological subtype, but not in the subgroup receiving other therapies (n = 43, HR 1.1, CI 0.4-2.7).
Discordant biomarker status in primary and metastatic lesions
The percentage of patients with high stathmin level was significantly higher in metastases compared to primary lesions with pathologic (high) levels noted in 18% of the latter (n = 84 of 477 primary lesions with stathmin staining available) compared to 37% in metastatic samples (n = 29 of 79) (Fig. 4c).
In the paired primary-metastasis samples, 35% of metastatic lesions showed high stathmin level. A discordance of 26% between metastatic lesions and their primaries was observed. In 16% there was a change to high level in metastases and in 10% to normal level.
Discussion
Stathmin protein level has been shown to be a prognostic marker of aggressive disease in many cancers, including endometrial cancer, where a high stathmin level in the primary tumor identifies patients at high risk for recurrent disease and lymph node metastases [6,9,10,12,13,15,16]. The identification and development of predictive biomarkers are of paramount importance to increase treatment efficacy and reduce unnecessary side effects, not only for targeted therapies but also for chemotherapeutic regimens, as in both cases only a subpopulation will respond well, especially in the metastatic setting, and the tools currently available to identify these patients are very limited [39,40]. None of the important clinicopathological factors, such as FIGO stage or histological subtype, are currently known to help distinguish potential responders from non-responders to paclitaxel containing chemotherapy in the metastatic setting. Studying large population based series with high-quality clinical annotation, such as ours, combined with preclinical experiments, is a useful and time-efficient way to explore potential predictive biomarkers, which can subsequently be tested in clinical trials.
In line with previous in vitro results in breast cancer, we show in endometrial cancer cell lines that, independent of the original stathmin level, sensitivity to paclitaxel increased and apoptosis was thereby expedited after successful stathmin knock-down. This was shown by direct microscopic counting and, in Ishikawa cells, also substantiated by immunoblotting focusing on PARP cleavage. PARP cleavage is an established indicator of apoptosis, distinguishing it from other mechanisms of cell death, such as necrosis.
The increased apoptotic body formation noted by microscopy in the stathmin knock-down cell lines fits with increased apoptosis [41,42]. In our prospectively collected, retrospectively analyzed patient series, we also demonstrated a striking difference in response to paclitaxel containing chemotherapy between patients with normal and those with high stathmin level, also when correcting for the most important clinicopathological prognostic variables. Even in a clinical endometrial cancer series as large as ours, collected over more than 10 years with adequate follow-up and RECIST [32] compliant documentation of response, ultimately only a small number of patients had been treated with the treatment of interest. This underlines the difficulty of collecting series with adequate patient numbers for specific marker studies, but also the importance of exploiting such large, prospectively collected, population based series for predictive biomarkers suggested in preclinical studies, and of exploring potential clinical validity prior to the clinical trial stage. The statistically significant correlation between high stathmin level and poor paclitaxel response according to RECIST criteria in clinical samples, together with the fact that stathmin level has independent prognostic value in survival analyses of patients receiving paclitaxel for metastatic disease but not in patients who do not, supports the likelihood that stathmin level may act not only as a prognostic marker but also as a predictive marker for response to paclitaxel treatment in endometrial carcinomas.
Unlike previous studies looking at stathmin as a potential predictive marker, which were predominantly in vitro breast cancer studies, in this study we were able to test and confirm the association in clinical samples from patients treated with the drug of interest, using data from a well-annotated, prospectively collected patient series. Both the preclinical and clinical testing support that stathmin level influences sensitivity to paclitaxel. We explored whether this effect could be generalized to other chemotherapeutic agents, such as carboplatin, which is also frequently used in endometrial cancer, and excluded this possibility.
Reporting recommendations for tumor marker prognostic studies (REMARK) guidelines have been developed with the aim of improving the methodological quality and reporting transparency of such studies [43]. The current study has been performed in accordance with these guidelines to improve the quality and general validity of its results.
Taxanes, originally isolated from the bark of the yew tree, belong to the family of anti-microtubule chemotherapeutic agents, with paclitaxel as their prototype. Simply put, taxanes bind to β-tubulin, causing microtubules to resist depolymerization, inhibiting cell cycle progression and promoting mitotic arrest and cell death [44]. Carboplatin, in contrast, is one of the platinum-based agents, interacting with DNA and interfering with DNA repair. As stathmin is a critical regulator of microtubule dynamics, and taking into consideration the mode of action of these drugs, the positive effect of stathmin knock-down on paclitaxel response, and the absence of such an effect on carboplatin sensitivity, is also biologically plausible.
We show a higher proportion of high stathmin level in metastatic (37%) compared with primary lesions (18%). Discrepancy in stathmin status was noted in a quarter of paired samples, paralleling findings in e.g. breast cancer, where discrepancies between primary and metastatic lesions are reported in 14-55% and 0-40% for hormone receptors and HER2, respectively [45-47]. In endometrial cancer, few studies discuss differences in marker status between primary and metastatic lesions [38,48,49]. Intratumoral heterogeneity is well described in cancer and a potential confounding factor in many studies, irrespective of using full-tissue slides or TMA. Inter-observer variation is unlikely to be the sole explanation for these described differences. Also, a recent study assessing mutation status, a method considered less subjective than immunohistochemical scoring, in multiple metastatic lesions from one patient with renal cell carcinoma supports that detected biomarker changes from primary to metastatic lesions are real and may be related to and relevant for tumor progression [39]. The changes in biomarker status from primary to metastatic lesions support the need for repeated biopsies of metastatic lesions, both to better relate therapy response to potential predictive biomarkers and to offer only therapies with a likely positive effect when predictive biomarkers are available [47,50,51]. For breast cancer, the American Society of Clinical Oncology (ASCO) advised already in 2007 that hormone receptor testing should be considered to be repeated in metastatic disease if the results were to influence patient management [52].
Figure 5. Disease specific survival after primary treatment for endometrial carcinoma patients (Kaplan-Meier curves) related to stathmin protein expression by IHC in primary tumor. A: All patients with complete data (n = 476). B: All patients with metastatic disease who received paclitaxel treatment (n = 38). C: All patients with metastatic disease who received different treatments (n = 43). Number of disease specific events between brackets. doi:10.1371/journal.pone.0090141.g005
Conclusion
These results, including preclinical data and, for the first time, data from clinical samples, support that stathmin may be a predictive biomarker for the response to paclitaxel treatment in endometrial cancer. However, confirmatory studies, ideally from randomized clinical trials, are needed. The biomarker discordance on tumor progression is in line with other studies on tumor biomarker heterogeneity and supports the need for repeated biopsy in metastatic disease.
|
2017-04-14T01:34:19.721Z
|
2014-02-25T00:00:00.000
|
{
"year": 2014,
"sha1": "7ff12a312e9e9ce889516e7a8876dd87a0ee5924",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0090141&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ff12a312e9e9ce889516e7a8876dd87a0ee5924",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
35904286
|
pes2o/s2orc
|
v3-fos-license
|
Exploring New Horizons
Streptomyces bacteria employ a newly-discovered cell type, the "explorer" cell, to rapidly colonize new areas in the face of competition.
Historically, bacteria have been thought of as simple cells whose only aim is to replicate. However, research over the past two decades has revealed that many types of bacteria are able to develop into communities that contain several types of cells, with different cell types performing particular roles (Kuchina et al., 2011). These communities are of interest in scientific fields as diverse as petroleum engineering and bacterial pathogenesis.
Streptomyces were perhaps the first bacteria to be recognized as having a multicellular lifestyle (Waksman and Henrici, 1943). In fact, this lifestyle led to them being classified as fungi when they were first isolated from soil at the beginning of the last century (Hopwood, 2007). This case of mistaken identity stemmed from the fuzzy texture of Streptomyces colonies (see Figure 1A), which resembles many of the fungi we see growing on bread and other natural surfaces (Waksman, 1954).
The first stage in the life of a Streptomyces colony is the growth of so-called vegetative cells, which form networks of branched filaments that penetrate the surfaces of food sources. The fuzzy appearance of Streptomyces colonies is the result of the vegetative cells producing another type of cell called aerial hyphae that grow upwards into the air (McCormick and Flärdh, 2012; Flärdh and Buttner, 2009). Subsequently, cells of a third type (spores) form long chains on the ends of these aerial hyphae. These spores are resistant to drying out and likely allow Streptomyces to passively spread to new environments through the action of water or air movement (McCormick and Flärdh, 2012). Now, in eLife, Marie Elliot at McMaster University and colleagues - including Stephanie Jones as first author - report a new form of growth in Streptomyces termed "exploratory growth" (Jones et al., 2016).
In the initial experiments, Jones et al. - who are based at McMaster University, the University of Toronto and Dartmouth College - grew Streptomyces venezuelae bacteria alone, or close to a yeast called Saccharomyces cerevisiae, on solid agar for two weeks. During this time, the bacteria grown alone formed a normal sized colony typical of Streptomyces. However, in the presence of the yeast, the S. venezuelae colonies expanded rapidly and colonized the entire surface of the growth dish, engulfing the nearby yeast colony. In subsequent experiments, the cells produced during exploratory growth (dubbed "explorer" cells) showed the ability to spread over abiotic surfaces including rocks (Figure 1B) and polystyrene barriers. Scanning electron microscopy revealed that, unlike vegetative cells, these explorer cells did not form branches and more closely resembled simple aerial hyphae.
Previous studies have identified many genes that regulate the development of Streptomyces colonies, including the bld genes, which are involved in the formation of aerial hyphae, and the whi genes, which are required to make spores (McCormick and Flärdh, 2012). Jones et al. found that neither of these sets of genes is required for exploratory growth of S. venezuelae in the presence of the yeast. This suggests that the explorer cell type is distinct from the previously known developmental pathways in Streptomyces. Furthermore, Jones et al. found that multiple Streptomyces species were capable of exploratory growth and that various fungal microbes had the ability to trigger this behavior.
Further experiments using libraries of mutant yeast indicated that glucose and pH may be involved in triggering the formation of explorer cells. Jones et al. demonstrated that Streptomyces displays exploratory growth in response to shortages of glucose (caused by the presence of the yeast) and to an increased pH in the surrounding environment. The bacteria trigger this pH change themselves by releasing a volatile organic compound called trimethylamine, which is able to stimulate exploratory growth in Streptomyces over considerable distances. Trimethylamine also inhibits the growth of other bacteria that might compete with S. venezuelae in natural environments. The work of Jones et al. opens up the possibility that there may be additional types of specialized cells within Streptomyces colonies. Streptomyces are important for medicine because they produce many different chemical compounds, including antibiotics and immunosuppressant drugs, and one might imagine that specific groups of cells within a colony are responsible for making these compounds (Figure 1C). Perhaps other cell types might be dedicated to directing the activities of different cells within the colony (as happens in other bacteria with multicellular lifestyles; Lopez et al., 2009; Baker, 1994), perhaps by producing trimethylamine or other volatile organic compounds.
For decades, researchers have described Streptomyces colonies in terms of vegetative cells, aerial hyphae and spores. The explorer cells identified by Jones et al. offer Streptomyces an alternative means of escape from their normal life cycle and local environment in the face of competition. This makes intuitive sense, given that Streptomyces lack the ability to move ("motility") in the traditional sense (for example, by swimming, gliding or twitching). Taken together, the work of Jones et al. demonstrates a surprisingly dynamic strategy in which a 'non-motile' bacterium can use cues from other microbes, long-range signaling, and multicellularity to make a graceful exit when times get tough.
|
2019-09-17T03:02:14.855Z
|
2019-09-01T00:00:00.000
|
{
"year": 2019,
"sha1": "d1f54e6e3fb6d84a3636ba3628c72ea8742a02fb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.23624",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "24eaf5085268fe64ee4557375291d91fa5f7a275",
"s2fieldsofstudy": [
"Art",
"History"
],
"extfieldsofstudy": [
"History"
]
}
|
138290711
|
pes2o/s2orc
|
v3-fos-license
|
Application of Traditional and Nanostructure Materials for Medical Electron Beams Collimation: Numerical Simulation
Nowadays, the commercial application of electron accelerators is growing in industry, in research, and in medical diagnosis and treatment. In this regard, modifying the electron beam profile in accordance with specific purposes is a relevant task. In this paper, a model of the TPU microtron extracted electron beam, developed in the program "Computer Laboratory (PCLab)", is described. The influence of the internal beam divergence on the electron beam profile and depth dose distribution in air is considered. The possibility of using nanostructured materials for electron beam formation was analyzed. Simulation data for the electron beam shape collimated by different materials (lead, corundum-zirconia nanoceramic, gypsum) are shown, and the influence of the collimator material on the electron beam profile and shape is analyzed.
Introduction
At the present time, particle accelerators and X-ray sources have a wide range of applications [1-4]. One of the most promising radiation types is electron beams of different energies, which are widely used in clinical areas such as medical diagnosis, external-beam radiotherapy and intraoperative radiotherapy [5]. Clinical application of electron beams requires an exact representation of the beam profile and shape and the ability to manage these parameters in accordance with specific purposes. Numerical simulation allows the electron beam parameters to be estimated, making it faster and easier to obtain beam characteristics than practical measurements. In this regard, the development of an electron beam model is a topical issue.
As is known, nanostructured materials can be used in cancer treatment. Nanotechnology research in this area ranges from diagnostics and therapeutics using nanoparticles to the production of dendrimers for boron neutron capture therapy [6,7]. One promising direction is to use nanostructured materials for electron beam formation.
In this research, the suitability of industrial nanoceramics for these purposes was analyzed. To date, submicrocrystalline nanoceramics based on zirconia and alumina compositions have been intensively studied because of their service properties [8,9]. Here, a corundum-zirconia nanoceramic was analyzed as a collimation material.
Nowadays, many new collimation materials for electron beams are being investigated. One promising option is to use 3D-printer materials for the production of accelerator beam collimators. 3D-printed structures are useful in different medical and industrial areas because of the possibility of tailoring the material properties to the specific task [10-12]. In this paper, a gypsum composite was chosen from the wide range of 3D-printing materials, since it is a widely available and relatively cheap raw material. Within the framework of this investigation, a theoretical analysis of the Tomsk Polytechnic University (TPU) microtron extracted electron beam was carried out. Models of the accelerator electron beam shape modulation were developed in the program "Computer Laboratory (PCLab)". The following factors affecting the electron beam profile are analyzed: the internal beam divergence and beam collimation by different materials (lead, corundum-zirconia nanoceramic, gypsum).
Collimation materials
We analyzed a corundum-zirconia nanoceramic for electron beam formation. Lead was used as the classical material for electron beam collimator production, and gypsum was selected as a cheaper alternative.
Emitting source
The following parameters of the TPU microtron extracted electron beam were used for the emitting source: electron energy 6.1 MeV; beam size at the output ≈ 2.0 mm²; beam divergence 0.1 rad.
Simulation program
The program "Computer laboratory (PCLab)" version 9.5 was used for the TPU microtron extracted beam model creation. Simulation is carried out by applying the Monte Carlo method. The software package allows calculating the propagation process of electrons, positrons, protons and photons in matter with specified characteristics [13].
Experiment geometry.
A normal-plane disc (diameter 2.0 mm) monoenergetic electron source with an energy of 6.1 MeV, corresponding to the actual TPU microtron beam, was used in the simulation. The source was located in front of the beryllium output window (thickness 50 μm; diameter 40 mm). The beam shape analysis was carried out in air.
In the simulations with a collimated electron beam, the output window was overlapped by plates of different materials (collimator channel lengths of 5 mm and 10 mm) with a tapered hole (taper diameter increasing from 0.5 mm to 1.5 mm). The collimator materials were lead, corundum-zirconia nanoceramic and gypsum.
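As a simple geometry check of these dimensions (not part of the PCLab model itself), the taper half-angle implied by the 0.5 to 1.5 mm diameter increase can be computed as follows.

```python
# Taper half-angle of the collimator channel from the stated dimensions.
import math

d_in, d_out = 0.5, 1.5        # channel diameters, mm
for length in (5.0, 10.0):    # channel lengths, mm
    half_angle = math.atan(((d_out - d_in) / 2) / length)
    print(f"channel {length:4.1f} mm: taper half-angle ≈ "
          f"{half_angle:.3f} rad ({math.degrees(half_angle):.1f} deg)")
```

For the 5 mm channel this gives roughly 0.1 rad, i.e., of the same order as the beam divergence quoted for the source.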
Figure 1 (a, b) illustrates the calculated particle paths in the geometries with non-collimated and collimated electron beams, respectively, including photon production.
Results and discussions
Figure 2 shows the simulation data for the TPU microtron extracted electron beam profile and shape at a distance of 2 cm from the output window, both ignoring and taking into account the internal beam divergence. The dose results were averaged and normalized to the maximum simulated dose.
Figure 2. The TPU microtron extracted electron beam profile and shape at a distance of 2 cm from the output window: a, b - ignoring the internal beam divergence; c, d - taking into account the internal beam divergence.
Figure 2 illustrates that, with increasing internal beam divergence of the electrons in the accelerator, the radiation dissipates faster than a monodirectional beam. As a result, a dramatic drop in dose and a broadening of the beam can be observed.
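A back-of-the-envelope estimate illustrates this broadening; it assumes the 0.1 rad divergence quoted above acts as a half-angle and ignores scattering in air, so it only indicates the order of magnitude.

```python
# Geometric beam broadening from divergence alone (no scattering).
# Assumes the quoted 0.1 rad divergence is a half-angle and an initial
# radius of ~1 mm (2 mm spot); both are simplifying assumptions.
import math

r0 = 1.0e-3   # initial beam radius, m
theta = 0.1   # beam divergence, rad

for z_cm in (2, 10, 50):
    r = r0 + (z_cm / 100.0) * math.tan(theta)
    print(f"z = {z_cm:3d} cm: beam radius ≈ {100.0 * r:.2f} cm")
```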
Figure 3 presents the simulation data for the TPU microtron extracted electron beam depth dose distribution in air, both taking into account and ignoring the internal beam divergence.
Figures 2 and 3 show that accounting for the internal beam divergence of the electrons in the accelerator, which is typical of real machines, significantly affects the calculation results.
Figure 4 (a, b) shows the simulation data for the TPU microtron collimated extracted electron beam shape, with lead collimator channel lengths of 5 mm and 10 mm, at the collimator output, taking into account the internal beam divergence. The figure illustrates that within the collimation window the dose distribution does not change significantly, but with increasing lead collimation channel length the scattered radiation contribution is greatly reduced because of the strong absorption.
Figure 5 (a, b) shows the simulation data for the TPU microtron collimated extracted electron beam shape, with corundum-zirconia nanoceramic collimator channel lengths of 5 mm and 10 mm, at the collimator output, taking into account the internal beam divergence.
Figure 5. The TPU microtron collimated extracted electron beam shape at the corundum-zirconia nanoceramic collimator output: a - collimator channel length equal to 5 mm; b - collimator channel length equal to 10 mm.
The simulation model (figure 5) shows that the dose maximum is observed in the region where scattered and direct radiation both contribute. A dose reduction is observed in the region of direct radiation only, and the dose decreases with increasing deflection angle in the field of scattered radiation. With a corundum-zirconia nanoceramic collimator channel length of 5 mm, the dose gradient is less than 10 cm because of the increased range of the scattered radiation in the collimator material.
Figure 6 (a, b) shows the simulation data for the TPU microtron collimated extracted electron beam shape, with gypsum collimator channel lengths of 5 mm and 10 mm, at the collimator output, taking into account the internal beam divergence.
Figure 6. The TPU microtron collimated extracted electron beam shape at the gypsum collimator output: a - collimator channel length equal to 5 mm; b - collimator channel length equal to 10 mm.
Figure 6 illustrates that the shape of the electron beam at the gypsum collimator output is determined by the same parameters as for the corundum-zirconia nanoceramic collimator (figure 5). However, the dimensions of the scattered radiation area are comparatively larger, owing to the nature of the interaction of the collimator material with the electron and photon radiation.
Conclusion
In this paper, a theoretical model of the TPU microtron extracted electron beam was calculated in the simulation program "Computer Laboratory (PCLab)". The obtained results show the suitability of this program for analyzing real electron beams and for beam shape modulation using different types of collimation devices; for example, the program can be used for betatrons in radiation treatment.
The obtained results show that the corundum-zirconia nanoceramic is more efficient than gypsum and, after collimator geometry optimization, can be used for electron beam formation instead of the traditional material.
The calculated data allow the electron beam size and dose distribution at a selected distance from the output window to be estimated. The depth dose distribution allows the radiation burden along the electron beam propagation direction to be evaluated. The obtained results allow collimators to be simulated for optimizing the electron beam parameters required for a specific practical task. The next step of this research is an experimental evaluation of the simulation data.
|
2019-04-29T13:08:39.841Z
|
2015-11-06T00:00:00.000
|
{
"year": 2015,
"sha1": "1063c52b135bde41f6fa72337bab1f9ba379f88f",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1757-899X/98/1/012011/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7d8f3a7babae36937c7c26e5e2b59fd6c6db15bf",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
}
|
210835764
|
pes2o/s2orc
|
v3-fos-license
|
Impact of metabolically healthy obesity on the risk of incident gastric cancer: a population-based cohort study
Background The risk of colon or breast cancer in metabolically healthy obese (MHO) individuals is lower than that in metabolically abnormal obese (MAO) individuals. We hypothesized that the risk of incident gastric cancer in MHO is lower than that in MAO. Methods This historical cohort study included 19,685 Japanese individuals who underwent health-checkup programs from 2003 to 2016. Each subject was classified as metabolically healthy (MH) (no metabolic abnormalities) or metabolically abnormal (MA) (one or more metabolic abnormalities), according to four metabolic factors (hypertension, impaired fasting glucose, hypertriglyceridemia and low HDL-cholesterol). Obesity (O) or non-obesity (NO) was classified by a BMI cutoff of 25.0 kg/m2. Hazard ratios of the metabolic phenotypes for incident gastric cancer were calculated by the Cox proportional hazards model with adjustment for age, sex, alcohol consumption, smoking and exercise. Results Over the median follow-up period of 5.5 (2.9–9.4) years, the incidence rate of gastric cancer was 0.65 per 1000 person-years. The incidence rates for MHNO, MHO, MANO and MAO were 0.33, 0.25, 0.80 and 1.21 per 1000 person-years, respectively. Compared with MHNO, the adjusted hazard ratios for development of gastric cancer were 0.69 (95% CI 0.04–3.39, p = 0.723) for MHO, 1.16 (95% CI 0.63–2.12, p = 0.636) for MANO and 2.09 (95% CI 1.10–3.97, p = 0.024) for MAO. Conclusions This study shows that individuals with MAO, but not those with MHO, had an elevated risk of incident gastric cancer. Thus, we should focus more on the presence of metabolic abnormalities than on obesity itself with respect to incident gastric cancer.
Background
Gastric cancer is a major global health concern: it was the third leading cause of cancer death worldwide in 2012 [1] and the third leading cause of cancer death in Japan in 2016 [2]. Previous meta-analyses showed that obesity is a risk factor for incident gastric cancer, especially gastric cardia cancer [3], although an umbrella review revealed that the effect of obesity on gastric cancer is smaller than that on other obesity-related cancers, such as colon and breast cancers [4].
On the other hand, obesity is also a known risk factor for type 2 diabetes mellitus (T2DM) [5], chronic kidney disease (CKD) [6] and cardiovascular disease (CVD) [7]. The subgroup of individuals with metabolically healthy obesity (MHO), i.e., obesity without metabolic abnormalities, is known to be at lower risk of T2DM, CKD and CVD than individuals with metabolically abnormal obesity [8-11]. However, these studies also revealed that individuals with the MHO phenotype were at higher risk of T2DM, CKD and CVD than individuals with metabolically healthy non-obesity [8,10,11]. In addition, there is accumulating evidence that metabolically abnormal obesity (MAO), but not MHO, confers an elevated risk of incident colon cancer [12] and breast cancer [13]. The association between gastric cancer and obesity in the Japanese population is controversial [14,15], and these studies did not consider the presence of metabolic abnormalities. In contrast, an association between metabolic syndrome and the incidence of gastric cancer has been reported [16-19]. Thus, we considered that not obesity itself, but the presence of metabolic abnormalities, which often accompany obesity, is what matters for gastric cancer.
To our knowledge, however, no previous studies have clarified the relation between MHO and incident gastric cancer. Thus, the aim of this study was to elucidate the impact of MHO on incident gastric cancer.
Study population
This was a historical cohort study of participants who underwent a medical health-checkup at Asahi University Hospital (the NAGALA (NAfld in the Gifu Area, Longitudinal Analysis) study, Gifu, Japan) [20]. The purpose of the medical health-checkup was to promote public health through early detection of chronic diseases and their risk factors; about 60-70% of examinees underwent the examinations repeatedly, so the participants represent apparently healthy individuals. Most of the participants were employees of various companies and local governmental organizations in Gifu, Japan, and their spouses. The medical data of all individuals who agreed to participate in the study were stored in a database after removal of all personally identifiable information. For the current study, we used the results of individuals who participated in the health-checkup program for at least one year between 2003 and 2016. The exclusion criteria were as follows: the presence of gastric cancer at the baseline examination, missing covariate data (body weight, high-density lipoprotein (HDL) cholesterol, and lifestyle factors) and no follow-up health-checkup program. Informed consent was obtained from each participant. The study was approved by the ethics committee of Murakami Memorial Hospital and was conducted in accordance with the Declaration of Helsinki.
Data collection
A self-administered questionnaire was used to gather the medical history and lifestyle factors of the participants [20]. Regarding alcohol consumption, participants were asked the type and amount of alcoholic beverages consumed per week during the past month, and the mean ethanol intake per week was then estimated [21]. For smoking status, the participants were categorized into three groups: never-, ex- and current smokers. In addition, smoking burden was evaluated in pack-years, calculated by multiplying the number of cigarette packs smoked per day by the number of years of smoking [22]. For exercise, participants were asked to describe the type, duration and frequency of sports or recreational activities [23]. Based on the results, we defined regular exercisers as participants who performed any kind of sports activity at least once a week on a regular basis [21]. Body mass index (BMI) (kg/m²) was calculated as body weight (kg) divided by height (m) squared. Waist circumference was measured as the abdominal circumference at the navel. Fasting plasma glucose, triglycerides and HDL cholesterol were measured in venous blood after an overnight fast. The methods for detecting and diagnosing gastrointestinal cancers were described previously [24]. Because the first standardized questionnaires for gastrointestinal cancers were sent on Jan 1st, 2003, we set the study period as Jan 1st, 2003 to Dec 31st, 2016. The primary endpoint of this study was the hazard ratio (HR) of MHO for gastric cancer.
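The two derived variables above are simple arithmetic; a minimal sketch (the function names are illustrative, not taken from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Smoking burden: packs smoked per day times years of smoking."""
    return packs_per_day * years_smoked

assert round(bmi(70.0, 1.75), 1) == 22.9
assert pack_years(1.5, 20.0) == 30.0
```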
Definitions of metabolic phenotypes
We used a body mass index > 25.0 kg/m² to identify individuals with obesity. This value has been proposed as a cutoff for the diagnosis of obesity in Asian people [25] and has often been used in Japan [26,27]. Four metabolic factors (fasting plasma glucose, triglycerides, HDL cholesterol and blood pressure) were used to divide participants into metabolically healthy and metabolically abnormal subgroups [9]. Impaired fasting plasma glucose and/or diabetes was defined as fasting plasma glucose > 5.6 mmol/L and/or current medical treatment. Hypertension was defined as systolic blood pressure > 130 mmHg and/or diastolic blood pressure > 85 mmHg or current medical treatment. Elevated triglycerides were defined as triglycerides > 1.7 mmol/L or treatment for hyperlipidemia. Low HDL-cholesterol was defined as < 1.0 mmol/L in men and < 1.3 mmol/L in women. When none of these four metabolic factors was present, we defined the participant as metabolically healthy (MH); when one or more was present, we defined the participant as metabolically abnormal (MA) [28]. Participants were then categorized at the baseline examination into four phenotypes: metabolically healthy non-obesity (MHNO), metabolically healthy obesity (MHO), metabolically abnormal non-obesity (MANO), and metabolically abnormal obesity (MAO).
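The phenotype assignment can be sketched as a small rule-based function. The thresholds follow the text (BMI cutoff 25.0 kg/m² and the four metabolic factors); the field names are hypothetical, and the per-factor medication criteria are collapsed into a single treatment flag for brevity.

```python
def metabolic_phenotype(bmi, fpg, sbp, dbp, tg, hdl, male, on_treatment=False):
    """Assign MHNO/MHO/MANO/MAO from the cutoffs given in the text.

    Units: fpg/tg/hdl in mmol/L, blood pressure in mmHg, BMI in kg/m^2.
    The study applied treatment criteria per factor; here a single
    `on_treatment` flag stands in for all of them.
    """
    metabolically_abnormal = (
        fpg > 5.6                        # impaired fasting glucose / diabetes
        or sbp > 130 or dbp > 85         # hypertension
        or tg > 1.7                      # elevated triglycerides
        or hdl < (1.0 if male else 1.3)  # low HDL-cholesterol
        or on_treatment
    )
    obese = bmi > 25.0
    return {(False, False): "MHNO", (False, True): "MHO",
            (True, False): "MANO", (True, True): "MAO"}[
        (metabolically_abnormal, obese)]

assert metabolic_phenotype(27.0, 5.2, 118, 72, 1.2, 1.5, male=True) == "MHO"
```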
Statistical analysis
The study participants were divided into four groups based on metabolic phenotype. Continuous variables are expressed as means ± standard deviation or medians (interquartile range), and categorical variables are expressed as numbers. The baseline clinical characteristics of the four groups were compared: continuous variables were evaluated by one-way ANOVA with Tukey's Honestly Significant Difference Test, or by the Kruskal-Wallis Test with the Steel-Dwass Test, and categorical variables were evaluated by Pearson's Chi-Squared Test. Because of censored cases and inconsistent follow-up durations, we used the Cox Proportional Hazards Model to calculate the HRs of the four groups. We considered five potential confounders as covariates: age, sex, alcohol consumption [29], pack-years [30], and exercise [31]. Because alcohol consumption and pack-years were skewed variables, logarithmic transformation was carried out before performing the Cox Proportional Hazards Model analysis.
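As an illustration of this modeling step (the authors used JMP), a sketch with the lifelines library; the column names, the dummy-coded phenotype variables, and the use of log1p to keep zero intake finite are assumptions made here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("nagala_cohort.csv")  # hypothetical file and column names

# Skewed exposures are log-transformed before fitting; log1p is used so that
# zero alcohol intake / zero pack-years stay finite.
df["log_alcohol"] = np.log1p(df["alcohol_g_per_week"])
df["log_pack_years"] = np.log1p(df["pack_years"])

model_cols = ["followup_years", "gastric_cancer",   # duration and event
              "age", "male", "regular_exercise",
              "log_alcohol", "log_pack_years",
              "mho", "mano", "mao"]                 # MHNO = reference

cph = CoxPHFitter()
cph.fit(df[model_cols], duration_col="followup_years",
        event_col="gastric_cancer")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```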
Furthermore, we used the Cox Proportional Hazards Model to calculate the HR of each metabolic abnormality (hypertension, impaired fasting glucose, hypertriglyceridemia and low HDL-cholesterol).
The statistical analyses were performed using JMP version 13.2 software (SAS Institute Inc., Cary, NC). A p value < 0.05 was considered statistically significant.
Results
We included 27,944 participants from the NAGALA database (Fig. 1). Among them, 8,259 participants were excluded; thus, 19,685 participants were eligible for this cohort study. The baseline characteristics of the participants are shown in Table 1. The average age and BMI of the study participants were 45.5 ± 9.5 years and 22.6 ± 3.3 kg/m², and 59.9% (11,782) were men. In addition, both BMI and the metabolic parameters, including blood pressure, fasting plasma glucose, triglycerides and HDL cholesterol, differed among the four metabolic phenotype groups.
The results of the Cox proportional hazard model are shown in Table 2 and Additional file 1: Table S1. Compared with the MHNO phenotype, the MAO phenotype (adjusted HR 2.09, 95%CI 1.10-3.97, p = 0.024) was associated with a higher risk for development of gastric cancer after adjusting for covariates, whereas the MHO phenotype (adjusted HR 0.69, 95%CI 0.04-3.39, p = 0.723) was not.
Furthermore, the presence of impaired fasting plasma glucose and/or diabetes, hypertension, or elevated triglycerides was associated with incident gastric cancer (Table 3).
Discussion
This cohort study of apparently healthy Japanese people is the first to examine the association between MHO and incident gastric cancer. It shows that individuals with MAO, but not those with MHO, had an elevated risk of incident gastric cancer. In addition, the presence of impaired fasting plasma glucose and/or diabetes, and of hypertension, was associated with an elevated risk of incident gastric cancer.
Obesity is a risk factor for incident gastric cancer [3], although its effect on gastric cancer is smaller than on other obesity-related cancers. Previous studies revealed that the risk of incident colorectal cancer [12] and incident breast cancer [13], both of which have been shown to be related to obesity [4], was not elevated in subjects with MHO. In addition, another study revealed that the risk of obesity-related cancer in MHO was lower than that in MAO [32]. As to why MAO, but not MHO, was associated with a higher risk of incident gastric cancer, there are several possible explanations. It has been reported that metabolic syndrome is associated with gastric cancer [16-19]. In this study, we showed that the presence of metabolic abnormalities, especially impaired fasting plasma glucose and/or diabetes and hypertension, was associated with gastric cancer, in line with previous studies [33,34]. Inflammation, as represented by elevation of the pro-inflammatory cytokines tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), and monocyte chemoattractant protein-1 (MCP-1), is known to be closely associated not only with obesity [35] but also with metabolic abnormalities, including impaired fasting plasma glucose and hypertension [36]. Inflammation leads to the development of gastric cancer by stimulating proliferation and inhibiting apoptosis of human gastric cancer cells [37]. Formation of reactive oxygen species (ROS), through the formation of advanced glycation end products [38], leads to DNA damage and the development of gastric cancer. In addition, tumor cell progression is stimulated by enhancement of mTOR signaling pathways through an increase in insulin-like growth factor 1 (IGF-1) [39]. On the other hand, it has been reported that the levels of inflammation and IGF-1 in MHO are lower than those in MAO [40,41]. Collectively, these findings could explain why the MAO phenotype, but not the MHO phenotype, was associated with a higher risk of incident gastric cancer. Some limitations of our study should be noted. First, there was a possibility of selection bias, because we only included participants who were re-examined in the health-checkup program; there may be characteristic differences between participants who were re-examined and those who were not. Second, we did not have data on H. pylori infection, which is known to be a risk factor for gastric cancer [42]. In fact, many Japanese, especially elderly people, are infected with H. pylori [43]; therefore, the results of this study might have been affected by H. pylori infection status. Third, we did not have detailed data on gastric cancer by anatomic location of the lesion, such as gastric non-cardia cancer and gastric cardia cancer. A previous study revealed that gastric cardia cancer shows a greater association with obesity than non-cardia cancer [1]. Lastly, the generalizability of our study to non-Japanese populations is uncertain.
Conclusion
In conclusion, our study showed that MAO individuals, but not MHO individuals, had a higher risk of incident gastric cancer. Thus, to prevent future gastric cancer, we should focus more on the presence of metabolic abnormalities than on obesity itself.
Additional file 1: Table S1. Hazard ratio of potential confounders for incident gastric cancer.
Convenience stores: an obesogenic promoter in a metropolitan area of northern Mexico?
Introduction The prevalence of obesity in the Mexican school-age (5–11 years old) population increased from 8.9 to 18.1% between 1999 and 2022. Although overweight and obesity (OW + Ob) is a complex and multifactorial phenomenon, changes in eating patterns, driven by obesogenic environments that promote higher energy intake, have been documented alongside this increasing trend. The objective of the present study was to detect possible associations between school-level OW + Ob prevalence and the proximity to and density of convenience stores in Monterrey, Mexico, from 2015 to 2018. Materials and methods Anthropometric data were obtained from a subset of measurements of the National Registry of Weight and Height (RNPT) performed in the Monterrey, Mexico metropolitan area in 2015 and 2018, and OW + Ob prevalence was computed and classified into quintiles at the school level. Convenience store data were obtained from the National Directory of Economic Units (DNUE). The analyses consisted of store densities within 400 m and 800 m buffers, distances to the nearest stores, and cartographic visualization of the stores' kernel density versus OW + Ob hotspots for both periods. Results In total, 1,552 elementary schools were included, with 175,804 children presenting OW + Ob in 2015 and 175,964 in 2018; during this period, OW + Ob prevalence increased from 38.7 to 39.3%, and a directly proportional relationship was found between the higher OW + Ob prevalence quintiles and the number of stores for both radii. OW + Ob hotspots increased from 63 to 91 between 2015 and 2018, and it was visually confirmed that such spots were associated with areas of higher convenience store density, regardless of socioeconomic conditions. Conclusion Although some relationships between store proximity/density and OW + Ob could be identified, more research is needed to gather evidence on this. Nevertheless, given the trends and the magnitude of the problem, guidelines aimed at limiting or reducing the availability of junk food and sweetened beverages on school peripheries must be implemented to control the obesogenic environment.
Introduction
Overweight and obesity (OW + Ob) is among the most challenging and urgent problems worldwide. In the last decade, no country has controlled or lowered its OW + Ob prevalence, and the greatest increases are reported in lower-income countries. On a worldwide scale, the World Obesity Federation Atlas projects that between 2020 and 2025 the prevalence of obesity will rise from 10 to 14% in boys and from 8 to 10% in girls aged 5–19 years (1).
In Mexico, the OW + Ob problem was recognized in 2016 and ratified in 2018 as a national public health emergency (2). In the school-age population (5–11 years old), the National Nutrition Survey (ENSANUT) reported an alarming increase between 1999 and 2022, rising from 17.2 to 19.2% for overweight and from 8.3 to 18.1% for obesity (3,4).
This rising trend in OW + Ob has occurred alongside changes in eating patterns; Popkin et al. documented a shift from home-prepared to processed and packaged foods (5). This is not unique to Mexico: in other countries, such as Brazil, the increase in ultra-processed (junk) foods over the last two decades has also been documented (6).
The existing literature demonstrates a link between junk food consumption and OW + Ob as a "cause-effect" relationship. One such study was developed by PAHO in 2015, in which the authors analyzed data from 14 countries and found a significant relationship between per-capita sales of these products and OW + Ob levels; in their conclusions, they encouraged countries to reduce the consumption of ultra-processed food because of its negative impacts on population nutrition (7). Longitudinal (cohort) studies (8) and literature reviews (9) have confirmed this relationship. Recently, a study conducted in eight countries (including Mexico) documented that increased consumption of ultra-processed foods was associated with higher energy and free-sugar intake, and stated that this association constitutes a potential determinant of obesity in children and adolescents (10).
Numerous studies have examined the obesogenic environment from a spatial perspective. Some, conducted in Latin American countries such as Peru and El Salvador (11,12), are merely descriptive and aim to characterize the spatial distribution of OW + Ob. Research seeking spatial interactions between junk food offerings and OW + Ob has mainly been conducted in developed countries; one study in the United Kingdom in 2012 established that areas with greater access to fast food stores also had greater OW + Ob prevalence (13). Similar studies in the USA, the Netherlands, Germany, Canada, Macao, and New Zealand have found spatial interactions between food sources and OW + Ob (14)(15)(16)(17)(18).
In Mexico, recent studies have approached this issue. One cross-sectional analysis estimated the indirect association between food store density and OW + Ob among Mexican adolescents, using sugar-sweetened beverage (SSB) consumption as a mediator; store density was directly associated with SSB consumption but not indirectly associated with OW + Ob via SSB (19). Another study analyzed changes in the retail food environment in Mexican municipalities from 2010 to 2020 and assessed whether these trends were modified by socioeconomic deprivation, concluding that there has been a substantial expansion and rapid change in Mexico's food environment, driven mainly by the rise of convenience stores and supermarkets in the most deprived and least urbanized areas (20).
The present study was designed to measure the impact of convenience stores on OW + Ob prevalence in children. This is relevant because some authors have documented that, although the home is still where a large share of calories is consumed, a significant proportion of calories (almost one-third) comes from eating episodes outside the house (21,22). A spatial approach using GIS techniques was proposed, considering that the periphery of schools could be a place where children acquire a significant proportion of their calories.
The objective of the present study was to assess OW + Ob rates among children attending schools in Monterrey, Mexico, between 2015 and 2018, and to detect possible associations between school-level OW + Ob prevalence and the proximity and density of convenience stores.
Materials and methods
The present work is an ecological study whose population was children (6 to 12 years old) from Monterrey, Mexico. The variable of interest was OW + Ob prevalence at the school level and its relation to the spatial attributes (proximity and density) of convenience stores as promoters of obesogenic environments. With the school as the unit of analysis, the study is longitudinal (2015–2018), since anthropometric data are available in both periods for the 1,552 schools.
Data sources
The study area encompasses the Monterrey Metropolitan Area, situated in the northern region of Mexico (25.67°N, 100.308°W), approximately 224 km southwest of the United States border at Laredo, Texas. As of the 2020 census, this region comprises 16 municipalities, analogous to county-level administrative units, with a total population of 5.3 million inhabitants. The economic landscape of the area is predominantly shaped by industrial activities, with particular emphasis on key sectors such as beer, steel, and concrete production and the manufacture of cars and machinery.
Anthropometric data were obtained from the National Registry of Weight and Height (RNPT in Spanish) (23), a strategy implemented in Mexico to evaluate the nutritional status of children in elementary schools; its main objective is to identify nutritional disorders such as malnutrition, overweight, and obesity. This initiative is jointly managed by the Health and Education Ministries (SSA and SEP in Spanish) and the National and State Systems for Integral Family Development (DIF), and is technically overseen by the National Nutrition and Medical Sciences Institute (INCMNSZ). The collaboration includes periodic visits to schools by trained personnel to conduct anthropometric measurements (24,25). From these data we drew a subset of 1,552 elementary schools located in the urban areas of the municipalities (a county-like territorial division) that conform to the Metropolitan Area of Monterrey, as defined by the National Statistics, Geography and Informatics Institute (INEGI) in its Metropolitan Areas Catalog (26). This dataset provided an OW + Ob prevalence for each school, based on 454,217 children measured in 2015 and 447,792 in 2018. Nutritional status was defined using the WHO BMI-for-age indicator, classifying individuals with a z-score greater than +1 SD as overweight and those with a z-score greater than +2 SD as obese (27). To include socioeconomic data on the schools, a social exclusion index (SEI) built from the 2020 National Population and Household Census (28) was included, using the smallest geographic unit (suburb) to obtain the maximal resolution for characterizing the schools.
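As a small illustration of the WHO cut-offs quoted above, a minimal classification function is sketched below. The BMI-for-age z-scores themselves would come from a WHO growth-reference implementation, which is not shown here; the function and its threshold logic are only an illustrative restatement of the definition used in this study.

```python
# Minimal sketch: classify a child's nutritional status from a
# WHO BMI-for-age z-score (overweight: z > +1 SD; obesity: z > +2 SD),
# matching the definition used in this study. Computing the z-score
# itself requires the WHO growth reference and is not shown.
def classify_bmi_for_age(z: float) -> str:
    if z > 2:
        return "obesity"
    if z > 1:
        return "overweight"
    return "not overweight"

# School-level OW + Ob prevalence is then the share of children with z > +1
def ow_ob_prevalence(z_scores: list[float]) -> float:
    flagged = sum(1 for z in z_scores if z > 1)
    return 100.0 * flagged / len(z_scores)

assert classify_bmi_for_age(2.3) == "obesity"
assert classify_bmi_for_age(1.4) == "overweight"
```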
Convenience store data were extracted from the National Directory of Economic Units (DNUE) of INEGI (29), which includes information on all businesses across the country, such as type of activity, company name, number of employees, opening date, and, most important for the purposes of the present study, latitude and longitude. The final store dataset was constructed by filtering the DNUE database for the following criteria: "Convenience stores" as the economic activity code, "OXXO" or "7-11" as the company name, and, as in the case of the schools, being located in the urban areas of the Metropolitan Area of Monterrey. In 2015 and 2018, 1,394 and 1,979 convenience stores were located, respectively (585 new stores opened in the metropolitan area during this period).
Spatial and statistical analyses
All previously described data were clipped to the suburb polygon layer that included the SEI; quintiles for OW + Ob and for the index were recoded into categorical variables using QGIS 3.28.2 and SPSS 25, respectively (30,31). Buffers (influence areas) of 400 m and 800 m were constructed around every school using QGIS; these distances represent, in round numbers, walking times of 5 and 10 min, respectively, at an average pedestrian speed of 1.25 m/s (32).
As the geographic layers were not locally projected, we used a Python equidistant-buffer plugin to generate buffer polygons with a precision of ±2 m (33). Once the polygons were generated, we used the QGIS "count points in polygon" geoprocessing tool to determine the number of convenience stores within 400 m and 800 m radii of each school, which allowed us to estimate the density of such stores around the school-age population. Additionally, to establish ease of access to the stores from the schools, we computed the Euclidean distance from every school to its nearest store using the "distance to nearest hub" geoprocessing tool. Once the spatial variables were calculated, cross-tabulations and scatter plots were used to analyze the interaction between store density and ease of access, on the one hand, and OW + Ob prevalence on the other.
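For readers who want to reproduce this step outside QGIS, the following is a minimal GeoPandas sketch of the buffer counts and nearest-store distances. The file names, column names, and the projected CRS (EPSG:6372, a Lambert conformal conic projection for Mexico) are illustrative assumptions, not the authors' actual workflow.

```python
# Sketch of the 400/800 m store counts and nearest-store distances,
# assuming point layers "schools.gpkg" and "stores.gpkg" (file names
# and CRS choice are assumptions for illustration).
import geopandas as gpd

schools = gpd.read_file("schools.gpkg")  # one point per school
stores = gpd.read_file("stores.gpkg")    # one point per convenience store

# Re-project to a metric CRS so buffer radii are true meters
schools_m = schools.to_crs(epsg=6372)
stores_m = stores.to_crs(epsg=6372)

for radius in (400, 800):
    buffers = schools_m.geometry.buffer(radius)
    # Count the stores falling inside each school's buffer
    schools_m[f"stores_{radius}m"] = [
        int(stores_m.within(buf).sum()) for buf in buffers
    ]

# Euclidean distance from each school to its nearest store
schools_m["nearest_store_m"] = schools_m.geometry.apply(
    lambda pt: stores_m.distance(pt).min()
)
```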
Additionally, to evaluate the impact of SEI characteristics on OW + Ob, a bootstrap analysis of 1,000 samples was performed to calculate 95% confidence intervals for OW + Ob prevalence in every SEI quintile and to determine whether differences could be due to socioeconomic status rather than the proximity and density of convenience stores. The bootstrap analysis was performed using SPSS v. 25.0, which uses the computer's calculation power to create a large number of subsamples (1,000 in this case) of the actual data and calculate a standard error, thus obtaining a confidence interval for the variable of interest (OW + Ob prevalence) (34).
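The paper ran the bootstrap in SPSS, which reports standard-error-based intervals; the sketch below shows the closely related percentile bootstrap in Python, with invented prevalence values, purely to make the resampling logic concrete.

```python
# Percentile-bootstrap 95% CI for mean OW+Ob prevalence within one SEI
# quintile; `prev` holds per-school prevalences (values are invented).
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(prev, n_boot=1000, alpha=0.05):
    means = np.array([
        rng.choice(prev, size=prev.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return prev.mean(), lo, hi

prev = np.array([38.2, 41.5, 36.9, 44.0, 39.7, 37.8, 42.3])
mean, lo, hi = bootstrap_ci(prev)
print(f"mean = {mean:.1f}%, 95% CI = ({lo:.1f}, {hi:.1f})")
# Overlapping CIs across quintiles, as in Figure 1, would indicate no
# statistically significant difference between SEI groups.
```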
Finally, the last analysis consisted of generating a raster kernel density estimation (KDE) heatmap of the stores in 2015 and 2018, using the QGIS KDE geoprocessing tool with an Epanechnikov kernel shape, which is the most efficient and suitable for our data (35), and a bandwidth of 0.24° (degrees), defined by Scott's rule for bandwidth selection (36) calculated on the 2015 store data. The rasters corresponding to store density for the two years were clipped to the metropolitan area of Monterrey and represented cartographically with a pseudo-band color ramp. Along with this process, a Getis-Ord Gi* hotspot analysis was performed in ArcGIS Desktop 3.0.3 (37) with OW + Ob prevalence as the input field. This spatial statistical procedure identifies, at a given significance level (90, 95, or 99%), those schools with a prevalence remarkably greater than that of their neighbors (38); the resulting layer was also included in the map along with store density to visualize the relationship between them. This was performed for both 2015 and 2018, so changes over time can be observed when comparing the two maps.
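As an illustration of the KDE step, the sketch below substitutes scikit-learn's Epanechnikov kernel and a hand-computed Scott's-rule bandwidth for the QGIS tool; the store coordinates are synthetic, and the Getis-Ord Gi* hotspot step is not reproduced here.

```python
# Kernel density surface for store locations with an Epanechnikov kernel
# and Scott's-rule bandwidth (synthetic coordinates; illustrative only).
import numpy as np
from sklearn.neighbors import KernelDensity

def scott_bandwidth(xy):
    """Scott's rule for a d-dimensional sample: sigma * n**(-1/(d+4))."""
    n, d = xy.shape
    return xy.std(axis=0).mean() * n ** (-1.0 / (d + 4))

# Synthetic store coordinates (lat, lon in degrees) around Monterrey
xy = np.random.default_rng(0).normal([25.67, -100.31], 0.1, size=(1394, 2))

kde = KernelDensity(kernel="epanechnikov", bandwidth=scott_bandwidth(xy))
kde.fit(xy)

# Evaluate the density on a regular grid for a raster-style heatmap
gx, gy = np.meshgrid(np.linspace(25.4, 25.9, 200),
                     np.linspace(-100.6, -100.0, 200))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(gx.shape)
```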
Overweight and obesity behaviors in schools
The number of children with OW + Ob in the weight and height registry was 175,804 in 2015 and 175,964 in 2018, corresponding to prevalences of 38.7 and 39.3%, respectively; this also shifts the Q4 and Q5 quintile boundaries by year. The results are presented in Table 1.
Social exclusion index and its impact on OW + Ob
To assess whether variations in socioeconomic status alone could account for differences in OW + Ob prevalence among schools, a bootstrap analysis of mean prevalence and 95% confidence intervals (CI) was conducted by social exclusion index (SEI) quintile. The results, depicted in Figure 1, indicate that although Q4 consistently exhibited a higher prevalence in both years, and the overall prevalence increased from the first to the second year of the study, no statistically significant differences were evident between the quintiles in 2015 or 2018 (the CIs overlap). The number of convenience stores registered in the National Directory of Economic Units (DNUE) within the study area grew from 1,394 to 1,979 over the 2015–2018 period, a 42% relative increase. Another way to describe this growth is that, from 2015 to 2018, a new store opened every 2.5 days in the Monterrey Metropolitan Area.
The number of stores and their impact on OW + Ob
Table 2 shows the mean and SD of the number of convenience stores by OW + Ob quintile for the two years of the present study. The first clear finding is that, regardless of store density, OW + Ob in Q4 and Q5 increased between 2015 and 2018, with greater lower-limit values in the more recent period (40.6 to 41.0 and 44.8 to 46.6, respectively); in other words, for the same proportion of schools in the Q4 and Q5 quintiles, the actual prevalence is greater. Another interesting finding is that, between 2015 and 2018, the mean number of stores increased in all quintiles and for both radii. Since schools are spatially fixed, this necessarily indicates that a significant number of stores opened in the schools' proximity during this time. The greatest increment was within the 400 m radius for Q1 schools, which rose from 0.82 to 1.27 stores (a 54% relative increment).
The relationship between the OW + Ob quintile and the number of stores within the two radii is shown in Figure 2A, which illustrates two trends: the number of stores increases with the OW + Ob quintile, and, as shown in Table 2, both the OW + Ob quintiles and the store numbers increased between 2015 and 2018. Figure 2B shows an example of a school with 14 convenience stores within its 400 m radius and a remarkably high 46 within its 800 m radius.
The results show that, in all cases, a greater number of stores was present in the higher OW + Ob quintiles; moreover, a consistent trend was found as the quintile increased (the "perfect ladder" view in Figure 2). It is quite concerning that, by 2018, the children of the schools with the highest OW + Ob prevalence could reach more than seven stores by walking for 10 min, and at least two stores by walking for only 5 min. Another finding of this analysis was that over the 3 years, all schools, regardless of their OW + Ob status, faced an increase in the density of convenience stores on their periphery.
Distance to the nearest convenience store and its impact on OW + Ob
Regarding the influence of convenience store proximity on OW + Ob prevalence, Figure 3 shows a decreasing trend in the mean distance to the nearest store as prevalence increases, for both 2015 and 2018; in other words, schools with greater OW + Ob have, on average, a nearer convenience store than those with lower prevalences (by 182 m in 2015 and 120 m in 2018). It is also clear that between 2015 and 2018, for all schools, regardless of OW + Ob level, stores were actually nearer.
In this sense, we also found that by 2018 the average distance to the nearest store for the schools with the highest prevalence was only 338 m, but even for those with lower prevalence the average distance was 458 m, indicating that in all cases convenience stores and their food were near the schools. This analysis also confirmed the store-growth trend found in the density computations, as seen in Figure 3, in which the 2018 line lies well below the 2015 one, indicating that convenience stores and their junk food offerings moved closer to the schools over the period.

Another way to visualize the relationship between OW + Ob and distance to the nearest store is to plot the schools as data points, using the school-store distance as the X-axis and the OW + Ob prevalence as the Y-axis, and fit a linear regression to the dataset. This is shown for both years in Figure 4, where the negative slope of the linear fit, confirmed by the negative β (beta) value, indicates a trend toward lower OW + Ob prevalence as a school's nearest store lies at a greater distance. Viewing the same data not aggregated in quintiles but as continuous points (Figure 4) confirms this behavior: inspection of the plot and the negative regression coefficients not only confirms but establishes, with statistical significance, the inverse relationship between distance to the nearest store and OW + Ob prevalence.
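The linear fit described above can be reproduced in spirit with scipy.stats.linregress; the arrays below are synthetic placeholders, since the per-school data are not published with the article.

```python
# Distance-to-nearest-store vs. OW+Ob prevalence linear fit
# (synthetic data standing in for the 1,552 schools).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
dist_m = rng.uniform(50, 6000, size=1552)            # nearest-store distance
prev_pct = 42 - 0.001 * dist_m + rng.normal(0, 6, size=dist_m.size)

fit = linregress(dist_m, prev_pct)
print(f"beta = {fit.slope:.5f} %/m, p = {fit.pvalue:.3g}, "
      f"R^2 = {fit.rvalue ** 2:.3f}")
# A negative slope (beta < 0) corresponds to the reported inverse
# relationship: prevalence falls as the nearest store gets farther away.
```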
Cartographical visualization of OW + Ob and store density
Figure 5 shows two maps of the Metropolitan Area of Monterrey for 2015 (Figure 5A) and 2018 (Figure 5B), in which a background raster layer corresponding to the kernel density of convenience stores is presented with a white-red color ramp, where more intense red represents areas with a higher density of stores. Another information layer, presented as size-scaled points, shows the schools detected by the Getis-Ord Gi* algorithm as OW + Ob hotspots; in other words, schools with a much higher prevalence of OW + Ob than their neighbors. Point size indicates the confidence level of the categorization as a hotspot (90, 95, or 99%).
Looking at the maps side by side, it is clear that (regardless of the OW + Ob component) store density grew from 2015 to 2018: Figure 5B shows a more intense red background in the central area of the city and a slight increase in the periphery compared with Figure 5A, a visual confirmation of the findings presented in Table 2. Regarding the hotspots, there are 63 in Figure 5A, whereas in Figure 5B this number increases to 91, with an important proportion of hotspots at 95–99% significance. It is also important to point out that in the central area of the city, where store density is greatest, an important number of OW + Ob hotspots appeared over the 3 years. Overall, the maps display a general concordance between the denser store areas and the OW + Ob hotspots for both years, more evident in the most recent period.
The store density and OW + Ob Getis-Ord Gi* hotspot maps allow visualization not only of the spatial distribution of the two variables but also of their changes over time. Focusing on store density, stores are clearly more abundant in the center of the metropolitan area, and they increased in number between 2015 and 2018, both in the center and toward the periphery. The maps also show that in the areas of higher store density in the center and north there are more schools with high OW + Ob, and that the presence and significance levels of the hotspots increased between 2015 and 2018. They also show, more subtly, that some new hotspots appeared in the periphery and that some increased in significance level.
Discussion and conclusions
Monterrey, Mexico, is a metropolitan area belonging to the state of Nuevo León, where the 2022 National Continuous Nutrition Survey found an OW + Ob prevalence of 34.2% (39), versus a national figure of 37.1% (4). These data are strongly consistent with the prevalences we found in 2015 and 2018 (Table 1). The region, especially the city, is highly industrialized, and its cultural and social patterns are strongly influenced by its proximity to the United States. One result of such influence can be found in current eating behaviors, which have shifted from traditional food to fast food and energy-dense food (40). This is also where the main chain of convenience stores (OXXO) opened its first business more than 46 years ago (41,42). This metropolitan area is also the second most populated in the country (43), and all of these factors make it a suitable place for the analyses of the present work.

It is well known that OW + Ob is a complex phenomenon that cannot be reduced to one or only a few factors. However, one of the main questions that could be addressed in this study is whether the stores themselves are an influencing factor for OW + Ob development, or whether the mere presence of stores in some places reflects the intrinsic economic characteristics of regions inside the metropolitan area, and whether such characteristics finally determine, or at least have an impact on, OW + Ob. To control for this issue, as the first step of the study we calculated the distribution of OW + Ob across an SEI, finding no significant differences between the groups; this led us to discard the idea that, inside the Monterrey metropolitan area, the problem of OW + Ob in children is determined mainly by socioeconomic characteristics.
Approaching the research question from a spatial perspective, the first store-related factor studied was store density around the schools within 400 m and 800 m radii, corresponding to walking times of 5 and 10 min, respectively (32). This was important because we wanted to determine the number of junk/fast food outlets available within those two radii around the schools; in other words, we considered these variables a measure of the magnitude of this kind of food availability for all children attending the schools.
The other spatial property studied in the school-store interaction was nearness, which can be interpreted as children's ease of access to junk/fast food; these results are consistent with those for density. The use of Euclidean distance constitutes a first approach; more sophisticated methods, such as distances over street networks (service areas), could be used in future research. When viewing both maps side by side, it seems that store presence and the obesity hotspots spread over time, increasing in the center and expanding toward the edges of the metropolitan area. Again, these analyses correspond to a first approach, but they could be enhanced in many ways: updating the data (schools and stores) to more recent sets to confirm the cartographic findings, performing other analyses (e.g., map algebra) on the density rasters, and perhaps replicating these methods in other regions of the country. Although we cannot state that store density, distance, and location are directly responsible for the OW + Ob condition of children attending school, we found interesting and consistent facts and trends that may point in this direction. Although there are few spatial analyses of this issue in Mexico, our results are consistent with those found by Zavala et al. in 2021 (44), but at a much larger scale (a metropolitan area) and over a lapse of time, and they are also consistent with numerous studies of urban areas in developed countries (45,46).
According to a systematic review by Matsusaki et al. (47), a positive association exists between the nearness of fast-food outlets to schools and OW + Ob prevalence in children from Latin, Anglo-Saxon, and Afro-American ethnic groups on different continents; this was observed at all school levels, but the authors remarked that it could be more important in younger individuals.
Hughey and collaborators (48) proposed a kernel density estimation methodology for an adolescent population to group the obesogenic-environment components in the proximity of neighborhoods, such as processed-food outlets and fast-food restaurants, on the one hand, and positive elements such as parks and green/recreational areas used for physical activity on the other. The authors consider all of these places relevant to the research because of the large amount of time individuals spend in them.
In a similar study, Buszkiewicz et al. (49) established a relationship between the obesogenic environment and OW + Ob according to place of residence. Their main findings agree with our work in showing that OW + Ob prevalence increases as the distance between households and junk-food outlets decreases. These results reinforce the idea that food-environment variables have an impact on weight gain.
The strengths of this study include its use of a large amount of (census) data on a homogeneous population (school-attending children), which results in robust OW + Ob figures, along with consistent and complete spatial information about convenience stores. This, in the context of a metropolitan area without the enormous economic and social inequities present in other regions, allowed some of the "noise" to be removed and facilitated a focus on the impact of the stores on OW + Ob. On the other hand, the study's weakness lies in the multifactorial complexity of the problem, which cannot be reduced only to junk food consumption and, thus, to the stores' geographic locations. Another limitation could lie in the simplicity of some of the methods used (buffers and Euclidean distance), which could certainly be improved in future research. Although it is not possible to confirm that all junk food and sweetened beverages come from convenience stores, it remains a hard fact that the number of such businesses has grown in recent years, reducing the distance of access to ultra-processed foods. This rapid spread of stores could not be explained without significant sales, and, in contrast to supermarkets, which are centralized meeting points, these stores' strategy consists of "getting nearer" to the customers and readily offering their products.
In conclusion, obesity in school-aged children is a growing global public health concern, and understanding the potential role of convenience stores in contributing to this problem is needed to support interventions and policies aimed at mitigating its impact. Spatial analysis provides valuable information for public health interventions and urban planning, helps tailor interventions to specific geographic areas, and informs policy to promote healthier environments for children to grow and develop in. Considering this, it is important to conduct more research on this topic in Mexico, using more recent datasets and more sophisticated methods, to identify the precise role and possible negative impacts of convenience stores on the health of the population. It is also necessary to promote guidelines that restrict the availability of junk food and sugar-sweetened beverages around schools.
Public policies aimed at effectively coping with this problem must include strategies for dealing with obesogenic environments. This implies limiting the number of convenience stores around schools and, at the same time, promoting the availability of healthy foods.
FIGURE 1. Mean and 95% CI of OW + Ob prevalence in schools in the Monterrey, Mexico Metropolitan Area by social exclusion index (SEI) quintile, 2015-2018.
FIGURE 2. (A) Mean number of convenience stores around the elementary schools within 400 m and 800 m radii, by OW + Ob quintile and year, Monterrey, Mexico Metropolitan Area, 2015-2018. (B) Extract of the actual data showing a school (S) surrounded by 14 and 46 convenience stores (red triangles) within radii of 400 m and 800 m, respectively.
FIGURE 3. Mean Euclidean distance (m) and 95% CI between schools and the nearest convenience store, by OW + Ob quintile, Monterrey, Mexico Metropolitan Area, 2015-2018.
FIGURE 4. Scatterplot, linear fit, and negative beta value (with p) of the distance to the nearest convenience store (trimmed to 6 km) vs. OW + Ob prevalence, Monterrey, Mexico Metropolitan Area, 2015-2018.
FIGURE 5. Getis-Ord Gi* hotspots and significance level of OW + Ob in elementary school children, compared with the kernel density of convenience stores, Monterrey Metropolitan Area, Mexico, in (A) 2015 and (B) 2018.
TABLE 1. General characteristics of the elementary schools, Monterrey Metropolitan Area, Mexico, 2015-2018.
TABLE 2. Mean and standard deviation of the number of convenience stores around the elementary schools within 400 m and 800 m radii, by OW + Ob quintile and year, Monterrey, Mexico Metropolitan Area, 2015-2018.
Sexually transmitted Human Papillomavirus type variations resulting in high grade cervical dysplasia in North-East North Dakota and North-West Minnesota
Background A review of Pap smear diagnoses from a reference laboratory in Grand Forks, North Dakota over a 3-year period (07/00 to 10/03) revealed a two-fold higher rate of high-grade squamous intraepithelial lesion in a community in northwest Minnesota (Roseau, 0.486%) than in northeast North Dakota (Grand Forks, 0.249%), in spite of both having similar rates of low-grade squamous intraepithelial lesion (1.33% vs. 1.30%, respectively). Objectives To identify the different types of HPV present in patient populations showing high-grade dysplasia in Grand Forks, ND and Roseau, MN. Study design Formaldehyde-fixed, paraffin-embedded cervical tissue samples were analyzed using polymerase chain reaction (PCR) to detect the presence of HPV types 16, 18 and 31. Results Our studies showed that 41% of samples from Roseau were triply infected with HPV types 16, 18 and 31, in comparison to 12% from Grand Forks. Conclusion Due to the small sample size, the findings did not reach statistical significance. However, our results suggest that the presence of HPV 16, 18 and 31 in triply infected samples may be the cause of the higher percentage of high-grade dysplasia in Roseau, MN when compared to Grand Forks, ND.
Background
Human papillomavirus (HPV), a member of the papovavirus family, is a small, circular, double-stranded DNA virus with a genome of approximately 8 kb. HPV causes the most common sexually transmitted disease (STD) in the U.S., with at least 5.5 million new infections each year and an actively infected population of approximately 20 million people [1]. There are more than 100 different genotypes of HPV, which cause a wide range of conditions including common warts, genital warts, recurrent respiratory papillomatosis, cervical dysplasia, and cervical cancer. Fifteen HPV types are classified as high-risk (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, and 82) and twelve as low-risk (6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and CP6108) [2]. HPV has been found in 99.7% of cervical carcinomas worldwide, with HPV 16 and 18 the predominant genotypes in these carcinomas [3]. The virus is postulated to gain entry into the body through microscopic abrasions of the surface epithelium, most often followed by integration of the viral genomes of the high-risk types into basal cells late in infection and subsequent transformation of the basal cells.
During an analysis of the severity of cervical dysplasia in patients attending clinics in Grand Forks, ND, and Roseau, MN, we observed that the rate of high-grade dysplasia was approximately twice as high in Roseau as in Grand Forks (0.486% vs. 0.249%, respectively; p < 0.004), in spite of similar rates of low-grade dysplasia (1.332% and 1.304%, respectively) in both areas. Grand Forks, ND, and Roseau, MN, are geographically related areas separated by approximately 100 miles. Since none of the typical risk factors (age of 18-28, pregnancy, smoking, high school diploma or less, use of oral contraceptive pills, or presence of a coexisting STD, including condylomata acuminata) correlated with the increased incidence of high-grade dysplasia, we hypothesized that the increased incidence might result from differences in the high-risk HPV types responsible for the infections. The aim of this study was to use polymerase chain reaction (PCR) to identify HPV types 16, 18, and 31 present in patient populations showing high-grade dysplasia in Grand Forks, ND, and Roseau, MN.
Study population
Archival paraffin-embedded, formalin-fixed cervical tissue samples from patients diagnosed with high-grade dysplasia were obtained from Altru Clinic, Roseau, MN, and Altru Clinic, Grand Forks, ND, over a three-year period (07/00-10/03). Grand Forks represented the control group, while Roseau, MN, represented the experimental group. Statistical significance was analyzed by chi-square test and confirmed by z-test using SigmaStat software.
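As an illustration of this statistical comparison, a chi-square test on a 2x2 contingency table could look like the sketch below. The Pap-smear denominators are invented placeholders chosen only to approximate the reported prevalences (0.249% and 0.486%); the actual counts are not given in this section.

```python
# Chi-square comparison of HSIL rates between the two communities;
# counts are hypothetical, chosen to match the reported prevalences.
import numpy as np
from scipy.stats import chi2_contingency

# rows: Grand Forks, Roseau; columns: HSIL, non-HSIL
table = np.array([
    [50, 20000],   # ~0.25% HSIL (placeholder counts)
    [97, 19900],   # ~0.49% HSIL (placeholder counts)
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```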
HPV type analyses
DNA from formaldehyde-fixed, paraffin-embedded tissues was extracted using the thermal cycler deparaffinization method as previously described [4], with minor modifications. Extracted DNA preparations were first subjected to PCR targeting a 155 base pair fragment (GP5+/GP6+) of the L1 open reading frame (ORF) of HPV [5]. The HPV types in the positive samples were characterized by PCRs specific for HPV types 16, 18, and 31 (primer sets: type-specific 16 [6], type-specific 18 [7], and type-specific 31 [8]). The final 30 μl PCR mixture contained 2.5 μl sample, 2.0 mM MgCl2, 3 μl of 10X PCR Gold Buffer, 200 μM deoxynucleoside triphosphates, 50 pmol of each primer (IDT Oligos), and 0.5 μl AmpliTaq Gold Polymerase (all reagents were purchased from Applied Biosystems, Foster City, CA). The amplification conditions were 1 min of denaturation at 95°C, 2 min of annealing at 40°C, and 1.5 min of extension at 72°C, for 40 cycles. HeLa and CaSki cells were used as positive controls for HPV 18 and HPV 16, respectively. The presence of an appropriately sized amplification product was monitored by gel electrophoresis and ethidium bromide staining.
Results and discussion
Of the thirty-four high-grade cervical dysplasia tissues analyzed by PCR with the general primers (GP5+/GP6+) targeting a 155 base pair fragment of the L1 open reading frame of HPV, twenty-eight tested positive. Samples from four normal patients, used as negative controls, did not show any evidence of HPV infection. Since the general primers have previously been reported to be less sensitive than specific primers in screening for certain high-risk HPV types [9], all of the samples were then amplified with primer sets specific to each of the three high-risk types (HPV 16, 18, and 31) known to be associated with cervical carcinoma [3]. Of the 17 cases studied from Grand Forks (the control group), 14 samples (82%) were positive with the general primers (Figure 1). Two samples (12%) were positive only for HPV type 16 and 3 samples (18%) only for HPV type 18; there were no single infections with HPV type 31. Three samples (18%) showed dual infections with HPV 16 and 18, 1 sample (6%) was doubly infected with HPV 16 and 31, and 4 samples (24%) were doubly infected with HPV 18 and 31. Two samples (12%) showed triple infections with HPV 16, 18, and 31. One sample tested negative with the general primers but positive with the HPV 18-specific primers.
Of the 17 cases studied from Roseau (the experimental group), 14 samples (82%) were positive with the general primers (Figure 1). No single infections with HPV 18 were detected. One sample (6%) had a single infection with HPV 16 and 2 samples (12%) with HPV 31. Two samples (12%) were doubly infected with HPV 16 and 18, 3 samples (18%) with HPV 16 and 31, and 1 sample (6%) with HPV 18 and 31. Triple infections with HPV 16, 18, and 31 were detected in 7 samples (41%). One sample tested negative with the general primers but positive with the HPV 31-specific primers.
We also analyzed four squamous cell carcinoma tissue samples from Roseau, MN, and all four tested positive with the general HPV primers. Further analysis for the specific HPV types showed that three of the four samples were triply infected with HPV types 16, 18, and 31; one sample contained a double infection with HPV 16 and 31.
Differences in the incidence of cervical high-grade dysplasia in two separate communities within the same geographic area were correlated with the presence of multiple HPV type infections and with differences in the HPV types infecting the dysplastic cells. Multiple infections with different HPV types have previously been reported to be associated with high-grade dysplasia [10,11]. Infection with HPV 16 is also known to cause high-grade squamous intraepithelial lesions that progress to malignancy [12]. However, in our study population, the presence of single or double infections with HPV 16 did not alone appear to contribute to the higher incidence rate of high-grade dysplasia. Our results suggest that the presence of HPV types 16 and 18 along with HPV 31 in the triply infected samples may be responsible for the higher rate of HSIL in the experimental population. Supporting this hypothesis, 3 of 4 cervical squamous cell carcinoma samples from area patients were also triply infected with HPV 16, 18, and 31, suggesting that multiple infections including HPV 16 might play a significant role in the progression of low-grade to high-grade dysplasia. However, due to the small sample size tested, these data preclude any claim of statistical significance. Although we analyzed all of the samples obtained from Roseau, MN, over a three-year period, we realize that this analysis was limited by the relatively small number of samples obtainable from this location. Nevertheless, it is quite interesting that major differences in the HPV types infecting cervical tissue can exist between distinct localities within the same geographic area. Similarly, since we only tested for HPV 16, 18, and 31, we do not know whether other viral types were present in single or multiple infections. Confirmation of the role of triple HPV infections in causing high-grade dysplasia will require further molecular studies in a larger at-risk population.
COVID -19 Infection Prevention and Control: Review of Country Experiences
The novel coronavirus disease (COVID-19), caused by the new SARS-CoV-2 virus that emerged in December 2019, is one of the most severe public health emergencies facing health systems around the globe. It reached the level of a pandemic, causing great disruption and widely variable responses in nearly all countries. This review aims to highlight some of these country experiences in order to learn from others and to improve the management strategy in our country. A PubMed search was performed to extract country experiences from papers published up to the 30th of April, 2020. As the onset, course, and severity of the pandemic differed from one country to another, scientists from every country are doing their best to publish their experiences in disaster management to help other countries avoid mistakes in the management of such a difficult situation. Problems facing different communities were summarized.
OVERVIEW

Increased international travel and commerce have led to a rapidly evolving global infectious disease epidemiology. The incubation period of many infections is longer than the time required to travel between countries worldwide: for example, seasonal influenza (1-3 days) and rotavirus infection (1-3 days) have short incubation periods, while a person infected with SARS-CoV-2 needs an incubation period of 2-14 days to express symptoms and could travel from his country and return within only 2 days. (1) From an infectious disease point of view, globalization has led to a borderless world that makes international cooperation and coordination necessary to control infections.
Nowadays, we are dealing with the COVID-19 pandemic, a source of huge concern and a dangerous global public health threat. (2) Wuhan, the capital city of Hubei province and a major transportation center of China, started presenting to local hospitals many cases of adults suffering from severe pneumonia of unknown etiology in December 2019. Most of the cases had a history of visiting a wet market selling seafood and live animals. (3) The pathogenic agent responsible for these clusters of patients was identified as the 2019 novel coronavirus (2019-nCoV); person-to-person transmission has been confirmed, and, in addition, an asymptomatic person was confirmed to be a source of transmission. (4) On the 30th of January, 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a public health emergency of international concern posing a high risk to countries with vulnerable health systems. (5) SARS-CoV-2 spread rapidly from its origin in Wuhan to all around the world. Up to the 5th of March, 2020, about 96,000 COVID-19 cases and 3,300 deaths had been reported. (3) On March 11, 2020, the WHO declared the COVID-19 outbreak a global pandemic (6). On April 14, 2020, the total number of globally reported confirmed cases was 1,844,863, despite suspicion that the true numbers were much higher (7), while on April 30, 2020, the global number of confirmed cases was estimated at 3,090,445. (8) In this report, we collected previously published scientific literature related to measures taken in different countries, to learn from their experiences, to enable rapid response to similar situations, and to ensure full preparedness of our healthcare system to combat such a pandemic.
APPROACH
One search engine, namely PubMed, was used. A Medline search was performed using the keywords COVID-19, outbreak, hospital associated, infection prevention and control, and challenges. Filters applied were English language, full-text articles, and articles published up to the 30th of April, 2020. The search returned 30 research articles; 13 were excluded because of irrelevance. The remaining 17 articles were reviewed and summarized, with exclusion of repeated issues. Special focus was placed on the problems that healthcare systems around the world faced and the solutions those systems developed and applied, in order to learn from the experience of other countries with such a pandemic.
CHALLENGES FACING HEALTH SYSTEMS
Sohrabi et al. reported that on 31 December 2019, 27 cases of pneumonia of unknown etiology were identified in Wuhan City, Hubei province, China. (5) Wuhan has a population exceeding 11 million. Most patients complained of dry cough, dyspnea, and fever, with bilateral lung infiltration on imaging. All cases had a history of visiting a seafood market in Wuhan where fish and other live animals, such as poultry, bats, and snakes, were sold. Lu et al. reported that the causative organism was isolated from throat swabs by the Chinese Center for Disease Control and Prevention (Chinese CDC) on the 7th of January, 2020; the virus was named SARS-CoV-2 and the disease was named COVID-19 by the WHO. (9,10) Moreover, Huh et al. stated that the continuous transmission of SARS-CoV-2 in Wuhan resulted in a huge number of patients both with and without COVID-19, and adequate care had to be provided for both patient categories. A great problem facing patients in Hubei province was their inability to reach medical care services. The huge number of cases decreased the output of healthcare services, because the health system lacked the capacity to provide isolation and testing; it was therefore necessary to develop an efficient plan for testing and referring patients in cooperation with the public health authorities of Wuhan's healthcare system. (11) On the 3rd of February, 2020, there were 6,384 confirmed cases in Wuhan, in addition to a severe shortage of medical resources, as reported by Pan et al., 2020. (12) Another critical problem facing the health system in Liaoning, Zhejiang, Shandong, and other provinces in China, as reported by Zhang et al., was the asymptomatic carriers who were suspected and detected; estimates suggested that they could represent about 60% of all COVID-19 infections. (13) At the same time, Zou et al. showed that asymptomatic carriers may be highly infective during the incubation period, because the viral load detected in asymptomatic carriers was nearly the same as that in symptomatic patients. (14) Consequently, scientists at the Ningbo CDC in East China's Zhejiang Province found that 6.3% of the close contacts of confirmed COVID-19 patients were ultimately infected with the virus, while 4.4% of the close contacts of asymptomatic carriers were ultimately infected, as mentioned by Zhang J. et al., 2020. (13) Zhang Z. et al. reported 2,055 laboratory-confirmed infections among healthcare professionals (HCP) from 476 hospitals across China as of the 20th of February, 2020. Most of the HCP cases (88%) were reported from Hubei, yet no super-spreader was identified among the HCP infections. The high number of infected HCP was very significant, confirming the severity of the epidemic, the scarcity of information related to the new virus during the early period of the outbreak, and the need for improvement in the medical system. (15) The Centers for Disease Control and Prevention (CDC) issued a Morbidity and Mortality Weekly Report (MMWR) on the 10th of April, 2020, which confirmed that community-acquired COVID-19 infection was associated with high morbidity and mortality rates among older patients, in addition to its rapid spread in long-term skilled nursing facilities. (16) Another problem was highlighted by Ağalar and Engin (2020), who discussed a study of 138 patients in China demonstrating that 57 patients (41.3%) had been infected within hospitals.
Among these patients, 17 (12.3%) were admitted for medical reasons other than COVID-19, and 40 (29%) were HCP. Among the infected HCP, 31 (77.5%) were providing clinical services, 7 (17.5%) were working in the emergency unit, and 2 (5%) were working in the Intensive Care Unit (ICU). In addition, laboratory workers were exposed to the risk of COVID-19 infection during the analysis of patients' samples. (17) Moreover, two studies, one by Ağalar and Engin and the other by Meng et al., observed that some dental patients suffer from coughing or sneezing, or undergo dental procedures involving a high-speed handpiece and ultrasonic instruments that aerosolize the patients' secretions, saliva, or blood into the surroundings, exposing healthcare workers in dental settings to the risk of acquiring infection. Dental personnel also handle apparatuses that might be contaminated with many pathogens during use or through exposure to a contaminated clinic environment. Furthermore, infection can be transmitted to dental personnel by puncture with sharp objects or by direct contact between mucous membranes and contaminated hands, in addition to the fact that dental procedures are usually associated with droplet and aerosol generation. Hence, the infection control measures of daily work might not be sufficient to prevent COVID-19 spread, especially from asymptomatic carriers. (17,18) Similarly, Lai et al. stated that the close contact of ophthalmologists with their patients while performing direct ophthalmoscopy or slit lamp examination could lead to transmission of infection to ophthalmologists. Furthermore, conjunctivitis may be misleading to ophthalmologists, as it may sometimes be the first presenting sign of COVID-19 in an apparently asymptomatic patient. (19) On the other hand, Lu et al. drew attention to the fact that nose and throat examinations pose a high risk to healthcare personnel in ENT settings, so additional protective measures are required to protect ENT staff members. (20) In Wuhan, 14 healthcare workers (HCWs) were infected by one super-spreader with an atypical presentation, resulting in the death of one physician. This led to a shortage of HCWs and initiated a cycle of substandard infection control procedures that caused hospital-acquired transmission and further increased disease transmission within the community. (21) Screening of 24 orthopedic surgeons in Wuhan, in a study by Guo et al., revealed 21 confirmed COVID-19 cases; of these, 3 were clinically diagnosed cases presenting with fever and respiratory problems, chest CT scans with ground-glass opacity and consolidation, leucopenia and/or lymphopenia, and negative influenza virus tests, in addition to a history of exposure to COVID-19. (22) Sohrabi et al. reported that the Chinese health authorities in Wuhan applied strict adherence to standard infection control measures (e.g., frequent proper hand hygiene, use of personal protective equipment [PPE], etc.), as highly recommended by the WHO and the Centers for Disease Control and Prevention (CDC). (5) In Japan, a company launched an artificial-intelligence-powered app that provides updated information concerning the COVID-19 outbreak and its preventive measures; a symptom checker, as recommended by various bodies including the WHO and the CDC, was added.
It advised preventing the spread of COVID-19 by avoiding travel to high-risk places, contact with symptomatic individuals, and consumption of meat from regions with confirmed COVID-19 cases. (5) Additionally, Huh et al. (2020) stated that important rules were applied alongside standard infection control measures, including early management to lower transmission rates, decrease outbreak risks, and enhance clinical outcomes. Moreover, they highlighted the conclusion of certain countries, such as Singapore, Japan, and Korea, that a history of travel and/or contact with a confirmed case had become inadequate as a case definition and that systematic surveillance of acute respiratory illness was needed. This was also proposed by the CDC, which announced that the USA would consider adding SARS-CoV-2 to its influenza-like illness surveillance system and that pathways were being used to screen, test, and isolate patients suffering from acute respiratory infections in different hospitals. (11) The researchers also proposed several recommendations. First, laboratories should be prepared for surge capacity, and rapid point-of-care testing should be developed to mitigate the laboratory workload and enhance the diagnosis rate. Second, more beds and instruments (e.g., ventilators) should be provided in preparation for surge capacity, and patients who could be cared for at home should be discharged from long-term care facilities. Finally, resources for the management of COVID-19 cases should be checked and stocked, patients who must visit hospitals should be carefully protected, and mildly ill patients should be advised to stay home and to seek medical advice if their symptoms persist or worsen. (11) Pan et al. (2020) mentioned that the government in China used stadiums, exhibition halls, and other venues to prepare a number of "square cabin hospitals" for non-critical patients, so that resources at Huoshenshan hospital, Leishenshan hospital, and other sites could be secured for critically ill patients. (12) Also, Zhang et al. reported that China's measures for managing asymptomatic carriers included 14 days of centralized quarantine and observation, with release from quarantine after two consecutive negative PCR tests (separated by 24 hours). Unless an asymptomatic carrier develops clinical manifestations while in quarantine, he is not counted among confirmed cases; in addition, testing and follow-up were expanded to asymptomatic carriers, including people in close contact with confirmed COVID-19 cases and asymptomatic carriers. (13) It was also reported that many measures were taken to contain the COVID-19 outbreak in China, such as improving advice on the appropriate use of PPE, preparing logistics and medical supplies, enhancing disinfection at the hotels where HCP stayed, and applying a contingency surveillance system to follow all exposed HCP, essential for the detection and management of infected HCP. (15) Furthermore, a special medical expert group was established to make the needed efforts for the diagnosis and treatment of suspected and confirmed cases among healthcare providers. Other efforts during the outbreak were presented by Meng et al. (2020); among these was the addition of COVID-19 in January 2020 by the Chinese National Health Commission to group B of infectious diseases, which also includes SARS and highly pathogenic avian flu.
They also recommended that all healthcare providers follow the protective measures indicated for group A infectious diseases (a group of highly infectious diseases, e.g., cholera and plague), permitted only dental emergency cases to be managed, with strict adherence to infection prevention and control measures, and suspended routine clinical work until further notice. Furthermore, quality control centers concerned with the dental profession issued recommendations for dental services during the COVID-19 outbreak in order to ensure the quality of infection control. Moreover, all healthcare providers were asked to seek medical advice and stop working in case of fever, coughing, sneezing, and/or other COVID-19-related symptoms, or after close contact with a confirmed case in the family. Dentists were advised to use saliva ejectors that reduce droplet and aerosol production and to avoid performing dental procedures that could induce coughing, as per WHO recommendations. (18) Ağalar and Engin concluded that it is important for HCP to be fully equipped with PPE and ready to receive patients, and that COVID-19-suspected patients should be safely and rapidly isolated. At the same time, hospital entrances, patient rooms, and waiting areas should be provided with supplies for hand disinfection containing 60-95% alcohol and the necessary waste containers. Further, triage personnel should be separated from possibly infectious patients using physical barriers made of glass or plastic to restrict close contact, and a distance of 2 meters should be kept between HCP and patients inside clinics and examination rooms. Concerning laboratory workers, there must be proper training in handling biological agents and self-protection against their hazards, together with use of appropriate PPE and avoidance of aerosol-generating procedures. At the same time, risk assessment should be carried out periodically in all hospital laboratories, following the WHO laboratory biosafety guidance related to COVID-19. (17) Lai et al. (2020) stated that the USA issued an alert advising ophthalmologists to wear masks and eye protection when examining conjunctivitis patients with respiratory symptoms and a history of international travel. This was recommended by the American Academy of Ophthalmology after a risk assessment of the infection control precautions followed by ophthalmologists, based on a three-level hierarchy of control measures: administrative controls, environmental controls, and use of PPE. (19) Regarding healthcare workers in the ENT field, Lu et al., 2020, reported the implementation of additional protective measures: reducing exposure to aerosols while performing flexible laryngoscopy, replacing local anesthetic spray with gel anesthesia, using the smallest possible laryngoscope diameter, and ensuring adequate surface anesthesia to reduce the sneezing reflex during nasal endoscopy. The recommendations also included isolating suspected COVID-19 patients in negative pressure rooms after surgery and screening them for COVID-19, returning patients with negative results to the ENT department, replacing open-type suction with closed suction for tracheotomy patients, and replacing aerosol inhalation procedures with in-tube infusion or spray humidification to humidify the trachea. (20)
(20) As a result of these regulations, there were 22 confirmed COVID-19 cases out of a total of 4148 fever cases that visited this hospital (since the 20th of February, 2020).

On the other hand, Schwartz et al. (2020) recommended implementing a Traffic Control Bundling (TCB) to reduce infection rates among HCWs in Taiwan. The bundle starts with outdoor triage, where positive COVID-19 patients are directed to the isolation ward in private isolation rooms, while query patients (those who suffer atypical symptoms or whose tests are inconclusive) are placed in a quarantine ward (intermediate zone) for 14 days. Both isolation- and quarantine-directed patients were transferred through a designated route that avoids contact with the clean zone. Healthcare workers should strictly follow all infection control measures while moving between different patient areas, in addition to ensuring daily cleaning and disinfection of environmental surfaces in the clean and intermediate patient zones, while limiting cleaning and disinfection of the hot zone to cases of visible contamination with body fluids. (21)

Guo et al. (2020) reported that, during the COVID-19 outbreak, N95 respirators had a protective effect for orthopedic surgeons compared with medical masks, so surgeons should be aware and vigilant about wearing N95 respirators as a protective measure. (22)
CONCLUSION AND RECOMMENDATIONS
The current COVID-19 pandemic highlighted the importance of a rapid international response in the fields of disease diagnosis, virus isolation, financial support, and temporary hospital construction to deal with the increasing number of cases. Strict adherence to infection control measures is crucial for self-protection. Supporting basic health care is an important pillar in decreasing transmission risk. Continuous online learning and education of all healthcare providers concerning infection control measures and methods of protecting themselves should also be provided. It is necessary to integrate scientific research resources, increase research investment, enhance cooperation between scientists internationally, and apply scientific research results to enhance the ability to prevent the spread of the pandemic.
The NCBI BioCollections Database
Abstract The rapidly growing set of GenBank submissions includes sequences that are derived from vouchered specimens. These are associated with culture collections, museums, herbaria and other natural history collections, both living and preserved. Correct identification of the specimens studied, along with a method to associate the sample with its institution, is critical to the outcome of related studies and analyses. The National Center for Biotechnology Information BioCollections Database was established to allow the association of specimen vouchers and related sequence records to their home institutions. This process also allows cross-linking from the home institution for quick identification of all records originating from each collection. Database URL: https://www.ncbi.nlm.nih.gov/biocollections
Introduction
The BioCollections Database is a curated dataset of metadata for culture collections, museums, herbaria and other natural history collections connected to sequence records in GenBank. It is maintained and curated by the Taxonomy group at the National Center for Biotechnology Information (NCBI). Biocollection institution codes are unique across multiple types of collections, and the database is used to support the 'structured voucher' annotation in the sequence entries submitted to the International Nucleotide Sequence Database Collaboration (INSDC) (1). This broadly follows the Darwin Core (DwC) standard for biodiversity data (2) and is used to standardize usage across interconnected databases including GenBank at the NCBI (3), as well as the European Nucleotide Archive (ENA) (4) and the DNA Databank of Japan (DDBJ) (5).
Initially, the data were imported from Index Herbariorum (6), the World Federation for Culture Collections (http://www.wfcc.info/), Insect and Spider Collections of the World (http://hbs.bishopmuseum.org/codens/codens-r-us.html), Amphibian Species of the World (AMNH) (7) and the Catalog of Fishes (8). Only the institution codes that are listed in the BioCollections Database appear as 'structured vouchers' in GenBank records. New repository records are added to the database as they are submitted to INSDC along with sequence data. Since the BioCollections Database is maintained at NCBI, the validation process is fast. Prior to inclusion in BioCollections, new collections are validated to ensure that they are curated, that they are readily available to the public and that there is a contact person responsible for the collections. If a home institution has a catalogue page and provides us with a URL formula, the vouchers in the sequence entries are hotlinked to specimen pages at the relevant collection (Figure 1). Personal collections are not normally included. Other directories of repositories are periodically reviewed to ensure that the NCBI BioCollections Database is up to date.
As the importance of specimen vouchers in biodiversity studies continues to grow, it is increasingly important to organize and annotate the data to allow users to easily access this information and confirm which collection houses the original sample. This newly released public resource is the source for building links between NCBI databases and external collections.
BioCollections Database overview
In 2005, the Consortium for the Barcode of Life (CBOL; http://www.barcodeoflife.org) proposed linking sequence records to voucher specimens as part of the DNA Barcode data standard. This method was developed in collaboration with the Global Biodiversity Information Facility (http://www.gbif.org/) and other major biodiversity database initiatives. The NCBI BioCollections Database was created as a part of this global project to gather, update, manage and search biological collections information. In mid-2008, members of INSDC started annotating sequence entries that contained culture collection or specimen voucher information with structured voucher qualifiers.
The initial method proposed for linkage by CBOL used a structured data format based on the DwC data standards developed by Biodiversity Information Standards (TDWG, formerly the Taxonomic Database Working Group). The DwC standard triplet format for specimen data consists of three parts: the universally recognized code for the institution that holds the voucher specimen; the institution's code for the collection in which the voucher specimen is kept; and the unique specimen identifier, all separated by colons.
For example:

/organism='Spizella atrogularis' /specimen_voucher='MVZ: Bird: 170231'

In many cases, a secondary collection code (such as a collection devoted to mammals or plants at a specific institution) is not utilized, and in such cases the specimen data is indicated as a doublet only.

Structured voucher annotation: there are three different types of qualifiers for annotating sequences from different source materials:

1. /culture_collection for live microbial and viral cultures and cell lines deposited in curated culture collections.
2. /specimen_voucher for a physical specimen in a curated museum, herbarium, frozen tissue collection or laboratory (accessible to the public). If the specimen was destroyed in the process of sequencing, electronic images (e-vouchers) are an adequate substitute for a specimen voucher.
3. /bio_material for source material in biological collections that do not fit into either the /specimen_voucher or the /culture_collection qualifier categories, like physical specimens from zoos, aquaria, stock centers, germplasm repositories and DNA banks.
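Because the triplet/doublet convention is purely positional, it is straightforward to handle programmatically. The following minimal Python sketch (an illustration, not an official NCBI tool) splits a structured voucher into its institution, optional collection and specimen parts:

```python
from typing import NamedTuple, Optional

class Voucher(NamedTuple):
    institution: str
    collection: Optional[str]  # None when the doublet form is used
    specimen_id: str

def parse_voucher(value: str) -> Voucher:
    """Split a DwC-style voucher string into its parts.

    Accepts both triplets ('MVZ:Bird:170231') and doublets ('UAM:170231');
    whitespace around the colons is tolerated.
    """
    parts = [p.strip() for p in value.split(":")]
    if len(parts) == 3:
        return Voucher(parts[0], parts[1], parts[2])
    if len(parts) == 2:
        return Voucher(parts[0], None, parts[1])
    raise ValueError(f"not a DwC doublet/triplet: {value!r}")

print(parse_voucher("MVZ: Bird: 170231"))  # Voucher('MVZ', 'Bird', '170231')
print(parse_voucher("UAM:170231"))         # Voucher('UAM', None, '170231')
```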
Another set of qualifiers may also contain information relevant to BioCollections. Submitters commonly use these fields to add voucher information, but they are not 'structured'; hence, they do not get linked to the BioCollections Database.
1. /isolate is recommended to identify specific individuals or samples from which the sequence data was originally obtained; this can include field numbers and a broad set of unique identifiers that will not be classified under strain or culture collection.
2. /strain is recommended for cultures in a personal collection or laboratory.
3. /note for any comment or additional information about the organism.
Until recently, the BioCollections Database was only used internally by the members of INSDC, mainly to facilitate sequence annotation, although a public text-based data file was (and remains) available (ftp://ftp.ncbi.nih.gov/pub/taxonomy/Cowner_dump.txt). Over the years, the database has grown significantly. Each record now provides information about the institution that houses the collection, the standard institution code, the mailing address and the associated webpage if available. If there are collections within an institution, they are listed within the institution record as collection codes. As of October 2017, there are over 7400 institution codes and 300 collection codes listed in the BioCollections Database. Recognizing that this information can be useful to a broader scientific community, NCBI released this resource to the public in April of 2017.
Search and retrieve data
Various search queries can be used to search the BioCollections Database using the search box on the BioCollections homepage. For example, searching with MVZ will bring up Museum of Vertebrate Zoology, University of California at Berkeley and its collections ( Figure 2). Some useful search fields are listed in Table 1.
The BioCollections Database is reciprocally linked to other databases like Nucleotide, Protein, Popset, EST and GSS. This allows users to find all related records that are from an institution of interest.
Users can download the BioCollections dataset using the 'Send To' -> 'File' option located at the upper right corner of the search results page. 'Summary' downloads a text file, 'CSV' downloads comma-separated values, and 'XML' downloads an XML file; in each case the download is based on the records selected with the checkboxes on the page. The data can also be downloaded as a pipe-delimited text file from the NCBI ftp site (ftp://ftp.ncbi.nih.gov/pub/taxonomy/biocollections/).
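The database can also be queried programmatically through the NCBI E-utilities. The sketch below uses Biopython and assumes that the Entrez name of this resource is 'biocollections' (and that you substitute your own e-mail address, as NCBI requires):

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # required by NCBI; replace with your own address

# Find museum entries mentioning Alaska (cf. the search fields in Table 1).
handle = Entrez.esearch(db="biocollections", term="Alaska AND museum[prop]")
result = Entrez.read(handle)
handle.close()

print(result["Count"])   # number of matching institution/collection records
print(result["IdList"])  # UIDs that can be passed to esummary/efetch
```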
Duplicated or ambiguous collections and codes
For various reasons, some institutions use more than one institution code. For example, University of Maryland uses MARY for its herbarium collection and UMDC for its museum collection. These are listed as separate records. If an institution changes the code for its collection or institution and adopts a new one, the old code is retained in the database as a synonym. Similarly, when there are several institution codes for the same collection, they are listed as synonyms.
When more than one institution uses the same code for their specimens, the International Organization for Standardization three-letter country code is used to make the codes unique. If the institutions are from the same country, a state code is added in addition to the country code. The institution code that is already in the database is retained (without the country code) and the subsequent ones are registered with country codes (and state codes where applicable).
For example, several institutions use UAM as their institution code. To distinguish between the collections, the later registrants are listed with country (and state) codes appended. Since the University of Alaska, Museum of the North (UAM) was the first to be registered in the BioCollections Database, UAM is retained for the University of Alaska, and the subsequent UAM codes are registered with country and state codes. When a record is submitted to GenBank with an ambiguous code (e.g., UAM), it prompts a consult so that a curator can confirm that the correct institution is listed.
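A minimal sketch of this disambiguation rule is shown below; the suffix notation used here (e.g. 'UAM<USA-AK>') is purely illustrative and not NCBI's exact formatting:

```python
def disambiguate(code: str, taken: set, country: str, state: str | None = None) -> str:
    """Return a unique institution code; the first registrant keeps the bare code.

    Later registrants get the ISO three-letter country code appended, plus a
    state code when institutions from the same country collide as well.
    """
    if code not in taken:
        return code
    candidate = f"{code}<{country}>"
    if candidate in taken and state:
        candidate = f"{code}<{country}-{state}>"
    return candidate

taken = {"UAM"}  # University of Alaska, Museum of the North registered first
print(disambiguate("UAM", taken, "MEX"))        # -> UAM<MEX>
taken.add("UAM<MEX>")
print(disambiguate("UAM", taken, "USA", "AK"))  # -> UAM<USA>
```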
Challenges of DwC Triplet
DwC Triplet creates an identifier for voucher specimens in the form <institution_code>:<OPTIONAL collection_code>:<specimen_id>. The problems with DwC Triplets as identifiers have been discussed before (9). There are many institutions that share the same institution code. We resolve this ambiguity by adding three-letter country codes to the duplicated institution codes. This works well for our internal system, i.e. to link BioCollections with GenBank records, but may not find exact matches across other repositories. Adding to the problem, DwC Triplets are not formatted consistently, and different collection codes can be used for a single institution. For example, we use UWBM:ORN: for the University of Washington, Burke Museum Ornithology Collection, whereas the VertNet Database (http://vertnet.org/) uses UWBM:BIRD: for the same collection. Furthermore, submitters are asked to fill in the voucher information when submitting sequences to GenBank, but many do not provide that information; thus, many voucher specimens are fielded as /strain or /isolate in GenBank records and cannot be linked to BioCollections. We have over 600 000 ATCC records that are formatted correctly as /culture_collection and are linked to BioCollections, but there are about 76 000 ATCC records that are not formatted correctly and appear as /strain or /isolate in GenBank records. We are working on improving processes to correct the legacy records for which the culture collection acronyms are not 'structured.' Additionally, GenBank has recently started to automatically structure selected culture collection codes in new entries submitted as /strain or /isolate if they are from DSM, CBS, JCM, ATCC, LMG, NBRC, CCUG and KCTC. We selected these culture collection codes based on the number of type strains we have in the taxonomy database (Table 2). Going forward, we will expand this list to other institution codes as well. We would also like to encourage submitters to provide specimen vouchers in a structured format so that they can be correctly linked to BioCollections, and to email updated information to gb-admin@ncbi.nlm.nih.gov.
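As a rough illustration of this kind of automatic structuring (not GenBank's actual production pipeline), a pattern match over legacy /strain values might look like this:

```python
import re

# Culture collections whose codes are now structured automatically (see text).
KNOWN_COLLECTIONS = ("DSM", "CBS", "JCM", "ATCC", "LMG", "NBRC", "CCUG", "KCTC")
PATTERN = re.compile(rf"\b({'|'.join(KNOWN_COLLECTIONS)})[ :]?(\d+)\b")

def structure_strain(strain_value: str):
    """Rewrite a legacy /strain value as a structured /culture_collection, if possible."""
    match = PATTERN.search(strain_value)
    if match:
        return f"/culture_collection={match.group(1)}:{match.group(2)}"
    return None  # leave the record for manual curation

print(structure_strain("ATCC 25922"))   # /culture_collection=ATCC:25922
print(structure_strain("lab isolate"))  # None
```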
Often, institutions change their codes or are merged with other institutions. Linking mechanisms that depend on metadata like institution codes are prone to break as the metadata change. The biodiversity community has long recognized the need for globally unique identifiers (GUIDs) to share, link and track biocollections data (specimen records, images, taxonomic names and DNA sequences) that are scattered all around the world. Several different technologies, such as Life Science Identifiers, Digital Object Identifiers and HyperText Transfer Protocol (HTTP) Uniform Resource Identifier-based identifiers, have been discussed for this purpose. More recently, the use of GUIDs to provide stable identifiers for biocollections has gained traction (10-12). We will consider using these options as they become universally adopted in the future.
External resources
Resources outside of NCBI are constantly reviewed to keep the NCBI BioCollections Database up to date. In the past, we have exchanged data with the Global Registry of Biodiversity, an online metadata resource that provides information on biodiversity collections (13). Recently, we imported about 300 institution codes from Index Herbariorum (6) and about 50 culture collection codes from the World Federation for Culture Collections. Integrated Digitized Biocollections (iDigBio) is another resource that provides data and images for millions of biological specimens in electronic format (14), and ways to link iDigBio specimen records to GenBank sequences associated with those specimens should be further explored. In 2011, the Global Genome Biodiversity Network (GGBN) was created as a part of the Global Genome Initiative to bridge the gap between biodiversity repositories, sequence databases and research (15). Through its Data Portal, GGBN aims to make biodiversity samples readily discoverable and accessible to the research community. We will continue to explore the possibilities of crosslinking and updating data in accordance with all these external resources.

Table 1. Useful search fields:
Alaska - retrieves all entries that have Alaska in the institution/collection name
museum [prop] - retrieves all museum entries
herbarium [prop] - retrieves all herbarium entries
culture collection [prop] - retrieves all culture collection entries
Complex queries can be built by specifying the search terms, their fields and the Boolean operations AND, OR and NOT.
We are in the process of cleaning and updating the information in the BioCollections Database and have already updated >200 records by contacting resource managers and asking them to verify and correct their relevant information. We are also requesting that institutions provide us with a URL rule for their catalogue pages so that we can cross-link the data. At present, NCBI offers the ability for credible third-party resources to link out directly from either sequence records or via taxonomic names in the Taxonomy Browser. LinkOut aims to facilitate another way to access relevant online resources and supplement information found in NCBI databases (16). Links could be expanded at these individual pages by collaborating with more biorepositories.
Additional uses of the BioCollections Database and future steps
Ideally, taxonomic vouchers should be expertly identified samples deposited and stored in a facility that is accessible to researchers for further study, and they thus serve an important role in biological research (17). For a prokaryote name to be validly published, its type strain must be deposited in two recognized culture collections, a rule set by the International Committee on Systematics of Prokaryotes under the International Code of Nomenclature of Prokaryotes (18). Type strains in culture collections are the points of reference with which other strains must be compared when determining their taxonomic identity. Similarly, the International Code of Nomenclature for algae, fungi and plants (19) and the International Code of Zoological Nomenclature (http://iczn.org/code) also require the designation of a type specimen, albeit with slightly different rules. The designation of vouchers is an important part of establishing provenance in systematic research and allows for critical assessment. With the increasing use of molecular sequence analysis in systematics, it is important to establish a mechanism to connect these two sets of data. Besides taxonomic identification, associated metadata can provide important information on geographic dispersal, and DNA can potentially be obtained for further research.
There are >1 600 000 species-level taxonomy ids in the NCBI Taxonomy database, and they are identified with varying degrees of certainty, with almost 400 000 identified with a binomial name. Type specimens have an important role in this regard, by providing a clear reference for comparison. We currently have just over 36 000 names with type material annotations. The complete list of type material annotations will be released as part of the taxonomy ftp files. Since 2013, GenBank has curated type material in the Taxonomy Database and used it to flag sequences from types in the sequence records. This has led to an improvement in the annotation of sequence records. Recently, GenBank developed a protocol to identify and correct misidentified prokaryotic genomes, using Average Nucleotide Identity genome neighboring statistics in conjunction with reference genomes from type material (20). In addition to this, GenBank, together with its collaborative partners in the INSDC, has accepted the addition of a new 'type material' qualifier for sequence records, which will enable specific sequence records to be annotated automatically with information from the NCBI Taxonomy database (1). The BioCollections Database can be used as a resource to facilitate the identification of the home institutions providing this important set of records and to track these specimens. Furthermore, BioCollections can add value to other NCBI databases. In 2011, NCBI developed the BioProject and BioSample databases to organize and integrate data across interdisciplinary resources and allow users to query across many NCBI databases to retrieve data relevant to their interest (21). BioSample can potentially include blood samples, cell cultures, individual organisms, etc., that may come from culture collections, museums, herbaria or other repositories. Expanded links between the BioCollections and BioSample databases will help make these databases more comprehensive.
The individual biorepository pages in BioCollections can serve as a starting site for users specifically interested in the breakdown of sequenced vouchers at a specific institution. For example, the Smithsonian Institution, National Museum of Natural History shares specimens and DNA samples with collaborators worldwide. As a result, DNA sequence data are submitted to GenBank, ENA and DDBJ by a large number of submitters, are often not formatted correctly, and therefore are not linked to the BioCollections Database. The USNM (National Museum, >29 000 total records) and US (National Herbarium, >16 000 total records) notations represent a large number of sequence records and are part of an important collaborative effort. The 'USNM' and 'US' strings were used to search the entire GenBank database and then manually checked to ensure that they referred to specimens as expected. This information was reported to the Smithsonian, where the records were added to the databases of the appropriate departments. Depending on the choice of the individual institution, this can facilitate the linking of specimens to their sequence records. One option would be to provide LinkOuts to specific sample pages directly from sequence records.
We hope to expand the utility of the BioCollections Database in a similar fashion for other biocollections in the future. In the meantime, this focused resource will continue to provide important institutional context to the large number of sequence records in the public sequence databases.
Triangle / Square Waveform Generator Using Area Efficient Hysteresis Comparator
A function generator producing both square and triangle waveforms is proposed. The generator employs only one low-area comparator with accurate hysteresis set by a bias current and a resistor. The oscillation frequency and its non-idealities are analyzed. The function of the proposed circuit is demonstrated on a design of a 1 MHz oscillator in STMicroelectronics 180 nm BCD technology. The designed circuit is thoroughly simulated, including trimming evaluation. It consumes 4.1 µA at 1.8 V and takes 0.0126 mm² of silicon area. The temperature variation from −40 °C to 125 °C is ±1.5% and the temperature coefficient is 127 ppm/°C.
Introduction
Relaxation oscillator circuits are present in almost every electronic application, such as microcontrollers, DC-DC converters or RFID chips. With the increasing demand for small-form surface-mount packages, the silicon area of every block can have an impact on whether the final design fits into the package or not. Recently, several architectures with low silicon area have been proposed. In [1] a capacitor is linearly charged and the threshold is compared with a current-mode comparator. In [2] the voltage on an exponentially charged capacitor is compared with a hysteresis comparator. However, for some applications, such as DC-DC converters [3], a triangular waveform may be required.
The conventional approach to the generation of such a waveform is depicted in Fig. 1. A capacitor is charged with a constant current of alternating polarity generated by a current source (CS) between two voltage levels V_lo and V_hi. This solution requires two comparators. Instead of two comparators, other solutions may employ a single comparator with hysteresis. In [4] a CMOS Schmitt trigger is used. The main drawback of such a circuit is that the hysteresis and the frequency are sensitive to PVT variations.
To address this issue, the solution in [5] uses a comparator with a hysteresis set by an external resistor network. In [6] the hysteresis is set with a resistor and the saturation current of an OTA. In [7] two OTAs are used to form a Schmitt trigger and another OTA is used as an integrator. These solutions require either a comparator or an OTA with a differential input stage.
Another class of generators is based on the so-called modern functional blocks. In [8] two second-generation current conveyors (CCII) are used for square/triangular generation. A current-mode generator is presented in [9] using two multiple-output current-controlled current differencing transconductance amplifiers (MO-CCCDTA). Another voltage-mode solution uses two differential voltage current conveyors (DVCC) in [10]. A differential output generation was presented in [11] using dual-output and fully balanced voltage differencing buffered amplifiers (DO-VDBA and FB-VDBA, respectively). Solutions in [12] and [13] employ a single Z-copy controlled-gain voltage differencing current conveyor (ZC-CG-VDCC) for a voltage/current output functional generator. All these designs require complex functional blocks that take a lot of silicon area. An overview of the mentioned architectures can be seen in Tab. 1. In this article a new triangular relaxation oscillator is proposed. This circuit requires only one single-ended comparator and therefore saves both silicon area and power consumption. Section 2 describes the operation of the proposed circuit and analyses its oscillation frequency, the design of a 1 MHz relaxation oscillator can be found in Sec. 3, its simulation results are presented in Sec. 4, and the conclusion follows in Sec. 5.
Circuit Analysis
The schematic of the proposed waveform generator is in Fig. 2a. Transistors M1 and M4 work as current sources, with M2-M3 as switches controlling charging and discharging of the capacitor C. For a symmetrical waveform both currents must be equal. The hysteresis comparator is composed of transistors M5-M9. M6 together with R works as a V → I converter whose output current is compared to I_M5 produced by M5. M8 and M9 then form the second stage of the comparator, whose output is further amplified by a digital CMOS buffer. The hysteresis is created by shorting the resistor with M7. If the bulk of M6 is shorted to its source, the body effect is avoided and the two threshold voltages are (assuming high gain in the first stage of the comparator)

V_low = V_gs6, (1)

V_high = V_gs6 + R I_M5. (2)

The absolute value of the two voltages depends on the gate-source voltage of M6, but their difference depends only on the current in the first stage of the comparator and the resistor value.
Some applications (e.g. DC-DC converters) may require reference voltages corresponding to the comparator thresholds. These can be extracted with the circuits depicted in Fig. 2b. M11 is sized to have the same current density as M6, so that V_gs11 = V_gs6 = V_low. Similarly, if I_M12 = I_M5 and M13 is the same size as M6, then the voltage on the drain of M13 corresponds to V_high. By the same principle, if needed, setting the resistor value between 0 and R (the resistor value in the oscillator) can generate any voltage within the oscillator output voltage range.
Due to the delay of the comparator, the capacitor voltage v_cap overshoots the threshold V_high by SR+ t_d+, where SR+ is the positive slew rate on the capacitor given by I_M1/C and t_d+ is the rising propagation delay of the comparator. Similarly, in the opposite phase v_cap undershoots V_low by SR- t_d-, SR- being the negative slew rate given by I_M4/C and t_d- being the falling propagation delay. The rising and falling half-periods are

T+ = (V_high - V_low + SR+ t_d+ + SR- t_d-)/SR+, T- = (V_high - V_low + SR+ t_d+ + SR- t_d-)/SR-. (3)

Substituting (1) and (2) into (3) and summing both half-periods, we get for a symmetrical waveform (SR+ = SR- = I_M1/C) the following oscillation period

T = 2 (R I_M5 C / I_M1 + t_d+ + t_d-). (4)

The frequency of oscillation f = 1/T is therefore dependent on the product of R and C, as is its temperature dependence. The effect of the comparator delay can be compensated by adjusting the value of C.
The comparator delay portion of the oscillation period is dominated by t_d-, caused mostly by slewing of the cmp1 node from the saturation voltage of M6, V_ds6,sat, to the threshold of the second stage, given by V_DD - |V_THP|, where V_THP is the threshold voltage of the PMOS transistor M8. This delay t_slew can be estimated as follows. Slewing starts when the input voltage of the comparator crosses the threshold V_low. Around this operating point M6 can be approximated by a corresponding transconductance g_m6 that is charging the parasitic capacitance C_p of node cmp1. As the voltage on the capacitor v_cap continues to decrease linearly below V_low, the small-signal current charging C_p, given by g_m6 v_cap (with v_cap measured from V_low), grows proportionally. The slewing time can be computed from the integral

(1/C_p) ∫_0^t_slew g_m6 SR- t dt = V_DD - |V_THP| - V_ds6,sat. (5)

Solving for t_slew leads to

t_slew = sqrt[ 2 C_p (V_DD - |V_THP| - V_ds6,sat) / (g_m6 SR-) ]. (6)

Equation (6) shows that to decrease the slewing delay of the comparator, the parasitic capacitance C_p must be minimized and the transconductance g_m6 must be maximized.
The former can be done by minimizing M8 for lower gate capacitance, the latter by increasing the drain current of M6, which is given by I_M5.
Design
The proposed waveform generator with a 1 MHz frequency was designed in STMicroelectronics 180 nm BCD technology with a supply voltage of 1.8 V. The values of the passive components were selected to be easily integrated on-chip: R = 500 kΩ, C = 1 pF. A MOM (Metal-Oxide-Metal) capacitor was selected for good linearity, together with an N+ polysilicon resistor for good temperature behavior.
Since the on-chip resistors and capacitors have large process variations, trimming is usually employed to put the resultant frequency within a given specification. This can be accomplished by trimming either the resistor or the capacitor. The drawback of resistor trimming is a change of the triangle amplitude with the trimming code. This may not be an issue when only a digital output is used, but may pose a problem for subsequent processing when the triangular output is used as well, e.g. in DC-DC converters. For this reason capacitor trimming was selected, and the trimming circuit can be found in Fig. 3. The main capacitor C is accompanied by four binary-scaled capacitors C0-C3, which can be switched parallel to the main capacitor using transfer gates controlled by the trimming bits. The unit capacitance of the trimming capacitors, and therefore the trimming range, was selected according to the technology spread of the oscillation period and is about ±30 %. The remaining value of the main capacitor C was then reduced by the parasitic capacitances of the transistors connected to the cap node, e.g. the drain/source capacitances of Mn0-3, Mp0-3, M2, M3 or the gate capacitance of M6. This correction amounts to 65 fF. The typical bias current, as well as each of the branch currents through Mb1, Mb2, M1 or M5, is 1 µA. In order to stabilize the amplitude of the triangle waveform, the bias current should have an inverse temperature and process dependency to that of the resistor R. This is not a problem, as the bias current I_bias distributed across the chip is usually derived from a trimmed bandgap voltage V_bg and a reference resistor R_bias as I_bias = V_bg/R_bias. If R_bias is the same resistor type as R, then the triangle waveform amplitude is a scaled copy of the bandgap voltage.
The transistor dimensions are summarized in Tab. 2. The gate lengths of the transistors operating as switches were kept at the technology minimum of 180 nm. However, the transistors operating as current sources have gate lengths on the order of micrometers for high output resistance and good matching. The former has an impact on the triangular waveform linearity and the latter on the statistical duty-cycle variations.
Figure 4 shows a layout of the proposed circuit (excluding the reference generators of Fig. 2b). The circuit takes 0.0126 mm², of which the largest part is taken by the capacitors and the resistor.
Simulation Results
The designed circuit has been simulated in the Eldo simulator from Mentor Graphics. The bias current was derived from a constant voltage source and an N+ polysilicon resistor to simulate the chip bias current behavior and to stabilize the amplitude of the triangle waveform across corners and temperature. The simulated transient waveforms of the main circuit, including the reference generators, are depicted in Fig. 5. It can be seen that the triangular waveform generated on the cap node exceeds the ideal boundaries given by the reference voltages V_low and V_high (waveforms low and high). This is caused by the propagation delays of the comparator t_d+ and t_d-, as discussed in Sec. 2. The origin of the propagation delays can be seen in the depicted waveforms of the internal comparator nodes cmp1 and cmp2, which show the slew rate limitation caused by the designed constant currents of M5 and M9. The square waveform output clk is then produced by reshaping the signal on node cmp2 with the digital CMOS buffer. The oscillation frequency for a typical corner is 0.94 MHz; read from the waveforms, t_d+ is 3 ns and t_d- is 34 ns, out of which t_slew is about 28 ns. We can compare this result with the theoretical value given by (6). Using (values from the operating-point analysis) V_DD = 1.8 V, |V_THP| = 0.55 V, V_ds6,sat = 0.19 V, C_p = 4.9 fF, g_m6 = 18 µS and SR- = 1 V/µs, we get a theoretical value of t_slew equal to 24 ns, which is in good agreement with the simulated value.
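As a quick sanity check of equations (4) and (6), the following snippet (a back-of-the-envelope Python sketch, not part of the original design flow) plugs in the component values and operating-point numbers quoted above:

```python
import math

# Component values and currents from the design (Sec. 3)
R, C = 500e3, 1e-12                # ohm, farad
I_M1 = I_M5 = 1e-6                 # ampere
t_d_rise, t_d_fall = 3e-9, 34e-9   # simulated comparator delays

# Equation (4): oscillation period of the symmetrical waveform
T = 2 * (R * I_M5 * C / I_M1 + t_d_rise + t_d_fall)
print(f"f = {1 / T / 1e6:.2f} MHz")          # ~0.93 MHz vs. 0.94 MHz simulated

# Equation (6): slewing delay of node cmp1
V_DD, V_THP, V_ds6_sat = 1.8, 0.55, 0.19     # volt
C_p, g_m6, SR_neg = 4.9e-15, 18e-6, 1e6      # farad, siemens, volt/second
t_slew = math.sqrt(2 * C_p * (V_DD - V_THP - V_ds6_sat) / (g_m6 * SR_neg))
print(f"t_slew = {t_slew * 1e9:.0f} ns")     # ~24 ns, matching the text
```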
The average current consumption of the generator core (without the reference generators of Fig. 2b) is 4.1 µA. This is equivalent to 7.38 µW at the respective supply voltage.
In order to evaluate the process spread of the circuit and to assess the effectiveness of the trimming, a Monte Carlo (MC) analysis was run on top of the transient simulation. Figures 6 and 7 show histograms and statistical parameters of 500 runs of the MC analysis. The results for the oscillation frequency before trimming can be seen in Fig. 6a. The maximum deviation from the nominal frequency is 26 % and is caused by the process variability of the sheet capacitance and sheet resistance in the given technology process. Figure 6b shows the histogram of the frequency after trimming. The maximum deviation is now 4.27 % from the nominal frequency. The duty-cycle variation histogram is in Fig. 6c and its standard deviation is 0.82 %. This statistical variation of the duty cycle is caused by the mismatches of the current mirrors Mb3-M4 and Mb2-M1 and can be improved by enlarging the area of the transistors [14].
Figure 7 shows the histogram of the peak-to-peak amplitude of the triangle waveform. As described above, the bias current was assumed to be derived from an ideal bandgap voltage reference and the same resistor type as resistor R. The amplitude of the triangle waveform is thus not affected by the process spread of the resistor (which is around ±20 %) and is given mostly by the mismatches of Mb1, M5 and R.
The variation of the oscillation frequency with temperature can be seen in Fig. 8.
For the extended temperature range spanning from −40 °C to 125 °C, the total frequency variation is ±1.05 % and the temperature coefficient is therefore 127 ppm/°C.
Conclusion
A new area-efficient circuit generating a triangular waveform was proposed. The oscillation period, together with the major source of error caused by the propagation delay of the comparator, was derived. A 1-MHz waveform generator based on the proposed topology was designed in STMicroelectronics 180-nm BCD technology, consuming 7.38 µW and occupying only 0.0126 mm². The temperature and process stability of the oscillation frequency depends on the resistors and capacitors available in the given technology. The types of these elements can be selected to at least partially compensate for the temperature behavior of each other. In the presented design a temperature coefficient of 127 ppm/°C was achieved. To cope with the process spread, trimming is usually employed, as was demonstrated. The proposed topology can be used as a general-purpose square wave generator or as a triangular generator in DC-DC converters.
Fig. 2. Proposed circuit: (a) waveform generator, (b) reference generators (when not explicitly shown, the bulks are tied to V_DD or ground for PMOS and NMOS, respectively).
[Table 1: overview of the cited architectures, comparing the number of passive elements, number of transistors, architecture, and type of output signal (voltage/current).]
Anisotropic radiation from accretion disc-coronae in active galactic nuclei
In the unification scheme of active galactic nuclei (AGN), Seyfert 1s and Seyfert 2s are intrinsically same, but they are viewed at different angles. However, the Fe K\alpha emission line luminosity of Seyfert 1s was found in average to be about twice of that of Seyfert 2s at given X-ray continuum luminosity in the previous work (Ricci et al. 2014). We construct an accretion disc-corona model, in which a fraction of energy dissipated in the disc is extracted to heat the corona above the disc. The radiation transfer equation containing Compton scattering processes is an integro-differential equation, which is solved numerically for the corona with a parallel plane geometry. We find that the specific intensity of X-ray radiation from the corona changes little with the viewing angle \theta when \theta is small (nearly face-on), and it is sensitive to \theta if the viewing angle is large (\theta>40 degrees). The radiation from the cold disc, mostly in infrared/optical/UV bands, is almost proportional to cos\theta when \theta<40 degrees, while it decreases more rapidly than cos\theta when \theta>40 degrees because of strong absorption in the corona in this case. For seyfert galaxies, the Fe K\alpha line may probably be emitted from the disc irradiated by the X-ray continuum emission. The observed equivalent width (EW) difference between Seyfert 1s and Seyfert 2s can be reproduced by our model calculations, provided Seyfert 1s are observed in nearly face-on direction and the average inclination angle of Seyfert 2s ~65 dgrees.
INTRODUCTION
According to the unification scheme of active galactic nuclei (AGN), Seyfert 1s (Sy1s) and 2s (Sy2s) are intrinsically the same but viewed at different angles, which leads to different observational features (Antonucci 1993). Liu & Wang (2010) found that the Fe Kα line luminosities of Compton-thin Seyfert 2 galaxies are on average 2.9 times weaker than those of their Seyfert 1 counterparts. Ricci et al. (2014) found that the Fe Kα line luminosities of Sy1s are about twice those of Sy2s at a given X-ray continuum luminosity (10−50 keV). The reason is still unclear. One possibility is that such a difference is caused by anisotropic X-ray emission from these sources. Indeed, the orientation dependence of emission from AGN has been found and studied for a long time. Nemmen & Brotherton (2010) explored the uncertainty of the bolometric corrections of quasars with different viewing angles based on the accretion disc models of Hubeny et al. (2000), and found that a value of the bolometric luminosity for a quasar viewed at an angle of ≈ 30° will result in an ≈ 30% systematic error if the emission from the quasar is assumed to be isotropic. Runnoe, Shang, & Brotherton (2013) analyzed a sample of radio-loud quasars and found that the quasar luminosity changes with orientation. The sources viewed in the face-on direction are brighter than the edge-on sources by a factor of 2-3. Zhang (2005) explained the observed anticorrelation between the type II fraction and the X-ray luminosity in 2−10 keV based on the AGN unification model with only one intrinsic luminosity function for these two types of AGNs. Recently, DiPompeo et al. (2014) used two observed luminosity functions to investigate the intrinsic quasar luminosity function with the correction of a simple projection effect for the anisotropic emission of the accretion disc. They concluded that the orientation dependence is the most important one among several potential corrections. They also claimed that a more complex model of anisotropy may strengthen the orientation effect.
The black hole accretion disc-corona model has been widely used to explain both the thermal optical/UV and the power-law hard X-ray emission in the spectral energy distributions (SEDs) of active galactic nuclei (AGNs) (Galeev, Rosner, & Vaiana 1979; Haardt & Maraschi 1991, 1993; Nakamura & Osaki 1993; Svensson & Zdziarski 1994; Liu, Mineshige, & Shibata 2002; Liu, Mineshige, & Ohsuga 2003; Cao 2009; You, Cao, & Yuan 2012) and galactic black hole candidates (GBHC) (Esin et al. 1998; Nayakshin & Dove 2001). Although there are different physical mechanisms proposed in the previous works for heating the corona and for the interactions between the cold disc and the hot corona, the disc-corona is mostly structured as a sandwich-like, cylindrically symmetric system (Begelman & McKee 1990; Balbus & Hawley 1991; Meyer & Meyer-Hofmeister 1994; Dullemond 1999; Merloni & Fabian 2001). The optically thin and geometrically thick hot coronae are vertically connected to both sides of an optically thick and geometrically thin accretion disc. It was also suggested that such hot coronae may play an important role in launching the relativistic jets observed in X-ray binaries/AGN (e.g., Merloni & Fabian 2002; Cao 2004; Zdziarski et al. 2011; Wu et al. 2013; Cao 2014). In the accretion disc-corona system, the observed thermal optical/UV emission is believed to originate from the blackbody radiation of the thin disc passing through the hot corona. A small fraction of the soft photons are inverse Compton scattered by the hot electrons in the corona, which contributes to the observed power-law hard X-ray emission of the system. Moreover, as the temperature of the electrons in the transition layer between the corona and the disc experiences a rapid decrease from ∼10^8-10^9 K (the hot corona) to ∼10^4-10^5 K (the cold disc), thermal X-ray line emission may be produced in such a transition zone. In most previous works, either the cooling rate in the corona or the spectra from the accretion flow are calculated assuming the corona to be one parallel plane. The inverse Compton scattering is often computed by Monte Carlo simulation based on the escape probability method (Pozdniakov, Sobol, & Siuniaev 1977; Kawanaka, Kato, & Mineshige 2008; Liu, Mineshige, & Ohsuga 2003; Cao 2009).
Assuming that a fraction of the viscously dissipated energy in the disc is transported into the corona (probably by magnetic fields), we can calculate the structure of the disc-corona accretion flow with a set of equations, such as the energy equation, angular momentum equation, continuity equation and state equation, if the model parameters are given. Then, the emitted spectra from the accretion flow can be calculated. In this work, we explore the angle-dependent emission of the accretion disc-corona system in more detail, by simultaneously solving a set of equations describing the disc-corona structure and the equation of radiation transfer in the corona. The change of the spectra with the viewing angle is explored in detail. Our results are compared with the X-ray observations of Sy1s and Sy2s. The disc-corona model employed in this work is briefly described in §2. The calculation method is introduced in §3. We show the results and discussion in §4 and §5.
THE DISC-CORONA ACCRETION MODEL
The accretion disc-corona model used in this work is described in the previous works (Cao 2009). The detailed model description and calculation approach can be found in Cao (2009). Here we only briefly summarize the main features of the model. The energy equation of the cold thin disc is

Q+_dissi − Q+_cor + (1/2)(1 − a_r) Q+_cor = 8σT^4_disc/(3τ), (1)

where

Q+_dissi = (3Ṁ/8π) Ω_k^2(R) [1 − (R_in/R)^{1/2}] (2)

is the gravitational power dissipated in unit surface area of the accretion disc at radius R [where Ω_k(R) is the Keplerian angular velocity, R_in = 3R_S, R_S = 2GM_bh/c^2 is the Schwarzschild radius for a black hole of mass M_bh, and Ṁ is the mass accretion rate of the black hole]. The third term on the left side of equation (1) represents the fact that about half of the power dissipated in the corona is radiated back into the disc by Compton scattering; the reflection albedo of the disc a_r = 0.15 is adopted in the calculations. The right side of the equation represents the power radiated from the cold disc via blackbody radiation, where T_disc is the effective temperature in the mid-plane of the disc and τ is the optical depth in the vertical direction of the disc. The continuity equation of the disc is

Ṁ = 4πR ρ(R) H_d(R) v_R(R), (3)

where H_d(R) is the half thickness of the disc, ρ(R) is the mean density of the disc, and v_R(R) is the radial velocity of the accretion flow at radius R. The state equation of the gas in the disc is

p_disc = ρ k_B T_disc/(µ m_p), (4)

where µ = (1/µ_i + 1/µ_e)^{-1}, and µ_i = 1.23 and µ_e = 1.14 are adopted. The half thickness of the disc H_d is

H_d = c_s/Ω_k, with c_s = (p_disc/ρ)^{1/2}. (5)

The energy equation of the corona is

Q^ie_cor + δ Q+_cor = Q-_Comp, (6)

where Q^ie_cor is the energy transfer rate from the ions to the electrons via Coulomb collisions (see equation 11 in Cao 2009), δ is the fraction of the energy that directly heats the electrons, and Q-_Comp is the cooling rate in unit surface area of the corona via synchrotron, bremsstrahlung and Compton emission. The value of δ can be as high as ∼0.5 for magnetic reconnection, if the magnetic field in the plasma is strong (Bisnovatyi-Kogan & Lovelace 1997, 2000). Almost all the power dissipated in the hot corona is radiated away locally, which means that the radiated power in the corona is independent of the value of δ, and the temperature and density of the electrons in the corona are almost insensitive to this parameter (see Cao 2009 for a discussion). In the accretion disc-corona model, the detailed physical mechanism generating the energy source and heating the corona is still unclear, although corona heating processes such as magnetic field reconnection have been assumed in previous works (e.g. Di Matteo 1998; Di Matteo, Celotti, & Fabian 1999; Merloni & Fabian 2001; Cao 2009). To avoid this complexity, we introduce a parameter f_cor, the ratio of the power dissipated in the corona, Q+_cor, to the gravitational power dissipated in the disc, Q+_dissi,

f_cor = Q+_cor/Q+_dissi, (7)

in our model calculations.
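For orientation, the radial dissipation profile of equation (2) is simple to evaluate numerically; the Python sketch below (CGS units, using the standard thin-disc form and the parameters of §4 with ṁ = 0.1) is illustrative only:

```python
import numpy as np

# CGS constants
G, C_LIGHT, M_SUN = 6.674e-8, 2.998e10, 1.989e33

M_bh = 1e8 * M_SUN
R_S = 2 * G * M_bh / C_LIGHT**2            # Schwarzschild radius
L_Edd = 1.26e38 * 1e8                      # Eddington luminosity for 10^8 M_sun [erg/s]
Mdot = 0.1 * L_Edd / (0.1 * C_LIGHT**2)    # mdot = 0.1, with Mdot_Edd = L_Edd/0.1c^2

R = np.logspace(np.log10(3.01), 2.0, 200) * R_S  # from just outside R_in = 3 R_S to 100 R_S
Omega_k_sq = G * M_bh / R**3
Q_dissi = 3.0 * Mdot / (8.0 * np.pi) * Omega_k_sq * (1.0 - np.sqrt(3.0 * R_S / R))

print(f"peak dissipation rate: {Q_dissi.max():.3e} erg s^-1 cm^-2")
```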
RADIATIVE TRANSFER IN THE CORONA
We consider a parallel plane geometry for the corona above/below the disc. The cooling processes in the corona include Compton, bremsstrahlung and synchrotron emission. The incident photons from the disc at z = H_d are assumed to be blackbody radiation.
For simplicity, the electron density and temperature of the corona are assumed to be constant in the z direction.
Radiative transfer equation
The radiative transfer equation of the corona is

dI_ν(z, µ)/ds = j_ν − κ_ν I_ν(z, µ), (8)

where ds = dz/µ, µ = cos θ, θ is the angle of the photon with respect to the vertical direction of the disc, and I_ν(z, µ) is the specific intensity. For the different absorption and emission processes included in the corona, we have the combined absorption and emission coefficients: κ_ν = κ^ff_ν + κ_T is the total absorption coefficient including free-free (bremsstrahlung + synchrotron) absorption and Thomson scattering, and j_ν = j^ff_ν + j^C_ν is the emission coefficient (emissivity) including free-free (bremsstrahlung + synchrotron) emission and Comptonization. We can calculate I_ν(z, µ) when the structure of the corona, i.e. the electron temperature T_e,cor(R) and electron density n_e,cor(R), is given.
Absorption and emission coefficients
The absorption coefficient of Thomson scattering is κ_T = n_e,cor σ_T, where σ_T is the Thomson cross-section. The absorption coefficient including the bremsstrahlung and synchrotron processes follows from Kirchhoff's law,

κ^ff_ν = j^ff_ν/B_ν,

where B_ν is the blackbody emissivity and j^ff_ν = (χ^Brem_ν + χ^Syn_ν)/4π is the emission coefficient of the bremsstrahlung and synchrotron processes. The bremsstrahlung emissivity χ^Brem_ν and the synchrotron emissivity χ^Syn_ν are taken from Narayan & Yi (1995) and Manmoto (2000).
The bremsstrahlung emissivity χ^Brem_ν is given by Narayan & Yi (1995), with Ḡ the Gaunt factor as in Rybicki & Lightman (1986). The bremsstrahlung cooling rate per unit volume consists of electron-ion and electron-electron rates, q-_brem = q-_ei + q-_ee. The electron-ion cooling rate q-_ei depends on the fine-structure constant α_f, the dimensionless electron temperature θ_e = kT_e/m_e c^2, and a function F_ei(θ_e) whose form is given in Narayan & Yi (1995). The electron-electron cooling rate is

q-_ee = n_e^2 c r_e^2 α_f m_e c^2 (20/(9π^{1/2}))(44 − 3π^2) θ_e^{3/2} (1 + 1.1θ_e + θ_e^2 − 1.25θ_e^{5/2}) for θ_e < 1,

q-_ee = n_e^2 c r_e^2 α_f m_e c^2 24θ_e (ln 1.1232θ_e + 1.28) for θ_e > 1,

where r_e = e^2/m_e c^2 is the classical electron radius. The synchrotron emissivity χ^Syn_ν and the associated function I′(x) are taken from Manmoto (2000), where K_2 is the second-order modified Bessel function and B is the magnetic field strength. For the equipartition magnetic field case, we have B = (8π p_gas,cor)^{1/2} (p_gas,cor is the gas pressure in the corona). The emissivity of the inverse Compton scattering is calculated with the method proposed in Coppi & Blandford (1990) (equation 22), with the seed photon intensity per unit volume including the incident photons from all directions (equation 23).
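For illustration, the two branches of the electron-electron cooling rate quoted above translate directly into code (CGS units; a sketch based on the Narayan & Yi 1995 expressions as reproduced here):

```python
import math

C_LIGHT = 2.998e10        # speed of light [cm/s]
R_E = 2.818e-13           # classical electron radius e^2/(m_e c^2) [cm]
ALPHA_F = 1.0 / 137.036   # fine-structure constant
ME_C2 = 8.187e-7          # electron rest energy m_e c^2 [erg]

def q_ee(n_e: float, theta_e: float) -> float:
    """Electron-electron bremsstrahlung cooling rate [erg cm^-3 s^-1].

    n_e is the electron number density [cm^-3]; theta_e = kT_e/(m_e c^2).
    """
    prefactor = n_e**2 * C_LIGHT * R_E**2 * ALPHA_F * ME_C2
    if theta_e < 1.0:
        return (prefactor * 20.0 / (9.0 * math.sqrt(math.pi))
                * (44.0 - 3.0 * math.pi**2) * theta_e**1.5
                * (1.0 + 1.1 * theta_e + theta_e**2 - 1.25 * theta_e**2.5))
    return prefactor * 24.0 * theta_e * (math.log(1.1232 * theta_e) + 1.28)

# Example: corona-like conditions, n_e = 1e10 cm^-3 and T_e ~ 1e9 K (theta_e ~ 0.17)
print(f"{q_ee(1e10, 0.17):.3e} erg cm^-3 s^-1")
```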
Numerical solution to the radiative transfer equation
The radiative transfer equation (8) is an integro-differential equation. For a given disc-corona structure (including the temperatures and densities of the electrons and ions, the half thickness of the disc H_d and the thickness of the corona H_cor, etc.), one can solve this equation by iteration (e.g., Cao et al. 1998). An initial solution can be obtained by solving the equation neglecting the Compton scattering term, i.e.,

dI_ν(z, µ)/ds = j^ff_ν − κ_ν I_ν(z, µ).

We solve the above equation numerically for the corona with the boundary condition at the disc surface,

I_ν(H_d, µ) = B_ν(T^s_d) in the range 0 < µ < 1,

where T^s_d is the temperature of the disc surface, and the boundary condition at the upper surface of the corona,

I_ν(H_d + H_cor, µ) = 0 in the range −1 < µ < 0.

With the derived initial solution, we calculate the emissivity of the inverse Compton scattering from equations (22) and (23), and solve equation (8) numerically. With the derived I_ν(z, µ), the emissivity of the Compton emission can be recalculated with equations (22) and (23). We find that the final solution is achieved after several iterations, when the solutions converge.

Integrating the intensities over the different directions (µ = cos θ) and frequencies ν, the energy loss via radiation from the upper and lower surfaces of the corona can be calculated as

F_rad = 2π ∫ dν [ ∫_0^1 I_ν(H_d + H_cor, µ) µ dµ − ∫_{−1}^0 I_ν(H_d, µ) µ dµ ].

Subtracting the incident blackbody radiation from the thin disc, we obtain the cooling rate in unit surface area of the corona,

Q-_Comp = F_rad − σ(T^s_d)^4. (29)
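The iteration scheme can be illustrated with a toy grey (frequency-independent) plane-parallel version of the problem, in which isotropic scattering stands in for the full Compton source term; this Python sketch shows the method only, not the frequency-dependent solver used in the paper:

```python
import numpy as np

# Toy grey slab: total opacity kappa (per unit length), thermal source S_th,
# isotropic scattering albedo w. Solve
#   mu dI/dz = -kappa I + kappa [(1 - w) S_th + w J],  with J = mean intensity,
# by lambda iteration: start with J = 0 (no scattering), then update J from I.
nz, nmu = 200, 16
z = np.linspace(0.0, 1.0, nz)
dz = z[1] - z[0]
mu, wts = np.polynomial.legendre.leggauss(nmu)   # Gauss quadrature on [-1, 1]
kappa, w, S_th = 5.0, 0.7, 1.0                   # assumed toy parameters
I_bottom = 1.0                                   # incident (disc) intensity at z = 0

J = np.zeros(nz)
for _ in range(50):                              # iterate until J converges
    I = np.zeros((nmu, nz))
    S = (1.0 - w) * S_th + w * J                 # current source function
    for k, m in enumerate(mu):
        if m > 0:                                # upward rays: march from bottom
            I[k, 0] = I_bottom
            for i in range(1, nz):
                I[k, i] = I[k, i-1] + dz / m * kappa * (S[i-1] - I[k, i-1])
        else:                                    # downward rays: nothing enters from top
            for i in range(nz - 2, -1, -1):
                I[k, i] = I[k, i+1] - dz / m * kappa * (S[i+1] - I[k, i+1])
    J_new = 0.5 * np.einsum("k,kz->z", wts, I)   # J = (1/2) * integral of I over mu
    if np.max(np.abs(J_new - J)) < 1e-8:
        break
    J = J_new

print(f"emergent mean intensity at the top: {J[-1]:.4f}")
```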
Integrating the intensities over the different directions (µ = cos θ) and frequencies ν, the energy loss via radiation from the upper and lower surfaces of the corona can be calculated by Subtracting the incident blackbody radiation from the thin disc, we obtain the cooling rate in unit surface area of the corona, Given the black hole mass M bh , the dimensionless mass accretion rateṁ (ṁ =Ṁ /Ṁ Edd , whereṀ Edd = L Edd /0.1c 2 ), and the fraction of the energy directly heats the electron δ, we can derive the disc structure (such as, the effective temperature in the mid-plane of the disc, the density in the disc, and the half thickness of the disc, etc.) as a function of radius R from equations (1)-(5), and equation (7), when the ratio of the power dissipated in the corona fcor is specified. The structure of the corona (such as, temperatures and densities of the electrons and ions, scaleheight of the corona) can be derived with equations (6) and (29) under the assumption of equipartition of the magnetic pressure and the gas pressure in the corona. We assume the temperature of the ions in the corona Ti,cor = 0.9Tvir = 0.9GM mp/3kR in this work as that in Cao (2009). Thus, the accretion disc-corona structure is available by solving the radiation transfer equation together with disc-corona equations described in §2.
The specific intensity from the corona at a certain radius R is obtained as a function of the direction and frequency of the photons. Integrating the intensity over the whole surface of the corona, the specific luminosity emitted from the corona per steradian is

L_ν(µ) = µ ∫ 2πR I_ν(R, µ) dR.

Thus, we can obtain the direction-dependent spectrum of an accretion disc-corona system for a set of given disc parameters.
RESULTS
We adopt the model parameters as follows: the black hole mass M_bh = 10^8 M_⊙, the maximum radius of the disc-corona flow R_cormax = 100R_S, and the fraction of the energy directly heating the electrons δ = 0.5 are fixed in all cases. The mass accretion rate ṁ and the ratio of the power dissipated in the corona, f_cor, are taken as two free parameters in our model calculations.
In Figure 1, we show the emergent spectra for four disc-corona accretion models with different values of the parameters ṁ and f_cor. The spectra observed at different viewing angles are shown with black dashed, cyan dotted, magenta dash-dotted, green thin solid, and blue thin dash-dotted lines, respectively; the corresponding value of θ is denoted near each spectrum. The red thick solid line is the spectrum integrated over the directions from µ = 0 to µ = 1. The four vertical dotted lines represent the four typical frequency points corresponding to 2500 Å, 0.1 keV, 2 keV, and 10 keV, respectively. We can see from these figures that the spectra from different emitting directions are different both in luminosity and in spectral shape. The luminosity decreases with increasing viewing angle θ between the emitted photons and the axis of the accretion disc. The spectral shapes are similar in almost all bands but very different between the optical/UV (∼2500 Å) and soft X-ray (0.1−1 keV) bands.
The different values of the mass accretion rate, ṁ = 0.1 and ṁ = 0.5, are adopted in Figure 1(a) and Figure 1(d), respectively, while the values of all other parameters are the same in these two figures.
The larger the mass accretion rate adopted, the stronger and harder the spectra are. The different values of fcor = 0.1 and fcor = 0.3 are adopted in Figure 1(a) and Figure 1(b), respectively, while the values of all other parameters are the same in these two figures. We find that the change of the spectra with fcor is also evident. The larger the value of fcor adopted, the harder the spectra are.
The spectral shapes plotted in the four panels of Figure 1 show a certain degree of degeneracy between the two parameters ṁ and f_cor. Different combinations of these two parameters may give very similar spectral shapes, but these can be distinguished by the different luminosities, which are directly controlled by ṁ.
In order to study the change of the spectra with the viewing direction, we calculate some typical quantities of the observed spectra, including the observed bolometric luminosity L_bol, the spectral luminosity at the optical/UV band (λ = 2500 Å), L_o, and the spectral luminosity at the X-ray band (E = 2 keV), L_x. The optical/UV to X-ray power index α_ox is defined as

α_ox = −log(L_x/L_o)/log(ν_x/ν_o),

where ν_o and ν_x are the frequencies corresponding to 2500 Å and 2 keV, and the X-ray spectral index between 2 keV and 10 keV is α_x (defined by L^(2−10 keV)_ν,x ∝ ν^{−α_x}). The changes of these quantities with the viewing angles are plotted in Figures 2-5.
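For reference, the α_ox definition above reduces to a one-line helper (assuming the usual convention of monochromatic luminosities at 2500 Å and 2 keV):

```python
import math

NU_O = 2.998e8 / 2500e-10            # Hz, frequency at 2500 angstroms
NU_X = 2e3 * 1.602e-19 / 6.626e-34   # Hz, frequency of a 2 keV photon

def alpha_ox(L_o: float, L_x: float) -> float:
    """Optical/UV-to-X-ray power index from monochromatic luminosities."""
    return -math.log10(L_x / L_o) / math.log10(NU_X / NU_O)

# Example: a source 2.5 dex fainter per Hz at 2 keV than at 2500 angstroms
print(round(alpha_ox(1e30, 10**27.5), 2))  # ~0.96
```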
In Figure 2, we plot the change of the observed bolometric luminosity with viewing angle. The red solid, green dashed, cyan dotted, and black dash-dotted lines correspond to the four models (a)-(d), respectively. The blue thin dashed line indicates the simple relation L_bol ∝ cos θ, representing the area-projection effect.
In Figures 3 and 4, we plot the changes of the observed spectral luminosities at the optical/UV band (λ = 2500 Å) and the X-ray (2 keV) band with the viewing angle. The red solid, green dashed, cyan dotted, and black dash-dotted lines correspond to the four models (a)-(d), respectively. The blue thin dashed lines again represent the simple relations L_o ∝ cos θ and L_x ∝ cos θ. We find that the two spectral luminosities (optical and X-ray bands) show different changes with the viewing angle θ: the X-ray luminosity L_x decreases more slowly than cos θ, while L_o decreases more rapidly than cos θ. The X-ray luminosity L_x is almost isotropic when the viewing angle is small (nearly face-on), and it becomes strongly anisotropic if the viewing angle is large (≳ 30°-40°). The optical luminosity L_o is almost proportional to cos θ when θ ≲ 30°-40°, but decreases more rapidly than cos θ when viewed at large angles (≳ 30°-40°).
The changes of the optical/UV to X-ray spectral index, αox, and of the X-ray spectral index between 2 keV and 10 keV, αx, with viewing angle are shown in Figure 5. The red solid, green dashed, cyan dotted, and black dash-dotted lines correspond to the four models (a)-(d), respectively. In each model, αox decreases very slowly with θ at small viewing angles and decreases quickly at large, nearly edge-on angles. The situation is different for αx, which remains almost unchanged with θ in each model.
The hard X-ray continuum emission comes from the coronae above the discs, and the discs are irradiated by the X-ray photons from the corona. Our model calculations show that the angle-dependent X-ray continuum spectra are anisotropic: they deviate from a cos θ dependence. The Fe Kα lines are probably emitted from the irradiated discs. In this case, the Fe line emission is anisotropic, and its angle dependence follows ∼cos θ. Thus, we can calculate the observed equivalent widths as functions of viewing angle for different values of the model parameters. The result for model (a) is shown with the red solid line in Figure 6.
It is still unclear whether part of the observed Fe line emission comes from the torus, which emits nearly isotropically. We estimate how the results would be affected by a torus contribution by assuming that x per cent of the total Fe line luminosity is from the torus. The ratio of the equivalent width of the Fe line viewed at an angle θ to that viewed face-on is then obtained by combining the isotropic torus contribution with the cos θ-dependent disc contribution and dividing by the angle-dependent continuum luminosity. We re-calculate the relative equivalent width of the Fe line for x = 50, 20, and 10. The results are shown in Figure 6 with black dashed, green dotted, and blue dash-dotted lines, respectively.
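A minimal sketch of this estimate is given below, assuming a toy cos θ scaling for the disc line, an isotropic torus fraction x, and a mock continuum anisotropy of cos^0.5 θ standing in for the full radiative-transfer result; none of these profiles are the model's actual output.

```python
import numpy as np

def relative_ew(theta_deg, x):
    """Ratio EW(theta)/EW(0) for the narrow Fe K-alpha line.

    Toy-model assumptions (not the paper's full radiative transfer):
      - a fraction x of the line luminosity comes from the torus (isotropic),
      - the remaining (1 - x) comes from the irradiated disc, scaling as cos(theta),
      - the X-ray continuum under the line is mildly anisotropic, mocked up
        here as cos(theta)**0.5, softer than the cos(theta) projection.
    """
    mu = np.cos(np.radians(theta_deg))
    line = x + (1.0 - x) * mu   # normalized line luminosity
    continuum = mu**0.5         # toy continuum anisotropy
    return line / continuum     # equals 1 face-on by construction

for theta in (0, 30, 45, 65):
    print(f"theta = {theta:2d} deg  EW(theta)/EW(0) = {relative_ew(theta, x=0.2):.3f}")
```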
DISCUSSION
Solving the radiative transfer equation of the corona numerically, we obtain the cooling rate in the hot corona and the emergent spectrum of the accretion disc-corona system viewed at any specified angle. The calculations are carried out for four sets of model parameters, in which different accretion rates ṁ and different ratios of the power dissipated in the corona, fcor, are adopted, while all other model parameters are fixed. Overall, the calculated spectra are sensitive to the values of the model parameters. For the models with the same value of fcor [compare Fig. 1(a) with 1(d)], the larger the accretion rate, the harder the spectra from the disc-corona accretion flow. The X-ray emission predominantly originates from inverse Compton scattering of the soft photons from the thin disc by the hot electrons in the corona. The larger the accretion rate, the more soft photons are radiated from the disc; the emission in the lower energy band (i.e. the optical/UV band) thus becomes relatively weaker while the higher energy band (i.e. the X-ray band) becomes relatively stronger, and the spectra are harder (see αox in the left panel of Fig. 5). On the other hand, for the models with the same value of ṁ [compare Fig. 1(a) with 1(b), or Fig. 1(c) with 1(d)], the higher the ratio of the power dissipated in the corona, fcor, the harder the spectra from the disc-corona accretion flow. We find that the electron temperature of the corona increases with fcor, which makes the Compton emission harder in the high-fcor cases. The spectra of the disc-corona system observed from different directions have similar shapes in the 2-10 keV X-ray band: the X-ray spectral index αx remains almost unchanged for different viewing angles θ (see the lower panel of Figure 5). The X-ray emission mostly originates from inverse Compton scattering of the soft photons radiated from the thin disc by the hot electrons in the corona, and the inverse Compton scattered X-ray spectrum mainly depends on the spectrum of the seed photons and the temperature of the hot electrons. Thus, the X-ray spectral index is not sensitive to the viewing direction. Comparing the values of αx for the different models, we find αx ≈ 1.64 for the case with ṁ = 0.1 and fcor = 0.1, while αx ≈ 1.40 for the case with ṁ = 0.5 and fcor = 0.1. This result seems inconsistent with the observed results and theoretical predictions of previous works, which find αx increasing with the Eddington ratio L bol/L Edd or accretion rate ṁ (e.g., see Fig. 4 in Cao 2009). The reason is that we employ the same value of the parameter, fcor = 0.1, for these two models, which is inconsistent with the fact that the value of fcor decreases with the accretion rate (e.g., see Fig. 1 in Cao 2009). If we instead compare the results of models (b) and (c), which adopt fcor = 0.3 for the case with ṁ = 0.1 and a smaller fcor = 0.06 for the case with ṁ = 0.5, we have αx ≈ 1.43 for the case with ṁ = 0.1 and fcor = 0.3 (see Figure 5).
Figure 5. The optical/UV to X-ray spectral index, αox (upper panel), and the X-ray spectral index between 2 keV and 10 keV, αx (lower panel), as functions of the viewing angle. The red solid, green dashed, cyan dotted, and black dash-dotted lines correspond to the four models (a)-(d), respectively.

The main focus of this work is to explore the change of the emergent spectra from the disc-corona system viewed at different angles. In Figure 1, we find that the luminosity decreases with increasing viewing angle θ with respect to the axis of the accretion disc. The detailed results for the observed bolometric luminosity are plotted in Figure 2, which shows that the bolometric luminosity is nearly proportional to µ (µ = cos θ). This is due to the area-projection effect. The change of the spectral shapes with viewing angle differs between the optical/UV and soft X-ray bands (see Figure 1). The observed spectral luminosity at a typical optical/UV wavelength of 2500 Å decreases with θ (see Figure 3), and it decreases more rapidly than the cos θ relation. The blackbody emission from the thin disc is partly absorbed in the corona. For the optical/UV emission from the disc, the specific intensity of the photons leaving the upper surface of the corona is Iν ∼ Iν,0 exp(−τ0/µ), where Iν,0 is the specific intensity of the photons injected at the lower surface of the corona, and τ0 is the vertical optical depth of the corona. Thus, the observed spectral luminosity in the optical/UV band is proportional to µ exp(−τ0/µ), which decreases more quickly with θ than the cos θ relation. On the other hand, although the X-ray spectral shape remains unchanged for different θ, the observed spectral luminosity at 2 keV decreases with θ (see Figure 4), though more slowly than the cos θ relation.
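The following sketch illustrates this attenuation effect numerically, assuming an illustrative vertical optical depth τ0 = 0.2 (not a fitted value); it compares the pure projection cos θ with µ exp(−τ0/µ), both normalized to the face-on direction.

```python
import numpy as np

tau0 = 0.2  # assumed vertical optical depth of the corona (illustrative value)

theta = np.linspace(0.0, 85.0, 18)       # viewing angle in degrees
mu = np.cos(np.radians(theta))

projected = mu                            # pure area-projection, L ~ cos(theta)
attenuated = mu * np.exp(-tau0 / mu)      # disc photons absorbed in the corona

# Normalize both curves to the face-on value to compare their shapes:
# the attenuated curve falls off faster than cos(theta) at large angles.
for t, p, a in zip(theta, projected / projected[0], attenuated / attenuated[0]):
    print(f"theta={t:5.1f}  cos(theta)={p:.3f}  mu*exp(-tau0/mu)={a:.3f}")
```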
The narrow 6.4 keV Fe Kα lines are ubiquitous in AGNs. Although their origin is still uncertain, it has been suggested that they may originate from the distant molecular cloud (torus), the outer accretion disc, or/and the broad-line region (BLR). The BLR origin is ruled out by the lack of correlation between the Fe Kα core width and the BLR line (i.e., Hβ) width (Nandra 2006). Observationally, the Fe Kα line emission of type I AGNs is systematically stronger than that from type II AGNs at the same X-ray continuum luminosity. Compiling 89 Seyfert galaxies and using [O IV] emission to estimate the intrinsic luminosity of the sources, Liu & Wang (2010) found that the Fe Kα line luminosities of Compton-thin Seyfert 2 galaxies are on average 2.9 times weaker than those of their Seyfert 1 counterparts. Ricci et al. (2014) found that the Fe Kα line luminosity is correlated with the 10-50 keV X-ray continuum luminosity for both Sy1s and Sy2s; the slopes of the correlations are almost the same for the two types of source, but the Fe Kα line luminosities of Sy1s are about twice those of Sy2s at a given X-ray continuum luminosity.

Figure 6. The equivalent width of the narrow Fe Kα line emitted from an AGN as a function of the viewing angle, calculated with model (a). The red solid line represents the case in which all the line emission is radiated from the thin accretion disc. The black dashed, green dotted, and blue dash-dotted lines are the results for the cases in which 50%, 20%, and 10% of the line luminosity is contributed by the torus, respectively. The black dotted horizontal line marks a relative equivalent width of 0.5.
It is believed that type 1 Seyfert galaxies are intrinsically the same as type 2 Seyferts but viewed at different angles (Antonucci 1993). We find that the observed systematic difference in the EW of the Fe Kα emission lines between Sy1s and Sy2s can be attributed to the difference in viewing angles (see Figure 6). This EW difference can be reproduced by our model calculations, provided Sy1s are observed nearly face-on and the average inclination angle of Sy2s is ∼65°, which supports the unification scheme of AGN. If a fraction of the Fe line emission is contributed by the torus, a larger average inclination angle is required for Sy2s, which implies that the contribution from the torus should be much less than that from the disc.
No significant correlation is found between the spectral index αox and the radio core dominance parameter R, which is believed to be an indicator of the viewing angle (Runnoe, Shang, & Brotherton 2013). Our results show that the spectral index αox depends only weakly on the inclination angle (see Figure 5). This is not surprising, because there is a strong correlation between αox and the optical luminosity/Eddington ratio (e.g., Vignali, Brandt, & Schneider 2003; Grupe et al. 2010; Lusso et al. 2010). This correlation may smear out any possible correlation between αox and inclination angle in a normal AGN sample. We suggest that the investigation of the αox-inclination relation should be carried out with a sample of AGNs in a narrow range of Eddington ratio/luminosity.
In the standard unification model of AGNs, the different observational properties of the different types of AGN, such as different continuum spectral shapes and broad emission lines, can be explained by the different inclination angles of the black hole accretion flow and the surrounding torus. We show in this work that the spectra emitted from the accretion flow of a black hole are anisotropic with viewing angle, consistent with the area-projection effect in the bolometric luminosity but very different in the other characteristics of the SED, i.e., the spectral luminosities in the optical/UV band and the X-ray band. Our calculations of accretion disc-corona spectra may provide a more precise orientation-effect correction when predicting the intrinsic luminosity functions of AGN sources from luminosities observed in a given band. Recently, the effect of anisotropic radiation from accretion discs on the luminosity function derived from an AGN sample was evaluated by DiPompeo et al. (2014), who found that the bright end of the luminosity function may be overestimated by a factor of ∼2 if this effect is neglected. A simple cos θ-dependent specific intensity from a bare accretion disc was used in their estimates. Our detailed calculations of the disc-corona spectra as functions of the viewing angle can be incorporated in deriving the intrinsic AGN luminosity.
The calculations of the radiative transfer in the corona in this work are carried out in the Newtonian frame. A general relativistic accretion disc-corona model is required for direct modelling of the observed spectra of AGN. The calculations in this work can readily be extended to accretion discs surrounding Kerr black holes in the general relativistic frame; this will be reported in our future work.
Cytokine and nitric oxide patterns in dogs immunized with LBSap vaccine, before and after experimental challenge with Leishmania chagasi plus saliva of Lutzomyia longipalpis
In the studies presented here, dogs were vaccinated against Leishmania (Leishmania) chagasi challenge infection using a preparation of Leishmania braziliensis promastigote proteins with saponin as adjuvant (LBSap). Vaccination with LBSap induced a prominent type 1 immune response that was characterized by increased levels of interleukin (IL-) 12 and interferon gamma (IFN-γ) production by peripheral blood mononuclear cells (PBMC) upon stimulation with soluble vaccine antigen. Importantly, results showed that this type of responsiveness was sustained after challenge infection; at days 90 and 885 after L. chagasi challenge infection, PBMCs from LBSap-vaccinated dogs produced more IL-12, IFN-γ and concomitant nitric oxide (NO) when stimulated with Leishmania antigens as compared to PBMCs from the respective control groups (saponin, LB-treated, or non-treated control dogs). Moreover, transforming growth factor (TGF)-β decreased in the supernatant of SLcA-stimulated PBMCs in the LBSap group at 90 days. Bone marrow parasitological analysis revealed a decreased frequency of parasitism in the presence of vaccine antigen. It is concluded that vaccination of dogs with the LBSap vaccine induced a long-lasting type 1 immune response against L. chagasi challenge infection.
Introduction
Zoonotic visceral leishmaniasis is caused by protozoan species belonging to the Leishmania donovani complex (Leishmania infantum syn. Leishmania chagasi, in Latin America) and is widely distributed in the Mediterranean Basin, the Middle East, and South America (Desjeux, 2004). Dogs are the main reservoir for the parasite in different geographical regions of the globe and play a relevant role in transmission to humans (Deane, 1961; Dantas-Torres, 2006). Thus, the current strategy for control of the disease includes the detection and elimination of seropositive dogs, alongside vector control and therapy for human infection (Tesh, 1995). Chemotherapy in dogs still does not provide parasitological cure (Noli and Auxilia, 2005); for this reason, a vaccine against visceral leishmaniasis (VL) would be an important tool in the control of canine visceral leishmaniasis (CVL) and would also dramatically decrease the infection pressure of L. chagasi on humans (Hommel et al., 1995; Dye, 1996).
Toward this purpose, establishing biomarkers of immunogenicity is considered critical in analyzing candidate vaccines against CVL (Reis et al., 2010; Maia and Campino, 2012), and this strategy is being used to identify the pattern of immune response in dogs and to further the search for vaccine candidates against CVL (Reis et al., 2010). Several studies have reported the potential of different CVL vaccines to trigger immunoprotective mechanisms against Leishmania infection (Borja-Cabrera et al., 2002; Rafati et al., 2005; Holzmuller et al., 2005; Giunchetti et al., 2007; Lemesre et al., 2007; Araújo et al., 2008, 2009; Fernandes et al., 2008; Giunchetti et al., 2008a,b).
Studies evaluating other biomarkers of immunogenicity induced by the LBSap vaccine (composed of L. braziliensis promastigote proteins plus saponin as the adjuvant) have demonstrated higher levels of circulating T lymphocytes (CD5+, CD4+, and CD8+) and B lymphocytes (CD21+) and increased levels of Leishmania-specific CD8+ and CD4+ T cells (Giunchetti et al., 2007, 2008a). The LBSap vaccine is considered safe for administration, without induction of ulcerative lesions at the site of inoculation (Vitoriano-Souza et al., 2008). Moreover, LBSap-vaccinated dogs presented high IFN-γ and low IL-10 and TGF-β1 expression in the spleen, with a significant reduction of parasite load in this organ (Roatt et al., 2012). Additionally, LBSap displayed a strong and sustained induction of humoral immune response, with increased levels of anti-Leishmania total IgG as well as both IgG1 and IgG2, after experimental challenge (Roatt et al., 2012).
Considering the promising results of the LBSap vaccine, we aimed to further evaluate biomarkers of immunogenicity before and after experimental L. chagasi challenge. Thus, the profiles of different cytokines (IL-4, IL-10, TGF-β, IL-12, IFN-γ, and tumor necrosis factor [TNF]-α) and of nitric oxide (NO) in supernatants of peripheral blood mononuclear cell (PBMC) cultures were evaluated before the first immunization (T0), 15 days after completion of the vaccine protocol (T3), and at 90 (T90) and 885 (T885) days after experimental L. chagasi challenge. The frequency of parasitism in the bone marrow was also evaluated until T885.
Materials and methods
2.1. Animals, vaccination and experimental challenge with L. chagasi plus saliva of Lutzomyia longipalpis

Twenty male and female mongrel dogs that had been born and reared in the kennels of the Instituto de Ciências Exatas e Biológicas, Universidade Federal de Ouro Preto, Ouro Preto, Minas Gerais, Brazil, were treated at 7 months with an anthelmintic and vaccinated against rabies (Tecpar, Curitiba-PR, Brazil), canine distemper, type 2 adenovirus, coronavirus, parainfluenza, parvovirus, and leptospira (Vanguard® HTLP 5/CV-L; Pfizer Animal Health, New York, NY, USA). The absence of specific anti-Leishmania antibodies was confirmed by indirect fluorescence immunoassay. The dogs were divided into four experimental groups: (i) the control (C) group (n = 5) received 1 ml of sterile 0.9% saline; (ii) the LB group (n = 5) received 600 µg of L. braziliensis promastigote protein in 1 ml of sterile 0.9% saline; (iii) the Sap group (n = 5) received 1 mg of saponin (Sigma Chemical Co., St. Louis, MO, USA) in 1 ml of sterile 0.9% saline; and (iv) the LBSap group (n = 5) received 600 µg of L. braziliensis promastigote protein and 1 mg of saponin in 1 ml of sterile 0.9% saline. All animals received subcutaneous injections in the right flank at intervals of 4 weeks, for a total of three injections. The challenge of the experimental animals was performed 100 days after the vaccination protocol: all dogs received, intradermally in the inner side of the left ear, 1.0 × 10⁷ stationary-phase promastigotes of L. chagasi together with 5 acini of the salivary gland of L. longipalpis. This preliminary stage of the study was performed from 2005 to 2007.
Vaccine preparation
Promastigotes of L. braziliensis (MHOM/BR/75/M2903) were maintained in in vitro culture in NNN/LIT media as previously described. Briefly, parasites were harvested by centrifugation (2000 × g, 20 min, 4 °C) from 10-day-old cultures, washed three times in saline buffer, fully disrupted by ultrasound treatment (40 W, 1 min, 0 °C), separated into aliquots, and stored at −80 °C until required for use. Protein concentration was determined according to the method of Lowry (Lowry et al., 1951). The LBSap vaccine was previously described by Giunchetti et al. (2007) and registered at the Instituto Nacional da Propriedade Industrial (Brazil) under patent number PI 0601225-6 (17 February 2006).
Blood sample collection and in vitro assays
Peripheral blood samples were collected before the first immunization (T0), 15 days after completion of the vaccine protocol (T3), and at 90 (T90) and 885 (T885) days after experimental L. chagasi challenge, by puncture of the jugular vein into sterile heparinized 20 ml syringes. To obtain PBMCs for the in vitro analysis, the collected blood was layered over 10 ml of Ficoll-Hypaque (Histopaque® 1077, Sigma) and subjected to centrifugation at 450 × g for 40 min at room temperature. The separated PBMCs were resuspended in Gibco RPMI 1640 medium, washed twice with RPMI 1640, centrifuged at 450 × g for 10 min at room temperature, homogenized, and finally resuspended in RPMI 1640 at 10⁷ cells/ml as previously described.
The in vitro assays were performed in 48-well flat-bottomed tissue culture plates (Costar, Cambridge, MA, USA), with each well containing 650 µl of culture medium (10% fetal bovine serum, 1% streptomycin/penicillin, 2 mM L-glutamine, and 0.1% β-mercaptoethanol in RPMI 1640) and 50 µl of PBMCs (5.0 × 10⁵ cells/well), with 100 µl of vaccine soluble antigen (VSA; L. braziliensis, 25 µg/ml) or 100 µl of soluble L. chagasi antigen (SLcA, 25 µg/ml) obtained according to Reis et al. (2006a,b). One hundred µl of RPMI was added in place of the antigenic stimulus in the non-stimulated control cultures. Incubation was carried out in a humidified incubator with 5% CO₂ at 37 °C for 5 days, after which the supernatants were collected and stored in a freezer at −80 °C for detection of cytokines and NO.
The in vitro evaluation was performed with the supernatants of PBMCs collected at T0, T3, T90 and T885, which were stored as described above.
All cytokine measurements were performed using 96-well plates (Costar®, Washington, DC) according to the R&D Systems instructions. Readings were performed with an automatic microplate reader (EL800, BioTek, Winooski, VT) at a wavelength of 450 nm.
NO production
Levels of NO were quantified indirectly by measuring nitrite in supernatants of PBMC cultures by the Griess reaction (Green et al., 1982; Gutman and Hollywood, 1992). Duplicate samples were assayed in 96-well flat-bottom plates (Nunc, Naperville, IL). Briefly, a 100-µl aliquot of cell-free culture supernatant was mixed with 100 µl of Griess reagent (1% sulfanilamide, 0.1% naphthylethylenediamine dihydrochloride, and 2.5% phosphoric acid, all from Sigma). Following 10 min of incubation at room temperature in the dark, the absorbance was measured at 540 nm using a microplate reader (BioTek EL800). The concentration of nitrite was determined by interpolation from a standard curve constructed using sodium nitrite solutions of known concentration in the range 0-100 µM.
To discount the interference of nitrite already present in the culture medium, the data were calculated taking into account the blank for each experiment, assayed using the medium employed for the in vitro PBMC cultures. The results were first expressed as nitrite concentration (µM).
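A minimal sketch of this interpolation step is shown below (Python/NumPy); the standard-curve absorbances are hypothetical values, since the actual curve is read from each plate.

```python
import numpy as np

# Standard curve: sodium nitrite 0-100 uM vs absorbance at 540 nm.
# The absorbance values below are hypothetical; real curves come from each plate.
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])   # uM
std_abs  = np.array([0.02, 0.10, 0.19, 0.37, 0.72])

# Linear fit of concentration as a function of absorbance
# (Griess chemistry is approximately linear over this range).
slope, intercept = np.polyfit(std_abs, std_conc, 1)

def nitrite_uM(sample_abs: float, blank_abs: float) -> float:
    """Nitrite concentration of a culture supernatant, blank-corrected.

    blank_abs is the absorbance of unconditioned culture medium, so nitrite
    already present in the medium is discounted, as described in the text.
    """
    corrected = sample_abs - blank_abs
    return slope * corrected + intercept

print(f"{nitrite_uM(sample_abs=0.25, blank_abs=0.02):.1f} uM")
```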
Parasitological analysis of bone marrow samples
Bone marrow was obtained to evaluate the frequency of tissue parasitism in the different groups. Dogs were anesthetized with an intravenous dose (8 mg/kg body weight) of sodium thiopental (Thionembutal ® ; Abbott Laboratories, São Paulo, Brazil), and bone marrow fluid was removed from the iliac crest under aseptic conditions. The bone marrow aspirates were used to study the presence of L. chagasi parasites by PCR.
DNA from the bone marrow samples was extracted with the Wizard™ Genomic DNA Purification Kit (Promega, Madison, WI, USA) according to the manufacturer's instructions. PCR was performed as previously described (Degrave et al., 1994) using primer 150 (forward: 5'-GGG(G/T)AGGGGCGTTCT(G/C)CGAA-3') and primer 152 (reverse), which amplified a DNA fragment of 120 base pairs (bp) from the conserved region of the Leishmania minicircle kDNA. Briefly, the PCR reaction mixture contained 1.0 µl of DNA preparation, 0.2 mM dNTPs, 10 mM Tris-HCl (pH 8.0), 50 mM KCl, 1.5 mM MgCl₂, 10 pmol of each primer, and 1 U Taq polymerase (Invitrogen) in a final volume of 10 µl. PCR amplification was performed in a Veriti 96-well thermal cycler (Applied Biosystems®, Irvine, CA, USA) over 40 cycles of 1 min at 94 °C (denaturation), 1 min at 64 °C (annealing), and 1 min at 72 °C (extension), followed by 7 min at 72 °C (final extension). Positive [genomic DNA of L. chagasi (MHOM/BR/1972/BH46)] and negative (no DNA) controls were included in each test. Amplified fragments were analyzed by electrophoresis on 8% polyacrylamide gel stained with ethidium bromide for PCR product identification.

Table 1. Levels of TGF-β in the PBMCs from dogs before the first vaccine dose (T0), following completion of the vaccine protocol (T3), and at early (T90) and late (T885) time points following L. chagasi challenge. The results are presented with regard to stimulation with soluble L. chagasi antigen (SLcA) in the following groups: C (control) and LBSap (killed L. braziliensis vaccine plus saponin).
The parasitological investigation was performed until 885 days after L. chagasi challenge.
Statistical analysis
Statistical analyses were performed using the Prism 5.0 software package (Prism Software, Irvine, CA, USA). Normality of the data was assessed using the Kolmogorov–Smirnov test. Paired t-tests were used to evaluate differences in mean cytokine levels in the comparisons of T0 with T3 (Fig. 1), T90 (Fig. 2), or T885 (Fig. 3) within each group. Unpaired t-tests were used to evaluate differences in mean TGF-β values (Table 1). Analysis of variance (ANOVA) followed by Tukey's multiple-comparison test was used for comparisons between the different treatment groups in the cytokine (Figs. 1-3) and nitric oxide (Fig. 4) analyses. Differences were considered significant when P values were <0.05.
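A sketch of this analysis pipeline in Python (SciPy) is shown below; the cytokine values are hypothetical and serve only to illustrate the paired and between-group tests described above.

```python
import numpy as np
from scipy import stats

# Hypothetical IFN-gamma levels (pg/ml) for five dogs, T0 vs T3 (paired design).
t0 = np.array([110.0, 95.0, 130.0, 120.0, 105.0])
t3 = np.array([480.0, 395.0, 520.0, 450.0, 415.0])

# Paired t-test: the same animals measured before and after vaccination.
t_stat, p_paired = stats.ttest_rel(t0, t3)
print(f"paired t-test: t = {t_stat:.2f}, P = {p_paired:.4f}")

# One-way ANOVA across the four treatment groups at a single time point.
c_grp   = np.array([100.0, 120.0,  90.0, 115.0, 105.0])
sap_grp = np.array([130.0, 150.0, 140.0, 120.0, 135.0])
lb_grp  = np.array([200.0, 180.0, 220.0, 190.0, 210.0])
lbsap   = t3
f_stat, p_anova = stats.f_oneway(c_grp, sap_grp, lb_grp, lbsap)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
# Tukey's post hoc comparisons would follow, e.g. with
# statsmodels.stats.multicomp.pairwise_tukeyhsd.
```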
3.1. LBSap induced a prominent type 1 immune response elicited by higher levels of IL-12 and IFN-γ after the vaccine protocol
To determine the impact of LBSap vaccination on the immune response, we evaluated the cytokine profile (TNF-α, IL-12, IFN-γ, IL-4, and IL-10) in the supernatants of PBMCs stimulated with VSA (Fig. 1A) or SLcA (Fig. 1B). In this context, we performed a comparative analysis between T0 and T3, in addition to comparisons between experimental groups at each time point. In the comparison between T0 and T3, the Sap group showed increased levels (P < 0.05) of TNF-α and IFN-γ production at T3 with VSA stimulation. Additionally, the LB group presented higher levels (P < 0.05) of IL-10 in VSA-stimulated PBMCs at T3, as compared to T0. In contrast, the LB group displayed lower levels of TNF-α at T3 than at T0 in SLcA-stimulated cultures (P < 0.05).

Interestingly, the LBSap vaccine induced higher levels of both IL-12 and IFN-γ at T3 in VSA-stimulated PBMCs. Similarly, in the presence of SLcA, increased levels (P < 0.05) of IFN-γ were observed in the LBSap group at T3.

The comparison between the experimental groups at the different time points revealed increased levels (P < 0.05) of IFN-γ in VSA-stimulated cultures from the LB group, as compared to the C group at T3. Interestingly, higher (P < 0.05) levels of this cytokine were observed in the VSA-stimulated cultures of the LBSap group when compared to the C and Sap groups at T3. Similarly, in SLcA-stimulated cultures, the LBSap group displayed increased (P < 0.05) levels of IFN-γ in relation to the C, Sap and LB groups at T3. In addition, at T3 the LBSap group showed increased (P < 0.05) levels of IL-12 in relation to the C and Sap groups, as well as reduced (P < 0.05) levels of IL-10 when compared to the LB group, in VSA-stimulated cultures.
3.2. Both IL-12 and IFN-γ markedly increased in response to the LBSap vaccine at T90
The early immune response after L. chagasi challenge was analyzed in the different groups. We determined the cytokine patterns in the supernatants of PBMCs, comparing the different stimuli (VSA, Fig. 2A; SLcA, Fig. 2B), the different time points (T0 and T90), and the different experimental groups at each time point.

Comparison between T0 and T90 showed that the C group had increased levels of TNF-α production (P < 0.05) and lower levels of IL-4 production (P < 0.05) at T90 upon VSA and SLcA stimulation. Additionally, the C group had higher levels of IL-12 in SLcA-stimulated PBMCs (P < 0.05) and higher levels of IFN-γ production in VSA-stimulated PBMCs (P < 0.05) at T90. The Sap group showed increased levels (P < 0.05) of TNF-α and IL-10 production and a reduction of IL-4 levels at T90. In SLcA-stimulated cultures, the Sap group presented higher levels (P < 0.05) of TNF-α and IFN-γ. The LB group showed increased levels (P < 0.05) of TNF-α, IL-12, and IL-10 production and a reduction of IL-4 in VSA-stimulated PBMCs at T90. In cultures stimulated with SLcA, the LB group showed increased levels (P < 0.05) of IFN-γ. Interestingly, the LBSap vaccine induced higher levels of IL-12 at T90 in PBMCs stimulated with VSA. Furthermore, in the presence of SLcA, the LBSap vaccine induced higher levels of IFN-γ (P < 0.05). The reduced levels of IL-4 that occurred in the other groups were retained (P < 0.05) in the LBSap group at T90 for both stimuli (VSA and SLcA).

The comparative analysis between the experimental groups showed, at T90, increased levels (P < 0.05) of IL-4 in SLcA-stimulated cultures in the LB group and in VSA-stimulated cultures in the LBSap group, in relation to the C group. Interestingly, the SLcA-stimulated PBMCs from the LBSap group showed increased levels (P < 0.05) of IL-12 compared to the LB and Sap groups at T90. Furthermore, increased levels (P < 0.05) of IFN-γ were observed in the LBSap group when compared to the C, Sap and LB groups.
Fig. 1. Cytokine levels in the supernatants of stimulated PBMC cultures at T0 and T3. The x-axis displays the cytokines evaluated (TNF-α, IL-12, IFN-γ, IL-4, and IL-10). The y-axis represents the mean values (pg/ml) ± SD from groups of five animals per evaluation time; the left y-axes depict the TNF-α, IL-4, and IL-10 levels, while the right y-axes represent the IL-12 and IFN-γ levels. Significant differences (P < 0.05) between values measured at T0 (before the first dose) and T3 (15 days after the third dose) are indicated by connecting lines, whereas the symbols C, Sap, and LB indicate significant differences in relation to the C, Sap, and LB groups, respectively, for the same stimulus and time of evaluation.

3.3. LBSap vaccine elicited a long-lasting type 1 immune response at T885, displaying higher levels of IFN-γ

The late immune response after L. chagasi challenge was studied in the different groups with regard to the cytokine levels in the supernatants of PBMCs treated with VSA (Fig. 3A) or SLcA (Fig. 3B); the T0 and T885 data were compared, in addition to comparisons between experimental groups at each time point.
Similarly, the LBSap group had decreased levels of IL-4 (P < 0.05) at T885 as compared to T0, but this difference was only observed in the presence of the VSA stimulus. Interestingly, in this group, levels of IL-12 (in the presence of VSA) and IFN-γ (in the presence of SLcA) were higher compared to T0 (P < 0.05). As this was a late response post-challenge with L. chagasi (T885), this result indicates a predominantly type 1 immune response induced by vaccination with LBSap.
3.4. Impaired levels of TGF-β were the hallmark of the LBSap vaccine at T90
The levels of TGF-β are shown in Table 1, which focuses on the analysis of supernatants of PBMCs stimulated with SLcA. We evaluated the data using a comparative analysis between the control and LBSap groups at T0 and T3, as well as at T90 and T885 after the L. chagasi challenge. Interestingly, there was a decrease in TGF-β in the group immunized with LBSap compared to the C group at T90.
3.5. LBSap enhanced NO production at T885 in both VSA- and SLcA-stimulated cultures
Since the production of NO is considered a key element in the mechanisms that mediate the elimination of intracellular pathogens, the levels of this antimicrobial oxidant produced by in vitro antigen-stimulated PBMCs derived from dogs vaccinated with LBSap were determined (Fig. 4).
At T90, a reduction (P < 0.05) was observed in the levels of reactive NO in VSA-stimulated cultures compared to the respective control cultures of the C, Sap, LB, and LBSap groups (Fig. 4A).

At T885, significantly increased nitrite levels (P < 0.05) in the VSA- and SLcA-stimulated cultures were observed in the Sap group compared with cultures receiving the same stimuli in the C and LB groups. SLcA-stimulated cultures in the C and LB groups showed a significant reduction of NO levels when compared to the respective control cultures (Fig. 4B). In addition, the C group presented higher levels of NO in control cultures relative to VSA-stimulated cultures (Fig. 4B). Interestingly, in the LBSap group, higher (P < 0.05) NO levels were recorded in the supernatants of SLcA- and VSA-stimulated cultures at T885 when compared with cultures receiving the same stimuli in the C and LB groups.
3.6. Bone marrow parasitological analysis revealed a decreased frequency of parasitism in the presence of the vaccine antigen
The parasitological investigation was performed until 885 days after the L. chagasi challenge. By T885, two dogs from the C group, four dogs from the Sap group, and one dog each from the LB and LBSap groups were diagnosed as positive. It is also interesting to note that, throughout the follow-up period (up to T885), all experimental groups remained asymptomatic.
Discussion
The increased incidence of VL in the world, and especially in Brazil, has motivated studies and evaluations of anti-CVL vaccines because of the epidemiological importance of dogs in the biological cycle of the parasite (Palatnik-de-Sousa, 2012). To guide the rational development of anti-CVL vaccines, studies have been performed to identify biomarkers of immunogenicity before and after L. chagasi challenge (Gutman and Hollywood, 1992; Reis et al., 2010). Type 1 and type 2 immune responses and immunomodulatory cytokines are considered the main targets for identifying resistance biomarkers following vaccination against CVL (Reis et al., 2010; Fernandes et al., 2008; Carrillo et al., 2007; De Lima et al., 2010).
Results from previous studies using the anti-CVL vaccine LBSap showed high immunogenic potential, with induction of increased levels of circulating T lymphocytes (CD5+, CD4+, and CD8+) and B lymphocytes (CD21+), and higher levels of Leishmania-specific CD4+ and CD8+ T cells (Roatt et al., 2012). In these studies, the LBSap vaccine elicited strong antigenicity, reflected in increased levels of anti-Leishmania IgG isotypes after vaccination, and a strong and sustained induction of humoral immune response after experimental challenge, with increased levels of anti-Leishmania total IgG, IgG1 and IgG2 (Roatt et al., 2012). Furthermore, LBSap-vaccinated dogs presented high IFN-γ and low IL-10 and TGF-β1 expression in the spleen, with a significant reduction of parasite load in this organ (Roatt et al., 2012). In addition, the LBSap vaccine proved safe to administer (Vitoriano-Souza et al., 2008; Moreira et al., 2009).
However, there are few studies evaluating the cytokine profiles associated with CVL and with anti-CVL vaccines that might serve as biomarkers of resistance and susceptibility. Thus, this study aimed to evaluate the cytokine and NO profiles induced by immunization, before and after experimental challenge with L. chagasi and sand fly saliva. In addition, the frequency of bone marrow parasitism was included in the evaluation.
We thus performed a comparative analysis of the cytokine profile before immunization (T0), after completion of the vaccine protocol (T3), and at early (T90) and late (T885) time points after experimental challenge with L. chagasi. The production of distinct cytokines was evaluated during the vaccination protocol and after the L. chagasi and sand fly saliva experimental challenge.
The analysis of IL-4 levels has been considered a morbidity marker during ongoing CVL (Quinnell et al., 2001; Brachelente et al., 2005; Chamizo et al., 2005), as well as in murine models of VL (Miralles et al., 1994). We observed that the group vaccinated with LBSap showed increased levels of IL-4 as compared to the C group; however, increased levels of IFN-γ in the LBSap group were also observed. According to Manna et al. (2008), it is possible to maintain a pattern of resistance in CVL even in the presence of IL-4, as long as there are elevated levels of IFN-γ. Nevertheless, our results do not suggest a typical profile linking this cytokine to a resistance or susceptibility pattern in CVL. Similar to our study, a previous study (Manna et al., 2006) did not associate IL-4 with resistance or susceptibility to natural L. chagasi infection in CVL. In contrast, the levels of IL-4 in splenocytes from dogs naturally infected with L. chagasi and presenting different clinical signs indicated that this cytokine could be a biomarker present during the course of infection in CVL (Lage et al., 2007).
Similarly, IL-10 has also been associated with susceptibility to CVL (Pinelli et al., 1999; Lage et al., 2007; Alves et al., 2009; Boggiatto et al., 2010) and human VL (Nylen and Sacks, 2007). Our data showed increased levels of IL-10 at T3 and T90 in the LB group and at T90 in the Sap group. In contrast, we observed decreased levels of IL-10 in the LBSap group relative to the LB group at T3 in VSA-stimulated PBMCs. We hypothesize that the lower levels of IL-10 during the immunization protocol, and the lack of significant changes in IL-10 levels after experimental challenge with L. chagasi, contribute to the establishment of a more efficient immune response in the LBSap-vaccinated dogs.
In addition, the cytokine TGF-β has been associated with progression of Leishmania infection in murine models (Barral et al., 1993; Virmondes-Rodrigues et al., 1998; Gantt et al., 2003). Few studies have been performed in CVL; however, existing studies show increased levels of TGF-β in both asymptomatic and symptomatic dogs naturally infected with L. chagasi. Our results showed decreased levels of TGF-β in SLcA-stimulated cultures of the LBSap group at T90. These results suggest that vaccination with LBSap may trigger reduced TGF-β production after experimental challenge. In fact, a previous work (Alves et al., 2009) reported high levels of TGF-β associated with increased parasite load in lymph nodes from symptomatic dogs naturally infected with L. chagasi, and an association between this cytokine and CVL morbidity. Therefore, it is possible that the reduced levels of TGF-β, associated with higher levels of IL-12 and IFN-γ after L. chagasi and sand fly saliva challenge, contribute to establishing the immunoprotective mechanisms induced by LBSap vaccination.
Type 1 cytokines have also been considered a prerequisite for evaluating immunogenicity before and after L. chagasi experimental challenge in anti-CVL vaccine clinical trials (Reis et al., 2010). Thus, we analyzed TNF-α, IL-12, and IFN-γ levels.
Some studies have established that TNF-α together with IFN-γ is associated with a resistance profile against CVL (Pinelli et al., 1994, 1999; Chamizo et al., 2005; Carrillo et al., 2007; Alves et al., 2009). However, there is no consensus that the TNF-α profile is a good indicator of resistance or susceptibility after L. chagasi infection, considering the similar levels of TNF-α shown by dogs presenting distinct clinical signs (Lage et al., 2007). Moreover, the LBSap group did not present any differences in TNF-α levels when compared to the other experimental groups. In fact, our data were similar to the Leishmune® results, which did not show differences in the expression of this molecule (Araújo et al., 2009; De Lima et al., 2010). In addition, assessment of IL-12 levels in the group immunized with LBSap revealed increased levels of this cytokine at T3, T90, and T885 in the presence of VSA stimulation, compared to T0. Interestingly, higher levels of IL-12 after the vaccine protocol relative to the C and LB groups (T3, in VSA-stimulated cultures), and in the early period post-challenge relative to the Sap and LB groups (T90, in SLcA-stimulated cultures), were the hallmark of the LBSap group. Since this cytokine has been associated with protection in CVL (Strauss-Ayali et al., 2005; Menezes-Souza et al., 2011), high levels of IL-12 together with impaired TGF-β production would indicate the establishment of immunoprotective mechanisms induced by LBSap vaccination.
IFN-γ is considered an important pro-inflammatory cytokine for establishing protective immunity against the Leishmania parasite, inducing NO synthesis and activating the microbicidal function of macrophages (Trinchieri et al., 1993; Reiner and Locksley, 1995). Accordingly, NO is considered one of the most important molecules responsible for killing intracellular parasites such as those of the Leishmania genus (Heinzel et al., 1989; Bogdan, 2001; Sisto et al., 2001; Gradoni and Ascenzi, 2004). In this context, we found that the LBSap group had increased levels of IFN-γ after the vaccine protocol (T3), with sustained improvement at the early (T90) and late (T885) time points after the L. chagasi experimental challenge in the presence of the SLcA stimulus, compared to T0. Interestingly, after the vaccination protocol (T3), the LBSap group showed increased levels of IFN-γ in VSA- or SLcA-stimulated cultures compared to the other groups. Moreover, in both the early (T90) and late (T885) periods post-challenge, the LBSap group continued to produce increased levels of Leishmania-specific IFN-γ as compared to the respective stimulated cultures (VSA or SLcA) from the other groups. Furthermore, the increased IFN-γ levels at T885 were concomitant with higher NO amounts in cultures stimulated with SLcA and VSA. Since IFN-γ is associated with a resistance profile to Leishmania infection in different experimental models (Squires et al., 1989; Andrade et al., 1999; Murray et al., 1992; Carrillo et al., 2007; Fernandes et al., 2008), our data reveal an intense Leishmania-specific induction of IFN-γ after immunization with LBSap.
Given the lack of a sufficient amount of biological material, we performed PCR analysis to assess the parasite burden. Only the LBSap and LB groups had a single dog each with positive parasitological results, which may indicate that the L. braziliensis antigen can induce protection after experimental L. chagasi challenge. Further investigations will focus on the efficacy of LBSap vaccination in protecting against an experimental challenge with L. chagasi, using quantitative PCR.
In conclusion, our data indicate that a prominent type 1 immune response, marked by higher levels of IL-12 and IFN-γ, is elicited following complete vaccination and after L. chagasi challenge. Additionally, the levels of TGF-β are reduced in the early immune response after L. chagasi challenge, while NO production is enhanced at a late time point following L. chagasi challenge. Furthermore, based on the bone marrow parasitological analysis, the frequency of parasitism is decreased in the presence of the vaccine antigen. Thus, the LBSap vaccine appears to elicit prominent, long-lasting type 1 immunogenicity.
Divisor and Totient Functions Estimates
New unconditional estimates of the divisor and totient functions are contributed to the literature. These results are consistent with the Riemann hypothesis and appear to settle the Nicolas inequality for all sufficiently large integers.
Introduction
The divisor function σ(N) = Σ_{d | N} d is an oscillatory function: its value oscillates from its minimum σ(N) = N + 1 at prime integers N to its maximum σ(N) = c0 N log log N, for some constant c0 > 1, at extremely abundant integers N. An extremely abundant integer N is an integer with many small prime factors and a certain multiplicative structure. Similarly, the totient function ϕ(N) is an oscillatory function: its value oscillates from its maximum ϕ(N) = N − 1 at prime integers N to its minimum ϕ(N) = N/(c0 log log N), for some constant c0 > 1.
Currently the best unconditional estimates are the following. On the other hand, there are several conditional criteria; some of these are listed below. These results are stated in the notations of readily available sources such as [L], [S], et cetera, and other freely available papers. The parameter 1 − b < β < 1/2 arises from the possibility of a zero ρ ∈ ℂ of the zeta function ζ(s) in the half plane b = Re(ρ) > 1/2. This in turn implies the existence of more primes per interval than expected if the Riemann hypothesis is valid. The effect of the zeros of the zeta function on the distribution of primes is readily revealed by the explicit formula.
Theorem 6. ([NS]) Let Nk = 2·3·5···pk be the product of the first k primes. (i) If the Riemann hypothesis is true, then Nk/ϕ(Nk) > e^γ log log Nk for all k ≥ 1. (ii) If the Riemann hypothesis is false, then this inequality holds for infinitely many k and fails for infinitely many k.

Earlier works on this topic include the works of Ramanujan and others on abundant numbers, see [RJ], [AE]; recent related works appeared in [BR], [BS], and [WZ]. The new contributions to the literature are the unconditional estimates stated below; they hold for all such N with at most finitely many exceptions.
These unconditional results are consistent with the Riemann hypothesis, and seem to prove the Nicolas inequality, Theorem 6-i, for all sufficiently large integers; just a finite number of cases remain unresolved as possible counterexamples. The proofs of Theorems 7, 8 and 9 are given in Sections 10, 11, and 12. The other sections contain background and supplemental material focusing on various characteristics of the divisor function, the totient function, and other arithmetic functions.
Properties of the Divisor Function
Let N be an integer, and let the symbol p^α || N denote the maximal prime power divisor of N. Among the standard properties of the divisor function are the following: it is multiplicative, σ(MN) = σ(M)σ(N) whenever gcd(M, N) = 1; it satisfies a Möbius inversion pair; and for several of the inequalities listed in this section, the stated bound holds for all sufficiently large integers while the opposite inequality holds for infinitely many integers.
The case Re(s) > 1 is easy to resolve by means of the multiplicative formula, and the case s = 1 is Gronwall's theorem; more details appear in [TN, p. 88].
Representations of the Divisor and Totient Functions
Many of the important properties of the divisor and totient functions can be considered representations of these functions. All these representations are useful in the analysis of these functions. A few of them are recorded here.
Proposition 11. Let N be an integer, and let the symbol p^α || N denote the maximal prime power divisor. Then (i) σ(N) = ∏_{p^α || N} (p^{α+1} − 1)/(p − 1), and (ii) ϕ(N) = N ∏_{p | N} (1 − 1/p). The first and second of these representations are well known, and the third appears to be new. More general versions of these representations are possible, but are left to the reader to work out. Other related identities are given below.
Proposition 12. Let N be an integer, and let the symbol p^α || N denote the maximal prime power divisor. Proof: To verify (ii), compute the power divisor function σ6(N) in two ways and rearrange the products. To verify (iv), use routine algebraic manipulations to simplify the product in (ii), and then apply Property 10 in Section 2. ■ There are several ways of establishing these results.
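A short sketch (Python, with sympy supplying the factorization) illustrating the two classical product representations of Proposition 11:

```python
from sympy import factorint

def sigma(N: int) -> int:
    """Sum of divisors via the product over maximal prime powers p^a || N:
    sigma(N) = prod (p^(a+1) - 1) / (p - 1)."""
    result = 1
    for p, a in factorint(N).items():
        result *= (p**(a + 1) - 1) // (p - 1)
    return result

def phi(N: int) -> int:
    """Euler totient via phi(N) = N * prod_{p | N} (1 - 1/p)."""
    result = N
    for p in factorint(N):
        result = result // p * (p - 1)
    return result

assert sigma(12) == 28 and phi(12) == 4   # divisors of 12: 1+2+3+4+6+12
print(sigma(5040), phi(5040))             # 19344, 1152
```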
Related Arithmetic Functions and Average Orders
Theorem 13. For x ≥ 1, the average orders of the divisor functions are Σ_{n ≤ x} d(n) = x log x + (2γ − 1)x + O(x^θ) and Σ_{n ≤ x} σ(n) = (π²/12)x² + O(x log x); confer [AP, p. 61] for the analysis. The error terms of arithmetic functions are well-studied problems in number theory; extensive details are given in [IV]. For example, the latest improvement on the error term θ in the expression for the number of divisors appears to be θ = 23/73 + ε, Huxley 1993.
The analysis and determination of the average order of an arithmetic function over a subsequence of the integers is significantly more involved. Two recently determined cases, over the sequence of binomial coefficients, are included here; the reader should consult the papers for the proofs.
A Few Prime Number Results
The nth prime in the sequence of primes 2, 3, 5, 7, … is denoted by pn, and let π(x) = #{ p ≤ x : p is prime }. Confer [DT] for recent developments in this area. The work of Hardy and Ramanujan on the function ω(N), the number of distinct prime divisors of N, culminated in the probabilistic result given below.
Theorem 17. Let ε > 0 and N ≥ 1. Then |ω(N) − log log N| ≤ (log log N)^{1/2 + ε} for almost all integers N. The current perspective on the analysis of the function ω(N) is discussed in fine detail in [GR].
The Chebyshev functions are defined by ϑ(x) = Σ_{p ≤ x} log p and ψ(x) = Σ_{p^k ≤ x} log p. The first is the logarithm of the product of all the primes ≤ x, and the second is the logarithm of the lowest common multiple of the integers ≤ x. A related function appears in the explicit formula, where ρ = 1/2 + it runs over the zeros of the zeta function. The constants are approximately one, and the error term E(x) = x − ϑ(x) tends to infinity as x tends to infinity. The function ϑ(x) is monotonically increasing, but the error term is an oscillating function of x, and the peaks and valleys of the oscillation are known to satisfy sharp estimates. Proof: Confer [RS], [SC] and [DT] for other, sharper estimates too. ■ Let N ∈ ℕ and let f(x) > 0 be a strictly increasing function of x. Proof of (i): Write out the quotient and simply compute the logarithmic difference. The unconditional case uses an error term with constants A > 0, c > 0, and the corresponding logarithmic difference; in contrast, the case conditional on the Riemann hypothesis uses the stronger error term.
Finite and Asymptotic Results
Theorem 22. For every integer N ≥ 3, there is an absolute constant c0 > 0 such that N/ϕ(N) < e^γ log log N + c0/log log N. Proof of (ii): Taking the logarithm and rearranging, combining these data, and then taking the inverse logarithm yields the claim. ■ The techniques used above are quite standard in the literature, see [RD, p. 614], [SH, p. 341], [NT, p. 278], et cetera.
Theorem 26. The normal orders of σ(N) and ϕ(N) are obtained as follows. Put x = 6 log log N and apply Mertens's formula (Theorem 31-i) to this data; then the stated estimate holds for almost all sufficiently large integers N. The proof for σ(N) is derived from this via the relation 6/π² < σ(N)ϕ(N)/N² ≤ 1. The fact that σ(N) behaves like a normal random variable is easily seen by means of Duncan's formula below; more precisely, for squarefree integers σ(N) is bounded by a linear transformation aN + b of N.

Proposition 27. ([DN]) Let N ∈ ℕ be a squarefree integer. Proof: Rewrite the sum of divisors function as a product over the prime divisors. ■ An extremely abundant integer N = 2^{v2}·3^{v3}·5^{v5}···p^{vp} must have monotonically decreasing exponents v2 ≥ v3 ≥ v5 ≥ ⋅⋅⋅ ≥ vp ≥ 1, with every prime up to the nth prime pn included in the product.
Given any exponent vector u1, u2, …, un ≥ 1, there are infinitely many integers with the prescribed multiplicative structure; a detailed discussion appears in [S]. A related definition states that a colossally abundant number is an integer N for which σ(N)/N^{1+ε} ≥ σ(M)/M^{1+ε} for all 1 ≤ M < N and some ε > 0.
The multiplicative structure of a colossally abundant number is specified by a product over the consecutive primes, where the exponents satisfy the monotonicity condition above. Numerical results and an algorithm for generating colossally abundant integers are described in [BS].
Harmonic and Quasiharmonic Series
The harmonic series is the sum of the reciprocals of the positive integers up to x. A quasiharmonic series is a sum of the reciprocals of numbers less than or equal to x. The summation techniques used to estimate these series are mostly elementary. On the other hand, the applications of these series in the mathematical sciences run deep.
A selection of useful series is recorded here; the proofs are scattered in the literature but easily available.
Theorem 29. For every positive number x ≥ 286, the following hold. These are proved using summation methods, such as the Euler–Maclaurin formula; see [RM, p. 234]. The left side of this estimate can be evaluated in terms of the logarithm and the exponential integral, but it is left as an exercise for the reader to improve the analysis.
Theorem 33. Let x ≥ 2. Proof: Put x = k log k, and use the fact that the nth prime pn satisfies an log n ≤ pn ≤ bn log n for some constants a, b > 0. The claim is then obtained using the integral approximation. ■
Harmonic and Quasiharmonic Products
The harmonic product satisfies ∏_{p ≤ x} (1 − 1/p)^{−1} = e^γ log x · (1 + O(1/log x)). Proof: These are improved versions of the original works; see [RS] for more details. The asymptotic version of this result is Theorem 25; see also [CU, p. 110] and [EL, p. 31] for similar details.
An Estimate of the Totient Function
This assumption appears to place an obstacle both in numerical calculations and in the theory of these functions, since every term in this analysis is important. The proof below employs a reductio ad absurdum argument. Inequality (10) holds unconditionally; see Theorem 19 and Proposition 21 for more details. Further, using the unconditional form of the error term, with constant B > 0, see (6), one can rewrite the penultimate line so that the left side is of the order governed by C = min{ A + 1, B } ≥ 1 while the right side is of a different order. Ergo, for all sufficiently large integers Nk this is a contradiction. ■ The contradiction in inequality (12) remains in effect for any admissible error term R(x); see Proposition 21. Accordingly, the result is consistent with the Riemann hypothesis and numerical data.
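The Nicolas inequality of Theorem 6-i can also be checked numerically for the primorials Nk; the sketch below (Python, with sympy supplying the primes) accumulates log Nk to avoid handling huge integers, using Nk/ϕ(Nk) = ∏_{i ≤ k} pi/(pi − 1).

```python
from math import exp, log
from sympy import prime

# Nicolas inequality for primorials N_k = 2*3*5*...*p_k:
#     N_k / phi(N_k) > e^gamma * log log N_k.
GAMMA = 0.57721566490153286  # Euler-Mascheroni constant

ratio = 1.0    # N_k / phi(N_k), built up as a product over the primes
log_Nk = 0.0   # log N_k, accumulated so N_k itself is never formed
for k in range(1, 201):
    p = prime(k)
    ratio *= p / (p - 1)
    log_Nk += log(p)
    if k in (5, 25, 100, 200):
        rhs = exp(GAMMA) * log(log_Nk)
        print(f"k={k:4d}: N_k/phi(N_k)={ratio:.4f}  e^gamma*loglog N_k={rhs:.4f}")
```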
The Divisor Function Inequality
This section proposes a weak form of the divisor function inequality σ(N) < e^γ N log log N for large N ≥ 5041, due to Ramanujan and Robin. It builds on the analysis of the previous sections and of previous papers.
In the previous section the logarithmic difference was used to prove the inequality. If N is sufficiently abundant, then pk < log N; accordingly, the logarithmic difference log log pk − log log log N < 0 is negative. This property will be used in the next result.
The worst case N = p1^{v1} p2^{v2} ··· pk^{vk}, where pi is the ith prime and vi ≥ 1, will be assumed; the arbitrary case N = q1^{v1} q2^{v2} ··· qk^{vk}, where the qi are primes and vi ≥ 0, is handled in the same way mutatis mutandis.
This follows from Proposition 8-iii. Continuing as in (8) to (10) yields (14); replacing the unconditional estimate of R(N), see (6), completes the bound. Squarefree integers are mapped to the lower end of the interval, and numbers divisible by high powers of the small primes 2, 3, … are mapped to the upper end of the interval. For example, the value ρ(N) of an extremely abundant number increases toward 1 as N increases but remains bounded, ρ(N) < 1.
The relevant identity is the product representation in Proposition 12-iv; the analysis is similar to the proof of Theorem 7.
s-Free Integers and the Divisor Function Inequality
This section provides a divisor function inequality for large s-free integers N. It extends the idea of s-free integers as in the paper [S]. Starting from (16), rewrite and simplify the expression to reach (21), and then reverse the logarithm function to complete the claim. ■ The special case s = 2 in Proposition 36 is well known.
The density of primes in an integer is a deciding factor in the size of the divisor function, as reflected in the values of the normalized divisor function.
An Application to Sums of Four Squares Representations
It is not obvious at all which integers have the maximum number of representations as sums of four squares. For example, does 5040 = x² + y² + z² + w² have more solutions (x, y, z, w) than 5041 = r² + s² + t² + u²? Furthermore, it is not obvious why an additive problem about an integer N is deeply influenced by the multiplicative structure of N.
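Jacobi's four-square theorem, r4(N) = 8 Σ_{d | N, 4∤d} d, reduces this additive question to the divisor function and makes the comparison immediate; a minimal sketch:

```python
from sympy import divisors

def r4(N: int) -> int:
    """Number of representations of N as an ordered sum of four squares
    (signs and order counted), by Jacobi's theorem:
    r4(N) = 8 * (sum of the divisors of N that are not divisible by 4)."""
    return 8 * sum(d for d in divisors(N) if d % 4 != 0)

# The abundant 5040 = 2^4 * 3^2 * 5 * 7 versus the prime power 5041 = 71^2.
print(r4(5040), r4(5041))
```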
Forces behind the Use of Herbs during Pregnancy by Zimbabwean Women: A Case of Gweru District
Purpose: The use of herbal remedies is gradually increasing worldwide, and Zimbabwe is not left behind. This study therefore sought to explore the forces behind the use of herbs during pregnancy by Zimbabwean women. Materials and methods: A qualitative approach was chosen, using a case study design to evaluate the forces. The study was carried out at three maternity hospitals in Gweru. Thirty (30) women who used herbs during pregnancy were purposively sampled and interviewed using a structured interview schedule. The data were analyzed thematically. Results: It was noted that the culture and belief system, previous experience, as well as significant people in the woman's life were the forces behind the use of herbs in pregnancy. Conclusion: The study concluded that these forces promoted the indiscriminate use of herbs, which are passed from mother to daughter without considering the safety of the herbs for the mother and baby. Therefore, more research is needed to analyze the safety of these herbs to ensure that the mother and baby are safe.
In such a society, one would expect women to follow the modern trends of treatment, but they still take herbs. There must be other forces compelling the women to take herbs, and this is what this study seeks to explore.
The use of herbal medicines is associated with cultural and personal beliefs [1], as these influence philosophical views on life and health. A woman's culture and beliefs can be a compelling factor in the use of herbs, as herbal medicine is viewed as a primary source of healthcare. Traditional Medical Practice views the use of herbs as an integral part of the culture of those communities who use them [2].
Culture controls the behaviour of individuals, including health-seeking behaviour.
According to the theory of reasoned action [3], the behaviour of a woman is determined by her culture and background, her perceptions, as well as the influence of the significant people around her. Thus, the significant people in a woman's life can be a factor promoting the use of herbs in pregnancy.
However, traditional practices co-exist with modern practice, with most of the women exhibiting biculturalism which encompasses the culture of their grandparents and the culture of other persons surrounding them [4]. As a result, a woman may take, or be given, some herbal remedy whilst in labour to precipitate the delivery as part of the African tradition, yet still go to deliver at the hospital.
The use of herbs by the women in labour is congruent with their health beliefs and is part of their transition process to motherhood [5]. Pregnant women believe that herbs enable them to control their childbearing process [5] [6]. The other belief of women which makes them use herbs is the assumption that herbs are safer than conventional drugs [6] [7]. Pregnant women used herbs because they perceived them to be more effective than conventional drugs [8]. It can be argued that the erroneous belief that herbal products are superior to manufactured products makes pregnant women prefer to use herbs [2]. These facts support the claims that, "traditional medicine has always been at the heart of most African people and in particular the Shona people of Zimbabwe" [9]. Therefore, use of herbs can be attributed to the beliefs of the women.
Women use herbs because they want to ensure that their unborn baby is safe. This was evident in a study in which 62% of the women who took herbs during pregnancy said that the herbs gave them fewer side effects than conventional drugs [10], thus reassuring them of the safety of the baby from the teratogenic effects of conventional drugs. This is also backed by the finding that women fear being subjected to teratogenic effects by conventional drugs [7]. As a result, a woman's worries about the safety of her baby can force her to choose herbal preparations.
Pregnant women, as consumers of health care, have preferences which will influence their choices to opt for natural therapies as opposed to conventional medicines [2]. In the same vein, women opt for herbs after they have compared experiences between conventional healthcare professionals and complementary medicine practitioners [6]. Dissatisfaction with the attitudes of midwives at health institutions may make the woman decide on self-medication with herbal preparations at home [7]. As a result, women may decide to use herbs due to dissatisfaction with the results from orthodox pharmaceuticals and the belief that herbal medicines might be effective [2]. In addition, the user-friendliness of the herbs was given as the reason for choosing herbs over biomedicine [9].
On the other hand, the choice to use herbs is based on the individual's experience [11]. The primigravida, who is pregnant for the first time, draws on the experience of the significant others around her, thus supporting the theory of reasoned action, which propounds that an individual's decision is controlled by her close people. The pregnant woman may have a belief or fear that her physician or midwife has not properly identified the problem, hence the feeling that herbal remedies are another option [2]. On the other hand, the marketing strategies used by the traditional practitioners make the herbal preparations more alluring to the pregnant mother who wants the best for her baby [2]. The same author goes on to blame the various claims on the efficacy or effectiveness of plant medicines by the traditional practitioners for the increased interest in alternative medicines by pregnant women. However, the woman's knowledge of herbs and her attitude towards their effectiveness make her use herbs in pregnancy [6].
The use of herbal preparations in pregnancy is promoted by the high availability of the herbs, which are easily accessible without any strict control measures [6]. Herbal preparations are not subjected to quality tests and production standards; neither efficacy nor issues of licensure are vigorously controlled before marketing, thus making them easily accessible to the pregnant woman without any prescription [12]. In the same vein, the preference of women to use herbs in pregnancy may be due to the absence of strict regulations like those for modern medicines, since no prescription is required [11]. It is assumed that herbs are preferred because they are within the vicinity of the users due to the mushrooming of self-proclaimed street herbalists [9]. To add to this, the women of today have a high self-esteem which makes them want to move towards self-medication, hence herbal remedies with their assumed "safety" tag become the target [6]. When the pregnant woman is bombarded with easily accessible herbs which are highly marketed, carrying a safe tag but without any control measures, the temptation to use them is irresistible.
Many women living in rural areas, especially in developing countries, may decide to use herbs during pregnancy as the herbal preparations from the traditional practitioner may be the main source of health care and sometimes the only source [13]. In their study on the role of Indigenous Medicinal Knowledge in the treatment of illness in rural Zimbabwe, the authors of [14] found that women used herbs because of the high prices of the western medicines, which they could not afford.
In addition to this, it was noted that pregnant women used herbs because they are more accessible without a doctor's prescription [6]. Since these traditional herbs are preferred and are easily accessible, there is a need to study them thoroughly so that they are safe for use by these rural folks who at times have no alternative.
According to the health belief model, individual perceptions influence how a woman perceives her health [15]. If she perceives that health is important to her and that pregnancy is a condition which can kill her if she does not do something about it, she will be prompted to take action, in this case taking herbs.
The same author goes on to explain that modifying factors such as perceived threats and cues to action are factors which are part of the woman's environment that will prompt specific health behaviour in a person. What the woman is told by the elders about pregnancy, labour and the consequences of not taking herbs will be perceived as threats and become the cues to action. It can be assumed that if a woman is constantly told that not taking herbs will result in an episiotomy, a long and difficult labour with the possibility of losing her life and that of the baby, she will take the herbs. In addition, if she believes that herbs will prevent such occurrences, then she will take them without any hesitation.
The perceived benefits [15] can motivate a woman to take herbs. If a woman perceives the probability of producing a healthy baby after a short labour, she will use the herbs.
The theory of reasoned action (TORA), which is based on the assumption that most behaviours of social relevance are under volitional (wilful) control, assumes that individuals are usually quite rational and make systematic use of information available to them [16]. This information, which could be from the society, adverts or friends, is considered before a decision to engage or not engage in a given behaviour is made [3]. The use of herbal preparations in pregnancy is by choice, so the woman, as a person, after obtaining some information, will make a rational decision whether to use or not to use herbal preparations. The theory postulates that an individual's intention to perform or not to perform a behaviour is the immediate determinant of that action. The attitude towards the intention is determined by the person's belief that a given outcome will occur if the behaviour is performed [16]. If the pregnant woman believes that herbs will bring a better outcome of her pregnancy, then she will use herbal preparations according to the information gathered. On the other hand, the general subjective norms are all about social pressure put on the individual to perform certain behaviours. This is determined by the person's normative belief about what the important or significant people think should be done and also by the individual's motivation to comply with those other people's wishes or desires. What this means is that, if the important people in the woman's life advise her to take herbs, she will, depending on her motivation to comply with their advice. Thus, whilst a pregnant woman has rights, social pressure may determine whether or not she uses herbal remedies during pregnancy.
Methods and Materials
The study was carried out in Gweru urban at the three hospitals which offer maternity services. Two of the hospitals are private and one is public. This choice enabled the researcher to get the views of the poor who visit the public hospital and the affluent who visit private hospitals. A qualitative approach was used as it tends to investigate the meaning that individuals ascribe to social phenomena such as herbal usage in pregnancy [17].
The qualitative method is based on discovery and understanding rather than prediction [17]. The intention of this study was to discover and understand the factors which influence women to use herbs in pregnancy. A case study design was chosen as it provided an in-depth study of herbal usage and also provided detailed descriptions of herbal usage in pregnancy. Interviews were conducted with 30 women who admitted to taking herbs. The sample was chosen using heterogeneous purposive sampling in order to capture a wide range of conditions in which herbs were taken, such as behaviours, experiences, incidents, or situations [18]. It was anticipated that women use herbs in various conditions: some may be forced to take herbs, others use them by choice, or certain incidents may have made one take herbs. In order to make the research rich, the researcher sought to capture the different conditions in which herbs are taken. The data were collected using in-depth face-to-face interviews to allow flexibility and clarification of questions, as the interviewer can modify the line of inquiry to enable the interviewee to understand. The interview allowed probing of interesting responses and observation of non-verbal responses, since the information in this study was sensitive and needed observation of body language to probe and bring out the sensitive information [19].
The data were analysed using the constant comparative analysis strategy, which allowed data from one interview to be compared with data from the other interviews, noting similarities and differences [20]. The comparisons brought out the aspects of human behaviour and experiences in terms of herbal use, which were coded and then analysed thematically according to the research questions.
However, statistical methods were used in some instances for easy interpretation of the data.
The study was carried out in accordance with the Helsinki Declaration Principles, and ethical issues were taken into consideration. Permission to carry out the research was sought from the responsible authorities, and approval was granted by the Medical Research Council of Zimbabwe (approval number MRCZ/A2409). The participants' consent was obtained through signing a consent form in which matters of confidentiality were explained.
Results
The study identified three types of forces that make women use herbs as shown in Figure 1.
The Woman's Culture and Belief System
The study found that the woman's culture and beliefs are forces that make women use herbs during pregnancy. Figure 2 indicates the cultural beliefs which lead to herbal use by pregnant women.
Of the women who used herbs, 83.3% said they did so because it is part of their culture. One of the beliefs that came out of the study was that herbs shorten labour. The majority (63.3%) of the women who used herbs said that they took herbs to shorten labour. Here are some of the statements from the women who claim to have delivered in a very short time because they had used herbs.
"Labour started at 10.00 hrs and I delivered at 15.00 hrs" (Participant 25) "My labour was very short lasting about 4 -5 hours unlike the previous one where I spent a good 3 days in labour" (Participant 15) "I had a very short labour lasting 3.5 hrs but the pain was terrible" (Partici- "Herbs are effective in that the labour becomes short and easy" (Participant 7) These statements are indicative of very short labour of about 3 -5 hours. Normal active labour takes about 10 -12 hours excluding the latent stage which varies per individual from hours to days [21].
Some of the participants (30%) claimed that they use herbs to protect their unborn baby and to prevent abortions. "I had three abortions so I was given herbs to prevent abortion and now I have my baby" (Participant 3). The women believe that herbs protect their babies from any harm, even spiritual harm, as indicated by the quotations from two of the participants:
"I had a still birth so I took some herbs to protect my unborn baby from evil spirits" (Participant 10)
"You can be bewitched so that delivery becomes very difficult so herbs can prevent that" (Participant 1)
This means that the women believed that herbs are important in that they protect the unborn baby.
The study also showed that it is the women's culture and belief that herbs should be taken to prepare the birth canal. All the participants (100%) claimed that they took herbs for "masuwo", that is, widening of the birth canal so that delivery becomes easy and fast. The following statements indicated the use of herbs for "masuwo":
"I was given herbs for 'masuwo' since it is my first child" (Participant 5)
"I was given the herbs so that I will not have stitches" (Participant 16)
"I was given herbs to widen the birth canal and make delivery easy" (Participant 12).
From the findings, it was evident that there was some element of coercion and intimidation so that the young women take the herbs for the intended purpose.
One of the participants verbalised this saying, "I was told that if I do not take the 'masuwo' concoction, then I was going to have a very difficult and long labour" (Participant 30). Another participant also said, "My mother in law told me that if anything happens to the baby then I will face the consequences if I do not take the herbs" (Participant 22).
These statements clearly indicate the use of coercion and intimidation as a force for the use of herbs during pregnancy.
The study also showed that the belief that herbs are safe (66.7%) is another force behind the use of herbs, as indicated by the following interview extracts:
"I have used herbs on all my 4 pregnancies and I did not encounter any problem. After all they are natural and free from toxins" (Participant 29).
"I had no complications and my baby was fine so they are ok with me" (Participant 21).
"I had a C/S on my first baby but now I used herbs and everything was ok" (Participant 7).
"All my children are fine and I delivered them normally because of the herbs I use" (Participant 20).
"I have never had any problems but I use the same herb all the time. Everyone in our family uses them with no problems" (Participant 6).
From the statements above, it can be assumed that women use herbal therapy during pregnancy and labour because they believe that the herbs are safe.
Herbs Are Used Because of the Previous Experience
The other force that made women take herbs came out as previous experience.
Previous experience can influence women to use herbs especially if the outcome was good as stated by some of the participants; "I have used herbs on all my 4 pregnancies and I did not encounter any problem. After all they are natural and free from toxins" (Participant 29).
"I had no complications from herbs on all my deliveries so they are ok with me" (Participant 3).
"In all my deliveries, I have used herbs and I delivered normally with no problems so herbs work" (Participant 4).
"I have never had any problems but I use the same herb all the time. Everyone in our family uses them with no problems" (Participant 22).
From the findings, it is quite clear that the participants used herbs repeatedly, as verified by the interview extracts above. However, others still used herbs even when the outcome was bad: "Herbs are safe. My child died due to the delayed operation when my child was in distress. The culprits are the midwives not the herbs because all my sisters used the same herb with no problems" (Participant 11). One participant believed that herbs work since her previous experience without herbs was not favourable: "I had a caesarean section on my first baby but now I used herbs and everything was ok so the herbs work" (Participant 26).
Herbs Can Be Used Because of the Influence of the Significant People in the Women's Life
The study results revealed that the significant people in the woman's life can be forces behind the use of herbs in pregnancy. These included the mother, grandmother, aunt, friend and the Traditional Birth Attendants (TBAs). The study also identified the sources of herbs as close associates of the pregnant women.
The mother was identified as the main source of herbs, as the majority, 25 (85%), of the participants were given herbs by their mothers. The remaining 5 (15%) were given herbs by a grandmother, aunt or friend. A mother is the most trusted person by any individual, and more so if one is pregnant. All the herbs, however, were provided by significant persons in the women's lives, and those who had no knowledge of herbs outsourced them. This was explained by one participant who said, "my mother gave me the herbs but she sourced them from the TBA".
Discussion
The data elicited from the women who used herbs showed that the use of herbs is part of the woman's culture. Of the women who used herbs, 83.3% said they did so because it is part of their culture. Cultures share a belief that during pregnancy the mother and the foetus are vulnerable; as a result, reliance on herbal remedies depends heavily on the culture and background of the woman and will continue to play a pivotal role during the gestation period and in labour, just as it did in the past [1]. In addition, pregnant women believe that herbs are able to control their childbearing process [3]. Therefore, culture and beliefs are a force in the use of herbs by women.
The majority (63.3%) of the women who used herbs said that they took herbs to shorten labour.
The purpose of taking herbs as stated by the participants was to have an easy and fast delivery, so to these women, herbs are effective. Labour lasting about 3 hours is definitely precipitate labour. This is in line with the findings that a woman is given herbs to precipitate delivery as part of the African culture [1].
This belief that herbs make labour quick and easy is a force that propels women to use herbs, for who does not want labour to be easy and quick? According to [13], a woman can use her rights as a bio-psycho-social and cultural being to act on her health. This means that if a woman wants a short labour and has the means to obtain one, then she will do so, even if it means using herbal preparations. As a result, the need to have a short and easy labour is a force that makes women use herbs.
Some of the participants claimed that they use herbs to protect their unborn baby and to prevent abortions. The African tradition believes in witchcraft and evil spirits, which can be dealt with through the use of traditional herbs, as one of the participants said. These sentiments show that herbs play a significant role in the psycho-spiritual realm of an individual. If a mother perceives that her unborn baby is threatened, then she will take every possible action to prevent that, including herbs. This is in accordance with the Health Belief Model (HBM), which looks at perceived threat and perceived benefits as cues to action, which is happening with these women. The women perceive a threat to their unborn baby and believe that herbs will protect the baby, hence their use during pregnancy. These findings are similar to the claims that the influence of religion and the spiritual consciousness of the pregnant women make them more inclined to use treatment based on their faith rather than scientific beliefs [2].
The study showed that preparation of the birth canal is very important to the expecting mother. However, it appears as if the mothers were being given, rather than choosing to take, the herbs. It looks like coercion is being used with some intimidation (...since it is my first child and ... will not have stitches). Every woman is excited about her first child, so by stressing this, the woman will definitely take herbs to have a good outcome for her "first" baby. In the same vein, no woman wants stitches on her perineum. So this gentle persuasion and intimidation can be a force in the use of herbs in pregnancy. These women took herbs because someone was deciding for them, as herbs could be taken due to social pressure from significant people in the pregnant woman's life [1]. In addition, the behaviour of a pregnant woman is influenced by her customs [4]. So in the African custom, where elders are respected, it becomes very difficult for the pregnant woman to resist advice and/or pressure from her family, especially the mother and aunts, and pressure is put on the expectant mothers so that they take the herbs.
All 30 (100%) of the participants believed that herbal therapy is safe. The belief that herbs are safe can be a force behind their use. These findings concur with the claims that herbal preparations are safe, with rare incidences of adverse effects on the mother and baby [14]. Women used herbs because they think that herbs are safer than conventional methods [4]. It is an assumption of the women that because herbs are natural they are safe from toxic effects. If they thought that herbs could be dangerous they would not use them, as one of the forces for using herbs was to protect their unborn babies and not to harm them. Those who believe that the herbs are safe had a pleasant experience and outcome, and these women are likely to use them again. This is supported by the argument that the choice to use herbs is based on the individual's previous experience [7].
Previous experience can influence women to use herbs, especially if the outcome was good. What this seems to imply is that if the woman used herbs and it worked, she will continue to use them. Even those who had complications blamed the health system, as they believed that herbs are safe. Women believe that the outcome of their pregnancy is better when they use herbs [19], and they also analyse the attitude of the health workers. Some women from the study blamed the health workers for the complications they encountered rather than the herbs. This supports other research findings that the use of herbs is based on individual experiences [9]. Women are comfortable with traditional medicines and are satisfied with the results, hence they choose their traditional medicines irrespective of the existence of western medicine [21]. It can then be assumed that the use of herbs can be attributed to previous experience.
The majority, 25 (85%), of the participants were given herbs by their mothers, who are significant persons in the pregnant woman's life, as claimed by the theory of reasoned action. It becomes difficult for the woman to refuse the herbs being given by her mother, especially if it is coupled with coercion and threats. So besides preparing the birth canal and fear of a difficult labour, the other force compelling women to take herbs is trust and respect for the elders.
A mother is the most trusted person by any individual, and more so if one is pregnant. The Shona culture has a tradition of sending a woman who is pregnant for the first time to her mother's home until she delivers. The mother is then expected to take care of her daughter throughout the last trimester of pregnancy, ensure a safe delivery and teach her how to care for the infant. So to ensure a safe delivery, the mother gives her daughter herbs which she herself was given by her own mother. In this way the herbs are passed from generation to generation, and the family and friends are the sources of herbs [9]. If the mother is not knowledgeable about herbs, she outsources them from the TBAs so as to fulfill her societal expectations and duty. This concurs with the claim that birth attendants are a source of herbal information for pregnant mothers [1]. On the other hand, the marketing strategies used by the traditional practitioners make the herbal preparations more alluring to the pregnant mother who wants the best for her baby [14]. The problem, however, arises as family members who mostly recommended the use of herbs may not have sufficient knowledge to advise pregnant women about the use of herbal drugs [20]. Since the family is the key to herbal usage, this becomes another force for herbal use in pregnancy.
Conclusion
A pregnant woman is capable of making decisions pertaining to her pregnancy, but the study revealed that there are a number of forces that compel her to make a decision to take herbs. The African culture places the young mother in a compromised position where she has to take herbs whether she likes it or not. These traditions, which are passed from mother to daughter, make the mother give her daughter herbs simply because that is the family tradition. The gentle pressure and intimidation make the woman take the herbs, because she wants the best outcome of pregnancy. The woman's beliefs that herbs are safer and shorten labour, coupled with previous experience, drive the women to use herbs. There is, however, the question of the safety of the herbs. The issue of the safety of the herbs used during pregnancy needs to be taken seriously. There is a need to carry out more studies on the effects of these herbs on the health of the mother and child.
Recommendations
After analyzing the findings of the study, it is recommended that more research be carried out on the use of African herbs during pregnancy and labour to ensure safe motherhood.
|
2021-08-02T00:06:12.707Z
|
2021-05-06T00:00:00.000
|
{
"year": 2021,
"sha1": "906b9da20868397a44ca163d0c804e40397585a1",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=109193",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2a2d03e46cbc9dbf828dbbc556dbcd344e379bf4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
15391121
|
pes2o/s2orc
|
v3-fos-license
|
Unusual Large-Scale Chromosomal Rearrangements in Mycobacterium tuberculosis Beijing B0/W148 Cluster Isolates
The Mycobacterium tuberculosis (MTB) Beijing family isolates are geographically widespread, and there are examples of Beijing isolates that are hypervirulent and associated with drug resistance. One-fourth of Beijing genotype isolates found in Russia belong to the B0/W148 group. The aim of the present study was to investigate features of these endemic strains on a genomic level. Four Russian clinical isolates of this group were sequenced, and the data obtained were compared with published sequences of various MTB strain genomes, including the genome of strain W-148 of the same B0/W148 group. The comparison of the W-148 and H37Rv genomes revealed two independent inversions of large segments of the chromosome. The same inversions were found in one of the studied strains after deep sequencing using both the fragment and mate-paired libraries. Additionally, inversions were confirmed by RFLP hybridization analysis. The discovered rearrangements were verified by PCR in all four newly sequenced strains in the study and in four additional strains of the same Beijing B0/W148 group. The other 32 MTB strains from different phylogenetic lineages were tested and revealed no inversions. We suggest that the initial, largest inversion changed the orientation of the three megabase (Mb) segment of the chromosome, and the second one occurred within the previously inverted region and partly restored the orientation of the 2.1 Mb inner segment of the region. This is another remarkable example of genomic rearrangements in the MTB, in addition to the recently published large-scale duplications. The described cases suggest that large-scale genomic rearrangements in the currently circulating MTB isolates may occur more frequently than previously considered, and we hope that further studies will help to determine the exact mechanism of such events.
Introduction
The Beijing genotype of Mycobacterium tuberculosis (MTB) has been shown to be spread all over the world [1]. In Russia, half of the local MTB population belongs to the Beijing genotype, and one-fourth of these strains belong to the so-called B0/W148 clonal group [2]. Members of this group possess a specific 17-band IS6110 restriction fragment length polymorphism (RFLP) pattern, which was originally identified in Russia in the 1990s [3,4]. In comparison with other Beijing genotypes, B0/W148 strains demonstrated an increased virulence in the macrophage model [5], a stronger association with multidrug resistance [6], and an increased transmissibility [7,8]. The Beijing B0/W148 has been defined as a 'successful Russian clone' of M. tuberculosis, and its pathobiology and phylogeography have recently been reviewed and discussed [2]. Together, these findings have led to the assumption that Beijing B0/W148 strains possess unique genomic features that gave them evolutionary advantages.
To date, a small amount of whole genome sequencing data for B0/W148 MTB strains has been uploaded into the international databases, including one genomic scaffold of the W-148 strain (GL877853.1) and raw sequencing data in the NCBI Sequence Read Archive for several strains from the Samara region in Russia [9]. The aim of this work was to get more profound knowledge regarding the properties of Beijing B0/W148 strains using a comparative genomics approach. All newly sequenced genomes were shown to be similar to the W-148 strain. Whole genome alignment between W-148 and the reference H37Rv MTB strain revealed two large chromosomal inversions in the W-148 genome. The largest inversion changed the orientation of the three megabase (Mb) segment of the chromosome. The second one occurred within the previously inverted region and partly restored the orientation of the large inner segment. These two inversions were flanked by partial or complete copies of the mobile genetic element IS6110 and affected large parts of the genome. Detailed PCR analysis of our sequenced strains (n = 4) revealed rearrangements in their genomes identical to those found in the W-148 strain.
Remarkably, only two cases of large-scale genome rearrangement events in the MTB have been reported until now. The first case was reported by Domenech P. et al. [10], describing the duplication of a 350 kilobase (Kb) region spanning Rv3128c to Rv3427c in strains belonging to the W/Beijing family of MTB lineage 2. The second case was described by Weiner B. et al. [11], showing that independent duplication events occurred in MTB lineages 2 and 4. We have found another example of chromosomal rearrangement, i.e. inversions of large DNA segments. Large inversions were previously detected in some bacteria [12,13], but not in MTB.
Here we report two large-scale genome inversions characteristic exclusively of the members of the MTB Beijing B0/W148 cluster and further hypothesize that these events occurred in their progenitor. This is the first report of a large-scale inversion in the MTB genome, and we hope that it will be one more step in filling the gap in the knowledge of the history and evolution of this pathogen.
Results
Genome sequencing of four clinical M. tuberculosis isolates belonging to the Beijing B0/W148 cluster
Four Russian MTB isolates SP1, SP7, SP21, and MOS11 of the Beijing B0/W148 cluster were selected for whole genome resequencing (Table 1). Genomes were sequenced up to 98% completion using 454 pyrosequencing with more than 10-fold coverage. To determine the taxonomic relationship between our strains and previously sequenced Beijing MTB strains deposited in GenBank, we performed a phylogenetic analysis using polymorphisms relative to the reference genome of the H37Rv MTB strain. The CTRI-4 strain, previously sequenced in our laboratory and representing the ancestral Beijing sublineage [17], was additionally included in the analysis. A phylogenetic tree was built based on overall SNPs extracted from genomic DNA sequences after excluding SNPs for the PE-PPE and PGRS protein families. This approach does not give perfect phylogenetic relationships in the case of fast-evolving microorganisms influenced by recombination; however, it can be very efficient for genetically monomorphic bacteria such as MTB [18]. The resulting phylogeny is shown in Figure 1. The phylogenetic tree demonstrated a close similarity between the genomes of the four Beijing B0/W148 strains sequenced in this study and the W-148 strain.
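A minimal sketch of this style of analysis, assuming the concatenated SNP columns have been exported to a FASTA alignment (the filename is hypothetical); Biopython's 'identity' distance model corresponds to the p-distance used here.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Concatenated SNP alignment, one row per strain (hypothetical file name).
alignment = AlignIO.read("snp_alignment.fasta", "fasta")

# p-distance: the fraction of differing sites ('identity' model in Biopython).
distance_matrix = DistanceCalculator("identity").get_distance(alignment)

# Neighbor-Joining tree, rooted on the out-group as the paper does with M. canettii.
tree = DistanceTreeConstructor().nj(distance_matrix)
tree.root_with_outgroup("M_canettii")
Phylo.draw_ascii(tree)
```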
Rearrangements in W-148 chromosomal DNA
Similarity of the genome sequences of the studied strains and W-148 gave us an opportunity to analyze structural genomic rearrangements within this group. The start of the W-148 genome was shifted relative to base 1 of the MTB H37Rv genomic sequence. Whole genome alignment of the W-148 and H37Rv chromosomal DNA sequences revealed the presence of two large inversions in the W-148 genome. The Mauve 2.3.1 program highlighted these chromosomal rearrangements by subdividing the W-148 genome into five local collinear blocks (LCBs) (Table 2). This analysis demonstrated that the first, third and fifth LCBs were conserved, whereas the second and fourth were inverted and rearranged in W-148 with respect to H37Rv (Figure 2).
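The orientation bookkeeping behind such a comparison can be reproduced from the aligner's block table alone. The sketch below is illustrative only: the coordinates are placeholders standing in for the Mauve LCB table (Table 2), with strand recorded as +1/-1 relative to H37Rv.

```python
# Each LCB: (name, ref_start, ref_end, strand_in_query). +1 means the block
# keeps the H37Rv orientation in W-148; -1 means it is inverted. The
# coordinates here are placeholders, not the values of Table 2.
lcbs = [
    ("I",   1,         737_000,   +1),
    ("II",  737_001,   3_427_000, -1),
    ("III", 3_427_001, 3_500_000, +1),
    ("IV",  3_500_001, 3_700_000, -1),
    ("V",   3_700_001, 4_411_532, +1),
]

for name, start, end, strand in lcbs:
    state = "conserved" if strand == +1 else "inverted and rearranged"
    print(f"LCB {name}: {end - start + 1:>9,} bp, {state}")
```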
Chromosomal rearrangements in SP21 MTB strain confirmed by NGS
Based on the similarity of genomic DNA sequences between the SP1, SP7, SP21, MOS11 and W-148 strains, we expected to find the discovered inversions in the other strains as well. To confirm this, we additionally sequenced the SP21 strain using the mate-pair library strategy. The assembly of the SP21 genome sequence was performed by combining 454 (70 K reads, mean length 540 bp) and Ion Torrent data (650 K reads, 180 bp, mate-pair) that together represented more than 40-fold coverage of the genome. Initial assembly was performed using the GS de novo Assembler and produced 391 contigs whose lengths ranged from 500 to 69,788 bp. Further scaffolding resulted in 12 scaffolds with a total length of 4.45 Mb (AOUF00000000.1). The alignment of the H37Rv, W-148, and SP21 chromosomal DNA sequences revealed the presence of identical large-scale inversions in both the SP21 and W-148 strains (Figure S1).
Chromosomal rearrangements in SP21 MTB strain confirmed by RFLP
The inversions observed in the SP21 genome relative to H37Rv were verified by RFLP analysis. Based on analysis of the H37Rv restriction endonuclease map, MluI was chosen for DNA digestion because its recognition sites were close to the recombination junctions. The DNA probes specific to the genome regions flanking the recombination junctions were obtained by PCR with specific primers (Table 1 in supplementary Text S1).
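Expected fragment sizes of this kind can be predicted directly from the sequence. Below is a minimal sketch of our own, assuming the chromosome is available as a single string; MluI's recognition site ACGCGT is real, but the toy sequence and probe position are placeholders, and the circularity of the MTB chromosome is ignored.

```python
def mlu_i_fragments(genome: str):
    """Return (start, end) intervals between successive MluI sites (ACGCGT)."""
    site = "ACGCGT"
    cuts, i = [0], genome.find(site)
    while i != -1:
        cuts.append(i)          # simplification: cut at the start of the site
        i = genome.find(site, i + 1)
    cuts.append(len(genome))    # linear approximation of a circular chromosome
    return list(zip(cuts, cuts[1:]))

def fragment_containing(genome: str, probe_pos: int):
    """Size of the restriction fragment that a probe at probe_pos hybridizes to."""
    for start, end in mlu_i_fragments(genome):
        if start <= probe_pos < end:
            return end - start
    return None  # probe position outside the sequence

# Toy example; a real run would load the H37Rv or SP21 chromosome sequence.
toy = "AAACGCGT" + "G" * 50 + "ACGCGT" + "T" * 20
print(fragment_containing(toy, probe_pos=30))  # fragment between the two sites
```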
The RFLP analysis was performed for both SP21 and H37Rv M. tuberculosis strains and revealed the rearrangements in the SP21 genome sequence relative to that of H37Rv ( Figure 3). In case of H37Rv, the hybridization signals from A&B, C&D, E&F, and G&H probes perfectly matched each other ( Figure 3A) and the size of RFLP fragments corresponded to the expected one ( Figure 1 in supplementary Text S1). In case of SP21, the RFLP pattern was different ( Figure 3B). The signals from alternative combinations of probes (A&G, F&D, E&C and B&H) matched each other, which indicated the presence of this specific inversion ( Figure 3C). The RFLP fragments corresponded to those expected in the inverted genome.
The presence of inversions in other members of Beijing B0/W148 and non-Beijing B0/W148 groups
To verify the presence of the discovered inversions in other Beijing B0/W148 MTB strains, we developed a set of PCR primers flanking the sites of the inversions. All primers were designed on the basis of the W-148 genome (Table 3). Two pairs (P1&P2, P3&P4) flanked the ends of the external inverted region (between LCB I&IV and LCB II&V, respectively); the other pairs (P5&P6, P7&P8) flanked the ends of the internal region (between LCB IV&III and LCB III&II, respectively) (Figure 2). The primers were designed in such a way that the same primers used in different combinations would be suitable for analysis of genomic arrangement in other, non-B0/W148 strains. The size of the expected PCR products is shown in Table 3.
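The junction-specific logic of these primer combinations can be emulated with a simple in-silico PCR check: a product is predicted only when one primer and the reverse complement of the other face each other within a plausible distance, i.e. only when the expected junction exists in that arrangement. The sketch is our own; the primer and junction sequences shown are placeholders, not those of Table 3.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def predict_amplicon(template: str, fwd: str, rev: str, max_len: int = 3000):
    """Return the predicted product length if fwd and rev flank a junction, else None."""
    f = template.find(fwd)
    r = template.find(revcomp(rev))   # the reverse primer binds the other strand
    if f == -1 or r == -1:
        return None                   # junction absent in this arrangement -> no product
    length = (r + len(rev)) - f
    return length if 0 < length <= max_len else None

# Placeholder junction sequence and primers, for illustration only.
junction = "TTGACCGGTA" + "ACGTACGTACGTACGTACGT" + "CCATGGTTCA"
print(predict_amplicon(junction, fwd="TTGACCGGTA", rev="TGAACCATGG"))  # -> 40
```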
On the contrary, the primer pairs 3, 4, 7, and 8, using the same primers in different combinations (Table 3), amplified the expected PCR products in non-B0/W148 strains (Figure 4B, lanes 3, 4, 7, and 8), whereas no PCR products were obtained for Beijing B0/W148 strains (Figure 4A, lanes 3, 4, 7, and 8). The differences in length of the amplicons produced by primer sets 4, 7, and 8 for the groups of non-B0/W148 Beijing and non-Beijing (Ural and LAM) strains (Table 3) are related to the presence of a complete copy of IS6110 in the analyzed region in non-B0/W148 Beijing strains, in contrast to LAM and Ural strains. The specificity of the produced PCR products was confirmed by Sanger sequencing in all cases.
These results were additionally verified by using alternative primer sets selected in a similar way. Primer sequences, expected amplicon lengths, and the electrophoregram of PCR products obtained for Beijing B0/W148 and non-Beijing B0/W148 strains are presented in Text S2.
Thus, we demonstrated the presence of identical inversions in chromosomal DNA of the studied Beijing B0/W148 strains (n = 8), which appears to be a unique event specific to this clonal cluster.
The hypothetical reconstruction of recombination events in the Beijing B0/W148 progenitor
Further, we tried to reconstruct the order of rearrangements that occurred in a hypothetical W-148 progenitor genome. We suggested that the order and orientation of LCBs in the genome of the W-148 progenitor 1 (P1) was the same as in H37Rv and in other Beijing strains, and reconstructed it in silico (Figure 5). During the evolution, the first 3 Mb inversion occurred symmetrically across the replication axis and affected LCBs II, III and IV, with the formation of progenitor 2 (P2). This recombination event rearranged the chromosomal DNA between the Rv0609a and Rv3327 genes relative to H37Rv (Figure 2). However, in other Beijing strains, the region between the Rv3326 and Rv3327 genes was already disrupted by integration of IS6110. Interestingly, we found only parts of IS6110 at the recombination junctions of the inverted region in the genome of W-148. The 812-bp and 543-bp fragments of IS6110 were detected at the boundaries of LCBs I&II and LCBs IV&V, respectively. These two parts were inverted and together formed a perfect whole IS6110 sequence. We suppose that P1 had two inverted copies of IS6110, which were integrated into sites equidistant from the terminus of replication (ter) region. According to our hypothesis, the next recombination step occurred in the progenitor 2 genome and affected LCB III between the Rv3020c and disrupted Rv1135c genes. This inversion restored the original orientation of this segment to the initial form, as in P1 and H37Rv, and led to the formation of W-148. The inversion of this LCB was most probably mediated by two inverted complete copies of IS6110, which were found at the borders of this LCB. Remarkably, all Beijing strains in the NCBI database have a complete copy of IS6110 between LCBs II and III (between the Rv3019 and Rv3020c genes), between LCBs III and IV (disrupting the Rv1135c gene), and between LCBs IV and V (between the Rv3326 and Rv3327 genes), while they do not have it between LCBs I and II (between the Rv0609 and Rv0610c genes).
Figure 1. Comparative phylogenetic analysis of strains under study and 12 whole genomes from the NCBI database. The phylogenetic tree based on all SNPs of the genomes was constructed using the Neighbor-Joining algorithm. Evolutionary distances were calculated using the p-distance method. doi:10.1371/journal.pone.0084971.g001
Table 1. Genotyping and drug resistance data of the B0/W148 strains sequenced in this study.
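The two-step scenario can be written down as operations on a signed permutation of the five LCBs. A minimal sketch of our own: each inversion reverses a segment and flips block orientations, and the composition reproduces the block pattern observed in the W-148 alignment (LCBs II and IV inverted, III conserved).

```python
def invert(blocks, i, j):
    """Reverse the segment blocks[i:j] and flip each block's orientation."""
    seg = [(name, -strand) for name, strand in reversed(blocks[i:j])]
    return blocks[:i] + seg + blocks[j:]

P1 = [("I", +1), ("II", +1), ("III", +1), ("IV", +1), ("V", +1)]

# Step 1: the ~3 Mb inversion spanning LCBs II-IV, symmetric about ter.
P2 = invert(P1, 1, 4)       # -> I, -IV, -III, -II, V

# Step 2: inversion of the inner segment restores LCB III's orientation.
W148 = invert(P2, 2, 3)     # -> I, -IV, III, -II, V

print(W148)  # LCBs II and IV remain inverted, matching the Mauve alignment
```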
Discussion
This study focused on the genomic characterization of the MTB strains of the Beijing B0/W148 cluster, endemic to Russia and representing an epidemiologically successful variant of MTB [2,5,6]. Recently, Mokrousov [2] hypothesized that "B0/W148 likely originated in Siberia, and its primary dispersal was driven by a massive population outflow from Siberia to European Russia in the 1960-80s" and that "a historically recent, phylogenetically demonstrated successful dissemination of the Beijing B0/W148 strain was triggered by an advent and wide use of the modern anti-TB drugs and was due to its remarkable capacity to acquire drug resistance". For this reason, we sequenced the genomes of four Beijing B0/W148 MTB clinical strains isolated in Russia in 2010-2011.
We used the 454 pyrosequencing technology, which produces long reads (up to 800 bp). This gave us a good opportunity to see indels and to identify most of the LSPs (large sequence polymorphisms) present in the studied genomes. Additionally, the genome sequence of the W-148 strain represented in GenBank was included in the analysis.
Comparing the genomes of the H37Rv and W-148 strains, we detected two large-scale inversions in the genome of W-148, which were confirmed to be present in all studied strains of the Beijing B0/W148 cluster. Notably, the presence of large-scale chromosomal rearrangements within the mycobacteria genus was recently shown by in silico analysis [20]. The genome of Mycobacterium smegmatis mc(2) 155 contains a 56 Kb duplicated region when compared with its ATCC 607 progenitor. This duplication is flanked by two copies of an IS1096 element [21]. Comparative genomics revealed two large tandem chromosomal duplications, DU1 and DU2, in the Mycobacterium bovis BCG strain. DU1 was found only in BCG Pasteur, while four different forms of DU2 were found in different BCG strains [22]. Two cases of large duplications that occurred in MTB lineages 2 and 4 have been reported to date [10,11]. Some of the duplicated regions were flanked by IS6110 elements, supporting a general assertion that the majority of genomic rearrangements are mediated by mobile genetic elements or repeats [23].
As far as large-scale chromosomal inversions are concerned, a single event was detected among M. tuberculosis KZN strains, and there were several such events in Mycobacterium avium evolution. Three KZN strains sequenced by the Broad Institute showed a large-scale inversion of nearly 2.5 Mb (spanning coordinates ~1 Mb to ~3.5 Mb, relative to the origin of replication), although the resequencing of one of these strains in another laboratory found no evidence for this event [24]. In M. avium, large-scale inversions were found between M. avium subspecies hominissuis and subspecies paratuberculosis [25]. The interspecies comparison of genomes of fish M. marinum isolates and M. tuberculosis also revealed X-shaped chromosomal inversions derived from the accumulation of rearrangements that were symmetrical across the replication axis [26].
In our study, we discovered large-scale chromosomal rearrangements characteristic of MTB isolates of the Beijing B0/W148 cluster. The presence of these inversions in all members of the Beijing B0/W148 group was confirmed by PCR, sequencing and RFLP hybridization analysis. Additionally, we suggest a two-step scenario of evolution for these strains. In the first step, a large-scale inversion of the 3 Mb segment of the chromosome occurred. This assumption is based on the fact that the boundaries of the inversion are perfectly equidistant from the site of the terminus of replication (i.e., symmetrical across the replication axis). There is a lot of data supporting chromosome rearrangement around the ter region in other bacterial genomes [27], and MTB may have implemented a similar mechanism. However, the reason why we have found only half of IS6110 at the boundaries of the inversion is not clear. Remarkably, one part of this disrupted IS6110 contains a site for PvuII, while its second part contains the sequence used as a probe in IS6110-RFLP typing (between LCBs I and II), which is why only one band is detected in the IS6110-RFLP profile. This ~7.4 Kb band corresponds to two sites for PvuII found in unique regions of the W-148 genome (Figure S2). Using the BioNumerics version 5.1 package, we compared a collection of IS6110 RFLP profiles of different Beijing and non-Beijing genotypes. As a result, only members of the Beijing B0/W148 cluster contained the ~7.4 Kb band, demonstrating their unique origin. The second inversion occurred with LCB III and partly restored the orientation of the large inner segment. As noted above, the IS6110 flanking LCB III is found in all Beijing strains available in GenBank. One of the characteristics of IS6110 integration is a duplication of the 3-4 base pair region flanking the inserted element at the insertion site [28]. We checked the presence of these duplications in the genomes of B0/W148 and non-B0/W148 Beijing strains. In non-B0/W148 strains, the duplication of AGC proximal to the IS6110 insertion site between LCBs II and III was found, while CAG was duplicated between LCBs III and IV (Figure 6). In B0/W148 strains, the sequences of the duplicated triplets in LCB III were in the same orientation, while the sequences of the triplets in LCBs II and IV were inverted and rearranged, which corresponded to the origin of W-148 from W-148 Progenitor 2 (Figure 6). In this case, a homologous recombination between IS6110 elements appears to be the most appropriate mechanism for the inversion.
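The target-site-duplication argument can be checked mechanically: extract the few base pairs immediately flanking each mapped IS6110 copy and compare the flanks. A sketch under the assumption that the element's coordinates are already known; the sequence and coordinates below are toy values, not those of Figure 6.

```python
def flanking_repeats(genome: str, is_start: int, is_end: int, k: int = 3):
    """Return the k-bp sequences immediately left and right of an IS element."""
    left = genome[is_start - k:is_start]
    right = genome[is_end:is_end + k]
    return left, right

# Toy genome with a direct 'AGC' target site duplication around an element
# (the 'ISISISIS' placeholder stands in for the IS6110 body).
toy = "TTTTAGC" + "ISISISIS" + "AGCTTTT"
left, right = flanking_repeats(toy, is_start=7, is_end=15)
if left == right:
    print(f"direct TSD {left!r}: consistent with a simple IS6110 insertion")
else:
    print(f"flanks {left!r}/{right!r} differ: rearrangement after insertion")
```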
Another possible evolutionary scenario suggests that LCBs II and IV recombined independently of LCB III. According to this hypothesis, LCBs II and IV could have recombined simultaneously or stepwise. However, it seems unlikely that blocks II and IV were involved in two independent recombination events simultaneously. Thus, it would have to have been a sequential recombination process: at first block II or IV recombined, and then the remaining one. Since these blocks are located very distantly from each other, these recombination events most likely were independent. This scenario is therefore possible but looks improbable.
It has not escaped our notice that the described large rearrangements could have potential consequences for the phenotype, as described for other bacteria [29]. Therefore, we looked more closely at the genes involved in the postulated recombination events. As described in the Results section above, the discovered inversions occurred in the proximity of the Rv0609, Rv0610c, Rv1135c, Rv3019, Rv3020c, Rv3326, and Rv3327 genes (Figure 2). However, only in the B0/W148 strains is the disrupted part of the IS6110 element found between Rv0609 and Rv0610c, in comparison with other Beijing strains. Both of these genes code for hypothetical proteins, they are located far away from the site of recombination, and it is hard to assume any influence on the phenotype. To understand the causes of the recombination events in Beijing B0/W148 strains, the complete list of unique cluster-specific SNPs (n = 94) was built (Table S1). We included only those mutations which were found in at least four of the five isolates under consideration. All of these SNPs were mapped to genes coding for the proteins of the repair, recombination and replication (3R) system in the MTB [30,31]. No non-synonymous SNPs were found. One synonymous mutation, Gly269Gly, was found in the RecF protein, which could hardly be associated with the large-scale inversions.
To classify the precise genetic sublineage of our sequenced strains, we examined five specific LSPs present in genomes of the East Asian lineage (RD105, RD207, RD181, RD150 and RD142) [32,33]. According to this analysis, the studied strains belong to Beijing sublineage 3 (RD105, RD207 and RD181 were deleted), as do the strains with the large duplications recently reported [10,11]. These studies reported that the large duplication occurred in strains within sublineages 3, 4 and 5, spanning 350 Kb of the chromosome from the Rv3128c to the Rv3427c genes. Additionally, this duplication was flanked by two complete copies of IS6110 in the same orientation. After a detailed review of the strains studied, we found no evidence of IS6110 duplication, and the locations of the inversion boundaries were different. Remarkably, according to Weiner et al. [11], the strain T67 had a downstream boundary at Rv3326, which corresponds to the boundary of LCB V in the W-148 genome. Interestingly, this region additionally corresponds to RvD5 in the H37Rv genome and to Rv3326, which has a part of IS6110 flanking it from one side [34,35]. In almost all Beijing strains, the orientation of IS6110 in this region is different from that in H37Rv.
In summary, we described a chromosomal rearrangement, inversions of large DNA segments, in strains of the MTB Beijing B0/W148 cluster. The members of this group possess the particular pathobiological features mentioned above, and further studies are necessary to determine the impact of the found inversions on the biological properties of the pathogen. The inversions described here and the previously reported duplications in the region from 3 Mb to 4 Mb are intriguing and raise increased interest in these genomes. These rearrangements may possibly reflect the evolution of the global chromosomal DNA topology or local DNA-DNA interactions within this region. We hope that our study and studies of other bacteria concerning large-scale rearrangements will shed light on the understanding of the genome evolution of MTB.
Mycobacterial isolates
A total of 40 MTB strains were obtained from the culture collections of the Research Institute of Phtisiopulmonology (St. Petersburg, Russia) and the Moscow Scientific-Practical Center of Treatment of Tuberculosis of Moscow Healthcare (Moscow, Russia). Susceptibility testing was done using a BACTEC MGIT 960 Culture system (Becton Dickinson, USA) by the standard protocol. Standard MTB genotyping methods, including spoligotyping and 24-loci MIRU-VNTR, were applied to these strains as previously described [16,36] (Table S2). Of them, 28, 8, and 4 isolates had Beijing, LAM, and Ural spoligotype profiles, respectively. For Beijing isolates with spoligotype SIT1, IS6110 RFLP analysis was additionally performed [37]. The BioNumerics version 5.1 package (Applied Maths, Belgium) was used for band comparison. According to the RFLP analysis, eight isolates belonged to the Beijing B0/W148 cluster. Four of them were selected for the current whole genome re-sequencing project (Table 1).
Whole genome sequencing and assembly
DNA extraction was performed as previously described [37]. The four B0/W148 strains SP1, SP7, SP21, and MOS11 were sequenced using the Roche 454 Life Sciences Genome Sequencer FLX following the manufacturer's instructions (Roche 454 Life Science, USA). Assembly of raw sequencing reads with an average length of 540 bases was performed with the GS de novo assembly software version 2.8 (Roche 454 Life Science, USA). Raw sequencing data for the MTB genomes SP1, SP7, SP21, and MOS11 were deposited in the NCBI Sequence Read Archive (http://www.ncbi.nlm.nih.gov/Traces/sra/) under accession numbers SRX216883, SRX216889, SRX216899, and SRX216918.
The phylogenetic tree was built based on overall SNPs extracted from genomic DNA sequences, after excluding SNPs for the PE-PPE and PGRS protein families, using MEGA4. M. canettii was taken as an out-group. A Neighbor-Joining algorithm was used to build the tree, and phylogenetic distance was calculated using the p-distance.
PCR verification of inversions
The standard PCR was carried out in 25 µL of reaction mixture. The reaction mixture contained 66 mM Tris-HCl (pH 9.0), 16.6 mM (NH4)2SO4, 2.5 mM MgCl2, 250 µM of each dNTP, 1 U of Taq DNA polymerase (Promega, USA), 2.5 M betaine (SIGMA, USA) and 10 pmol of each primer (Table 3, Text S1). One to ten nanograms of genomic DNA were used as a template for PCR. A universal amplification profile included the following steps: an initial heating step at 94°C for 2 min, followed by 30 cycles of 94°C for 30 sec, 61°C for 15 sec and 72°C for 20 sec, and a final step at 72°C for 5 min. The PCR products were then sequenced by conventional Sanger capillary methods on an ABI Prism 3730 Genetic Analyzer (Applied Biosystems, USA; Hitachi, Japan) and compared to the H37Rv and W-148 genomes.
RFLP analysis
RFLP analysis was performed as recommended by van Embden et al. [37] with modifications. Briefly, the whole genomic DNA of the SP21 and H37Rv M. tuberculosis strains was digested with 15 units of MluI (Thermo Scientific, USA) in the recommended reaction buffer overnight at 37°C. Probes for the Southern analysis were obtained by conventional PCR using Amersham ECL labeling and detection systems (GE Healthcare) with dedicated primer sets (Supplementary Text S2). The obtained profiles on ECL films were scanned and processed with the BioNumerics version 5.1 package (Applied Maths, Belgium).
Supporting Information
Figure S1. Alignment of the genomes of the H37Rv, W-148, and SP21 MTB strains represented by Mauve 2.3.1. Colored outlined blocks surround regions of the genome sequence that aligned to part of another genome (LCB numbering is the same as in Figure 2 of the manuscript). Lines link blocks with homology between genomes. Genomes from top to bottom: H37Rv, W-148, and SP21. Vertical red lines in SP21 correspond to the boundaries of the scaffolds. Scaffolds 5, 3, and 9, containing sequences of the inverted regions, are indicated by double-headed arrows. The sequences flanking the sites of the inverted regions were found within scaffolds 5, 3, and 9. Scaffold 5 (392,333 bp) includes the full sequence of LCB IV (for LCB numbering and length see Table 2 and Figure 2 in the main text) and parts of LCBs I and III (29 Kb and 16 Kb, respectively). Text S1. RFLP analysis for confirmation of inversions.
|
2016-05-04T20:20:58.661Z
|
2014-01-08T00:00:00.000
|
{
"year": 2014,
"sha1": "500850fba7b30e54d5f0e38cb8b71ca1099c97e0",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0084971&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "500850fba7b30e54d5f0e38cb8b71ca1099c97e0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|