Multiple filamentation induced by input-beam ellipticity

The standard explanation for multiple filamentation (MF) of intense laser beams has been that it is initiated by input beam noise (modulational instability). In this study we provide the first experimental evidence that MF can also be induced by input beam ellipticity. Unlike noise-induced beam breakup, the MF pattern induced by ellipticity is reproducible shot to shot. Moreover, our experiments show that ellipticity can dominate the effect of noise, thus providing the first experimental methodology for controlling the MF pattern of noisy beams. The results are explained using a theoretical model and simulations.

PACS numbers: 260.5950, 190.5530

The propagation of high-power ultrashort pulses through the atmosphere is currently one of the most active areas of research in nonlinear optics, with potential applications such as remote sensing of the atmosphere and lightning control [1]. In experiments, narrow filaments with a typical width of 100 µm have been observed to propagate over distances of hundreds of meters, i.e., over many Rayleigh lengths. The stability of a single filament over such long distances is now known to result from the dynamic balance between the focusing Kerr nonlinearity, diffraction, and the defocusing effect of plasma formation due to multiphoton ionization. The initial stage of propagation, during which the filaments are formed, is however much less understood. In particular, since in these experiments the laser power is many times the critical power for self-focusing, a single input beam typically breaks up into several long and narrow filaments, a phenomenon known as multiple filamentation (MF). Since MF involves a complete breakup of the beam's cylindrical symmetry, it has to be initiated by a symmetry-breaking mechanism. The standard explanation for MF in the literature has been that it is initiated by input beam noise [2]; see also Ref. [3] for a review. Since noise is, by definition, random, this implied that the MF pattern would differ from shot to shot, i.e., that the number and location of the filaments are unpredictable. This constitutes a serious drawback in applications where precise localization is crucial (e.g., laser eye surgery) or in experiments where one wants to measure the filament properties (power, transverse profile, etc.) after some propagation distance. Unfortunately, noise is always present in such high-power lasers, and it is not easy to eliminate to a degree that would lead to a deterministic MF pattern. Recently it was predicted theoretically that input beam ellipticity can also lead to MF [4]. In this case the MF pattern is deterministic, i.e., reproducible from shot to shot. In this study we provide the first experimental evidence that input beam ellipticity can indeed induce a deterministic MF pattern.
Moreover, although a certain amount of noise is present in our beam, we observe that the MF pattern is nearly identical from shot to shot. This shows that sufficiently large ellipticity can dominate noise in the determination of the MF pattern. In other words, rather than trying to eliminate noise, one can control the MF pattern by adding sufficiently large ellipticity to a noisy input beam. Most recent experimental studies of MF of intense laser beams have been performed in connection with atmospheric propagation [5,6,7,8]. Despite some important differences (nonlinear response, dispersion, etc.) between gases and condensed media, it is expected that the physical processes leading to MF are very similar in both cases. Indeed, some of the present authors have recently demonstrated self-guided propagation of femtosecond light pulses in water over distances exceeding several Rayleigh lengths [9]. In the experiments reported in this study, we increase the incident beam power and modify its spatial parameters, resulting in MF in water. A 170-fs, 527-nm pulse was provided by a second-harmonic compressed Nd:glass laser system (TWINKLE, Light Conversion Ltd., Lithuania) operated at a 33-Hz repetition rate. The spatially filtered beam was focused to a ∼85-µm FWHM beam waist at the entrance of the water cell by means of an f = +500 mm lens. The incident energy was varied by means of a half-wave plate and a polarizer. The focused beam has a small intrinsic ellipticity, characterized by the parameter e = a/b = 1.09. A highly elliptical beam (e = 2.2) was formed by inserting a slightly off-axis iris into the beam path. The output face of the water cell was imaged onto a CCD camera (Pulnix TM-6CN with a frame grabber from Spiricon, Inc., Logan, Utah) at 7× magnification by means of an achromatic objective (f = +50 mm). In the first series of experiments we recorded transverse distribution patterns at a fixed propagation length z = 31 mm (∼0.7 L_DF, where L_DF = n k_0 r_0^2 / 2 is the diffraction length; see the worked estimate below) as we increased the incident power, see Fig. 1. Two cases were examined: a near-circular input beam (e = 1.09) and an elliptic beam (e = 2.2). Several important conclusions can be drawn: 1) The threshold power for MF is much lower for the elliptic beam. 2) The number of filaments increases with input power. 3) At power levels moderately above the threshold for MF, in addition to the central filament, there are two filaments along the major axis of the ellipse; at higher powers there are additional filaments in the perpendicular direction, and at even higher powers (P = 23 P_cr) one can observe a quadruple of filaments along the bisectors of the major and minor axes. 4) MF starts as nucleation of an annular ring, which contains the power that was not trapped in the central filament (this is more evident for e = 1.09). 5) Since the MF patterns shown in Fig. 1 were reproducible from shot to shot, they were not induced by random noise. 6) Investigation of the dynamics of the MF structure (data not presented here) showed that it is robust under propagation, i.e., after an initial transient each of the filaments propagates as an independent entity.
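For concreteness, the quoted value of ∼0.7 L_DF can be checked from the beam parameters. The worked estimate below assumes a Gaussian beam whose 1/e amplitude radius r_0 relates to the FWHM by FWHM = sqrt(2 ln 2) r_0, with n = 1.33 for water and λ_0 = 527 nm; the width convention is our assumption, not stated explicitly in the text.

```latex
% Worked estimate of the diffraction length L_DF = n k_0 r_0^2 / 2
\[
r_0 = \frac{\mathrm{FWHM}}{\sqrt{2\ln 2}} \approx \frac{85~\mu\mathrm{m}}{1.18} \approx 72~\mu\mathrm{m},
\qquad
k_0 = \frac{2\pi}{\lambda_0} \approx 1.19\times 10^{7}~\mathrm{m}^{-1},
\]
\[
L_{DF} = \frac{n k_0 r_0^2}{2}
\approx \frac{1.33\,\bigl(1.19\times 10^{7}~\mathrm{m}^{-1}\bigr)\,\bigl(72\times 10^{-6}~\mathrm{m}\bigr)^2}{2}
\approx 41~\mathrm{mm},
\qquad
\frac{z}{L_{DF}} = \frac{31~\mathrm{mm}}{41~\mathrm{mm}} \approx 0.75,
\]
```

consistent with the quoted z ≈ 0.7 L_DF.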
In Fig. 1 we observe that the side filaments always appear as pairs located symmetrically along the major and/or minor axis, and/or as quadruples located symmetrically along the bisectors of the major and minor axes. This observation can be explained by the following symmetry argument. Consider an elliptic input beam of the form E_0(x, y, t) = F(x^2/a^2 + y^2/b^2, t). Since the medium is isotropic, the electric field E should be symmetric with respect to the transformations x → −x and y → −y. Therefore, if the filamentation pattern is induced by input beam ellipticity, it can only consist of a combination of 1) a single on-axis central filament, 2) pairs of identical filaments located along the ellipse major axis at (±x, 0), 3) pairs of identical filaments located along the minor axis at (0, ±y), and 4) quadruples of identical filaments located at (±x, ±y). Whereas ellipticity decreases the threshold power for MF, it increases the threshold power for the formation of a single filament. Indeed, the thresholds for observing a single filament at z = 31 mm were 6 P_cr and 4.9 P_cr for the elliptic and the near-circular beams, respectively. This ∼20% increase is in good agreement with the theoretical prediction for the increase in the threshold power for collapse (of cw beams) due to beam ellipticity [10]. In the experiment shown in Fig. 2 we produced two input beams with the same ellipticity parameter (e = 2.2) but with different orientations in the transverse plane. In both cases we observe that the beam is elliptic and still focusing at P = 5 P_cr, a single central filament at P = 7 P_cr, an additional pair of comparable-power secondary filaments along the major axis of the ellipse at P = 10 P_cr, and a second pair of weaker filaments in the perpendicular direction at P = 14 P_cr. The rotation of the filamentation pattern with the ellipse rotation thus confirms that the MF in these experiments is indeed induced by the intrinsic beam ellipticity. We recall that it was recently shown that polarization effects can also lead to a reproducible MF pattern [11]. In that case, however, the orientation of the filamentation pattern is determined by the direction of linear polarization. To check this, we changed the direction of linear polarization of the incident beam and verified that it has no effect on the orientation of the MF pattern. Indeed, polarization effects are important only when the radius of a single filament becomes comparable to the wavelength; this is not the case in our experiments, as the FWHM diameter of a single filament is ∼20 µm. In our simulations we used a simpler model of the propagation of cw beams in a medium with a saturable nonlinearity, i.e.,

    i A_z(z, x, y) + ΔA + |A|^2 A / (1 + ε_sat |A|^2) = 0.    (1)

This model is considerably simpler than the physics governing the propagation of intense ultrashort pulses in water. Nevertheless, numerical simulations of Eq. (1) reproduced the same qualitative features observed experimentally. For example, in Fig. 3(a) the MF pattern consists of a strong central filament, a pair of filaments along the minor axis, and a second pair of weaker filaments along the major axis. In Fig. 3(b) the MF pattern consists of a central filament, a quadruple of filaments along the lines y = ±0.37x, and a pair of very weak filaments along the major axis. These simulations, therefore, suggest that MF induced by ellipticity is a generic phenomenon that does not depend on the specific optical properties of the medium (air, water, silica, etc.) or on pulse duration.
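The qualitative behavior of Eq. (1) is readily explored with a standard split-step Fourier scheme. The sketch below is illustrative only (it is not the authors' code); the grid size, saturation parameter, input amplitude, and ellipticity are arbitrary placeholder values.

```python
# Minimal split-step Fourier sketch for Eq. (1):
#   i A_z + Lap(A) + |A|^2 A / (1 + eps_sat |A|^2) = 0
# Grid, amplitude, and ellipticity values are illustrative placeholders.
import numpy as np

N, L = 256, 40.0                         # grid points, transverse box size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2

e, eps_sat, amp = 2.2, 0.1, 6.0          # ellipticity a/b, saturation, amplitude
a, b = np.sqrt(e), 1.0 / np.sqrt(e)      # semi-axes chosen so that a*b = 1
A = amp * np.exp(-(X**2 / a**2 + Y**2 / b**2))   # elliptic Gaussian input

dz, nsteps = 1e-3, 2000
half_lin = np.exp(-1j * K2 * dz / 2)     # exact linear (diffraction) half-step
for _ in range(nsteps):
    A = np.fft.ifft2(np.fft.fft2(A) * half_lin)
    I = np.abs(A)**2                     # nonlinear step: pointwise phase rotation
    A *= np.exp(1j * dz * I / (1 + eps_sat * I))
    A = np.fft.ifft2(np.fft.fft2(A) * half_lin)

# |A|**2 now holds the output intensity; filaments appear as local maxima.
```

For input powers well above critical, the output intensity develops a central peak plus side peaks arranged along the ellipse axes or their bisectors, consistent with the symmetry argument above.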
In conclusion, we have demonstrated for the first time that input beam ellipticity can lead to MF. Unlike noise-induced MF, the filamentation pattern is reproducible and consists only of a central filament and/or pairs of identical filaments lying along the major and/or minor axes of the ellipse, and/or quadruples of identical filaments along the bisectors of the major and minor axes. The effect of ellipticity on MF appears to be generic, i.e., independent of the optical properties of the medium. Since a certain amount of astigmatism is always present in experimental setups, this observation may explain previous MF experiments in which the filamentation pattern was reproducible. In addition, this study shows that one can overcome the random nature of noise and control the MF pattern simply by adding large ellipticity to the input beam.
Review of surveillance systems for tephritid fruit fly threats in Australia, New Zealand, and the United States

Abstract

Many countries conduct fruit fly surveillance but, while there are guidelines, practices vary widely. This review of some countries in the Pacific region demonstrates the diversity of fruit fly surveillance practices. All utilize 3 parapheromones (trimedlure, cuelure, and methyl eugenol) to trap adult male fruit flies. Some target species are not attracted to these compounds, so other attractants such as food-based lures are used in certain areas or circumstances. Lure loading and replacement cycles depend on the target species and the local climate. Malathion and dichlorvos (DDVP) are commonly used toxicants, but not in all countries, and other toxicants are being developed to replace these older-generation pesticides. Jackson and Lynfield are commonly used trap designs, but newer designs such as cone and Biotrap are being adopted. Local factors such as chemical registrations and climate affect the choice of trap, lure, dispenser, toxicant, and bait concentration. These choices affect the efficacy of traps, in turn influencing optimal trap deployment in space and time. Most states now follow similar practices around trap inspection, servicing, and data handling, but these processes will be disrupted by emerging automated trap technologies. Ultimately, different practices can be attributed to the unique fruit fly risk profiles faced by each state, particularly the suite of fruit flies already present and those that threaten from nearby. Despite the diversity of approaches, international trade in fruit continues with the assurance that fruit fly surveillance practices evolve and improve according to each country's risk profile and incursion experience.

Introduction

As the world tries to feed the burgeoning human population, trade and travel have resulted in the accelerating international spread of insect pests (Venette and Hutchison 2021). Some of the most critical invasive insects are the fruit flies (Diptera: Tephritidae), which cause direct economic impact on a wide range of fresh horticultural commodities (e.g., fruit and vegetables, hereafter "fruit") (Follett et al. 2021). Many countries impose stringent quarantine restrictions to prevent their entry (Follett and Neven 2006). Therefore, the detection of an alien fruit fly can disrupt domestic and international trade and usually triggers a regulatory response to prevent establishment or eradicate an incipient population (Suckling et al. 2016).
Fruit fly surveillance is conducted in most countries in the Pacific region, and elsewhere in the world. Here, we compare and contrast the operational details of the fruit fly surveillance systems of 4 Pacific countries/states (hereafter "states"): Australia, California, Hawaii, and New Zealand (NZ). We also briefly touch on aspects from other jurisdictions. The origins and nature of fruit fly risks differ markedly between the 4 states. While Hawaii manages several well-established exotic species, NZ is free of all economically important fruit flies and focuses on early detection and eradication of occasional incursions. California experiences relatively high invasion pressure, and Australia's states vary, with several endemic pests in the north and fruit fly freedom in the south. All 4 states conduct surveillance programs based on identifying, assessing, and ranking the flies of greatest threat. The target species determine which lure types are used. Once flies are attracted to the vicinity of the traps, trap architecture and toxicants are important to maximize capture. Trap placement and frequency of inspection may optimize early detection. However, investment in surveillance resources (traps, lures, labor, etc.) must be balanced against the level of threat and the potential damages of an incursion.

Figure 1 summarizes some of the main factors contributing to the probability of detecting an incipient fruit fly population early. Our review is structured to assess many of these different factors and to demonstrate the similarities and differences among the approaches taken in 4 states with some of the most active biosecurity systems: NZ, Australia, Hawaii, and California. For these jurisdictions, we provide specific details to complement and update the global review of exotic fruit fly trapping networks by Quilici and Donner (2012), and synthesize recent research that is relevant to fruit fly surveillance. We show how current practices in each state continue to evolve to reflect previous experiences, perceived threats, and budget justification. These surveillance programs all achieve a level of assurance accepted by importing countries, despite differing in many details. The components in Fig. 1 demonstrate the complexity of fruit fly surveillance, and that there is no "one size fits all" approach to tephritid fruit fly surveillance.

Target Species and Risk Assessment

Threats to New Zealand

Fruit flies are regarded as one of the most significant biosecurity threats to NZ's horticultural industries, which have a combined export value of over NZD 4 billion per annum. Of our 4 chosen states, only NZ is free from fruit flies of economic importance (Fig. 2), and we focus frequently on NZ's effective surveillance system for early detection.

There are multiple sources of fruit fly risk for NZ (Popa-Báez et al. 2021). Australia, NZ's closest neighbor and a major trading partner, has c. 90 fruit fly species, about 10 of which are of economic concern.
Of these, the most damaging is the Queensland fruit fly, Bactrocera tryoni (Froggatt), which is present in non-arid eastern Australia (Dominiak and Mapson 2017). This species has been detected in NZ on 8 occasions, including an established population in Auckland in 2015 which was successfully eradicated (Kean 2016). The lesser Queensland fruit fly, Bactrocera neohumeralis (Hardy), has a similar host range and is also present along the eastern Australian coast north of Sydney (Dominiak and Worsley 2016). Jarvis fruit fly, Bactrocera jarvisi (Tryon), is endemic in Queensland and possibly north coastal New South Wales (NSW) (Dominiak and Worsley 2017). However, these latter 2 species have never been detected in NZ.

Mediterranean fruit fly (Ceratitis capitata Wiedemann) established and was eradicated from at least 4 places in NZ in 1907 (New Zealand Department of Agriculture 1907, 1908), when it was widespread in southeastern Australia (Dominiak and Daniels 2012). This species is no longer present in eastern Australia, reducing the risk of introduction to NZ. Nevertheless, C. capitata established in Auckland in 1996, with haplotyping and pathway analysis suggesting that the origin of this incursion was Hawaii (George Gill, MPI, pers. comm.). This infestation was successfully eradicated (Holder et al. 1997). Together, C. capitata and B. tryoni account for 11 of the 15 exotic fruit fly detections in NZ.

There are many Asian fruit flies that pose a risk to NZ from other trading partners. Oriental fruit fly [Bactrocera dorsalis (Hendel)] is considered the greatest fruit fly threat to NZ and Australia from Asia, and (as its synonym B. papayae) was trapped in NZ in 1996. At that time, B. dorsalis had infested some 850,000 ha of Queensland, Australia (Cantrell et al. 2002), and this was the likely source of the NZ incursion. Since then, B. dorsalis has been eradicated from Australia (Cantrell et al. 2002, Suckling et al. 2016), reducing the risk of introduction to NZ. However, B. dorsalis remains of critical concern to both Australia and NZ due to its demonstrated invasion ability, competitiveness with other fruit flies, and wide host range (Duyck, Sterlin et al. 2004, Clarke et al. 2005, De Villiers et al. 2016).

The remaining NZ trap detections were tropical Pacific species: Fijian fruit fly Bactrocera passiflorae (Froggatt) in 1990, Zeugodacus tau (Walker) in 2016, and B. fascialis in 2019 (MacLellan et al. 2021). The first of these is widespread in Fiji and present also in Niue, Tonga, Tuvalu, and the Wallis & Futuna Islands; the second is widespread in eastern Asia; and the third is known only from Tonga (Clarke et al. 2004).

Threats to Australia

For Australia as a whole, the main exotic threat comes from South-East Asia, because NZ is free of damaging fruit flies. Therefore, Australian surveillance is conducted primarily for domestic and exotic Asian tephritids. In October 1995, the exotic B. papayae (Drew and Hancock) was detected near Cairns in northern Queensland (Gillespie 2003). Shortly afterward, in November 1997, B. philippinensis (Drew and Hancock) was detected in the Northern Territory. Both incursions were eradicated (Hancock 2013). These 2 species were later synonymized into B. dorsalis (Schutze et al. 2015), and a possible future incursion of B. dorsalis is the primary driver for continued surveillance at all first ports of entry (Dominiak 2020).
Within Australia, fruit fly threats vary from state to state. Bactrocera tryoni is endemic along the eastern Australian coast (Dominiak and Mapson 2017). Producers wishing to sell produce in areas free from B. tryoni (domestically or internationally) must demonstrate that their property is free from B. tryoni. One advantage of B. tryoni being endemic is that it appears to exclude C. capitata from establishing (Dominiak and Mapson 2017). Southeastern Australia is concerned that "northern" flies such as B. jarvisi and Z. cucumis may establish south of the Queensland border, and uses traps to monitor any range changes, possibly due to climate change (Sultana et al. 2017, 2020, Dominiak 2020, Simpson et al. 2020).

Northern Queensland fruit flies, such as the mango fruit fly Bactrocera frauenfeldi (Schiner) (Royer et al. 2016) and B. aquilonis (May), have not been detected in NSW or NZ and are unlikely to pose a threat to these areas under current climates. Cucumber fruit fly, Z. cucumis, is a hot-climate pest and until recently has not persisted south of Queensland's boundary with NSW (Fay et al. 2022).

Threats to Hawaii

Hawaii has long been severely impacted by tephritids of economic importance, starting with the introduction of melon fly (Zeugodacus cucurbitae Coquillett) around 1895, followed by C. capitata in 1910. These 2 species had a significant impact on the diversified agriculture of the islands, as evidenced by the establishment of a "fruit fly investigations" laboratory in Honolulu by the United States Department of Agriculture in the first decade of the 20th century. Bactrocera dorsalis was accidentally introduced in 1945 and has remained one of the most serious fruit pests in the state. Additional responses included extensive efforts in biological control, with an unprecedented 32 braconid wasp species introduced to Hawaii between 1947 and 1952 specifically targeting invasive tephritids (Vargas, Leblanc, et al. 2012). Still, incursions and establishments of fruit flies have continued. Solanaceous or Malaysian fruit fly (Bactrocera latifrons [Hendel]) was detected in 1983. More recently, olive fruit fly (Bactrocera oleae [Rossi]) was detected for the first time on Hawai'i and Maui islands in 2019 (Matsunaga et al. 2019). Internal quarantines against many of these pests slowed but did not halt their spread, and, except for the most recent arrival, these species have now infested all the major Hawaiian Islands. Hawaii implemented a successful area-wide pest management program against these pests (Vargas et al. 2008), but this could be threatened by further fruit fly incursions.

Hawaii has a very high volume of visitors, peaking at over 10 million in 2019 (Hawaii Tourism Authority 2022). Since air travel is a significant pathway for fruit fly invasion (Liebhold et al. 2006), Hawaii poses a significant threat of fruit fly spread to the mainland USA and other areas. Hawaii is thought to be the source of the C. capitata incursion in NZ in the 1990s.

Surveillance trapping for additional species that threaten Hawaii, such as Zeugodacus tau or Bactrocera zonata (Saunders) (species that respond to the male lures cuelure and methyl eugenol, respectively), is complicated by the fact that there are already established populations of cuelure and methyl eugenol responders (as well as Mediterranean fruit fly, which responds to trimedlure). Due to high population sizes, traps baited with male lures are quickly overwhelmed with hundreds or thousands of individuals of the established species.
Threats to California

California has B. oleae established (Rice et al. 2003), but the permanent presence of breeding populations of other species of economic importance is not generally accepted. There are regular detections of multiple species of fruit flies in California, including B. dorsalis, C. capitata, Mexican fruit fly (Anastrepha ludens Loew), Z. cucurbitae, and peach fruit fly (B. zonata [Saunders]) (Fig. 2). This has led some authors to suggest that some of these species may be widely established in California at sub-detectable levels (e.g., Papadopoulos et al. 2013). However, most researchers and authorities consider this to indicate frequent incursions that fail to persist due to trapping and response (McInnis et al. 2017). Clearly, propagule pressure is very high (Liebhold et al. 2006), and California's border biosecurity measures for fruit flies are less stringent than those of Australia and NZ.

California is the major producer of fresh fruit in the United States of America, and the establishment of a species such as C. capitata or B. dorsalis would cause billions of dollars of economic damage in the first years alone (Suckling et al. 2016). For this reason, the state maintains a robust surveillance network based on adult trapping to guard against spread following incursions. Trapping and detection procedures are comprehensively detailed by the California Department of Food and Agriculture (Gilbert et al. 2013), and the state operates a network of approximately 95,000 traps against tephritids (Quilici and Donner 2012). Since the mid-1990s, California has also operated a preventative release program of sterile C. capitata throughout the greater Los Angeles area to help reduce the number of incursions that establish (Barry et al. 2004).

Changing Risk Factors

Changes in the international distributions of fruit fly species, together with shifting import and tourism patterns, mean that the risk profiles of particular fruit fly species are constantly changing. For example, C. capitata was a very common interception in Australian fruit imports to NZ a century ago (New Zealand Department of Agriculture 1907) but is now confined to Western Australia (Dominiak and Daniels 2012) and poses a lower risk of introduction to NZ. Similarly, the single oriental fruit fly (as B. papayae) trapped in Auckland in 1996 likely originated from the large infestation in Queensland which was later eradicated, so oriental fruit fly is no longer a threat from this region. Conversely, the geographic range of B. tryoni has increased in recent years, arguably due at least in part to climate change (Dominiak and Mapson 2017, Simpson et al. 2020). There is some indication that Bactrocera bryoniae (Tryon) extended its range in 2020/2021 to become established in Sydney (Dominiak and Millynn 2022). Another factor that can change risk calculations is shifts in the dominant fruit fly species in source regions due to subsequent establishments. Complex shifts in the abundance and frequency of established exotic fruit fly pests have been observed on La Réunion island with subsequent introductions of other species (Charlery De La Masselière et al. 2017). The dependence of introduction risk on the fruit fly status of nearby states and trading partners (Deschepper et al. 2021) highlights the need for states to work together to mitigate fruit fly threats.

Local climate at the destination of incursions is another key component of risk (Stephens et al. 2016), including the effects of artificial irrigation (e.g., Szyniszewska et al. 2020).
In Australia, modeling has identified that the optimal climatic niche for B. tryoni is moving south, primarily along the coast (Sultana et al. 2017, Simpson et al. 2020). Parts of Victoria are predicted to become increasingly suitable for B. tryoni, and productive tablelands to become increasingly susceptible to B. tryoni establishment. As winters become warmer, more B. tryoni adults are predicted to survive winter, and flight-threshold temperatures and successful matings are likely to occur earlier each spring. A lengthened and warmer season may result in more generations per year and increased population sizes (Simpson et al. 2020). Shifts in rainfall patterns will also affect B. tryoni populations. The 2010/2011 season was the wettest 2-year period in NSW's history and resulted in widespread B. tryoni outbreaks in the Victoria and NSW Fruit Fly Exclusion Zone (FFEZ). Despite considerable expenditure of funds and resources, it became clear that fruit fly freedom was no longer technically feasible or economically sustainable there, and the NSW portion of the FFEZ was closed in 2013 (Dominiak and Mapson 2017). Similarly, NZ and Tasmania can no longer rely on past environmental conditions to limit fruit fly establishment, nor to guarantee freedom in the case of incursions. However, now that B. tryoni is endemic throughout eastern Australia, there may be relatively little further increase in propagule pressure on NZ from its range increase.

Climate may play an additional significant role in mediating competitive interactions between invasive fruit flies. When B. dorsalis invaded Hawaii, it displaced C. capitata throughout much of its range (Duyck, David, et al. 2004, Vargas et al. 2016), especially at lower elevations. However, C. capitata remained dominant in areas without high numbers of B. dorsalis hosts (Vargas et al. 1995). Similar patterns have been seen elsewhere in the world (e.g., Ekesi et al. 2009), often with dominance by B. dorsalis (Duyck, David, et al. 2004) or B. tryoni (Dominiak and Mapson 2017).

Parapheromones

Mature males of many Tephritidae will respond to specific parapheromone attractants (Sivinski and Calkins 1986), and these lures have become the mainstay of most fruit fly surveillance programs worldwide (Quilici and Donner 2012). The operational significance of 3 particular lures (trimedlure, cuelure, and methyl eugenol) is such that fruit flies are typically grouped by the lure they respond to. Trimedlure is attractive to Ceratitis species including C. capitata, while cuelure attracts B. tryoni, Z. cucurbitae, and at least 55 other crop-damaging fruit fly species (Drew 1974). Methyl eugenol is strongly attractive to B. dorsalis and at least 22 other pest species (Drew 1974). A fourth group, including Z. cucumis, B. jarvisi, B. oleae, and the Anastrepha species, is not strongly attracted to any of the 3 main male lures but may be detected with generic attractants, typically food lures (Martinez et al. 2007).

Researchers continue to investigate new parapheromone lures with greater surveillance potential, such as ceralure (Jang et al. 2010), raspberry ketone trifluoroacetate (Siderhurst et al. 2016), zingerone (Wee et al. 2018), anisyl acetone (Royer et al. 2020), and α-copaene (Shelly et al. 2023). In some cases, more effective alternatives are not adopted because of their higher production costs (e.g., Jang et al. 2010) or the need for additional traps in existing networks. For example, in Queensland zingerone is more effective than cuelure for B. jarvisi (Fay 2012, Royer et al. 2017) but is less effective than cuelure for B. neohumeralis and B. tryoni (Fay 2012).
Deploying both lures would substantially increase overall costs, but zingerone traps might be used specifically in mango orchards where B. jarvisi thrives (Dominiak and Worsley 2017).

Traps individually baited with trimedlure, cuelure, and methyl eugenol characterize the fruit fly surveillance networks in NZ (MacLellan et al. 2021), Australia (Dominiak et al. 2003), California (Gilbert et al. 2013), and elsewhere (Quilici and Donner 2012) (Table 1). Lure dispensers are a key component of these traps, with a choice of wicks, plugs, and wafers. In Australia, cotton wicks are usually used as dispensers to carry both toxicants and male lures (Dominiak et al. 2011, Anderson et al. 2017). New Zealand recently changed from plugs to wafers in response to a study that found that wafers emit higher concentrations of attractants, last longer, and may catch more fruit flies than plug dispensers (Suckling et al. 2008). Improvements to the polymers used in plugs and wafers continue apace (e.g., Kuzmich et al. 2021).

Appropriate baiting intervals depend on local climatic conditions. For example, the conversion of cuelure to raspberry ketone (the active attractant) is accelerated in moist conditions (Metcalf 1990) and possibly retarded in dry inland Australia. Similarly, baiting intervals in California are varied to partially account for climate (Gilbert et al. 2013). New Zealand refreshes cuelure and methyl eugenol lures every 12 weeks, and trimedlure every 8 weeks. Australian researchers found that cuelure can be loaded at a higher concentration to achieve a longer replacement interval. Cuelure rates of up to 9 times the current standard (2 ml) were tested in male annihilation blocks without any repellent effect (Dominiak et al. 2009). In Queensland, the standard 2 ml of male attractant is used (Fay 2012), but NSW traps are baited with 4.4 ml of cuelure and are refreshed every 6 months (Dominiak and Nicol 2010). Wicks were made up at least 1 month before they were required in the field to allow the cuelure to begin breaking down to raspberry ketone, the attractive compound, and so avoid a lag in attractiveness. A simple calculation illustrating the trade-off between lure loading and replacement interval is sketched below.
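The sketch assumes simple first-order (exponential) lure loss; the loss rate and minimum effective dose are invented placeholders, not measured values from the studies cited above.

```python
# Hypothetical first-order lure depletion: m(t) = m0 * exp(-k * t).
# Placeholder numbers only; real loss rates depend on lure, dispenser, climate.
import math

def weeks_until_refresh(m0_ml: float, k_per_week: float, m_min_ml: float) -> float:
    """Weeks until the residual lure falls below the minimum effective dose."""
    return math.log(m0_ml / m_min_ml) / k_per_week

k = 0.12        # assumed fractional loss per week
m_min = 0.5     # assumed minimum effective dose (ml)
for m0 in (2.0, 4.4):   # standard vs. higher cuelure loading (ml)
    print(f"load {m0} ml -> refresh after ~{weeks_until_refresh(m0, k, m_min):.0f} weeks")
```

Under these assumptions the interval grows only logarithmically with the initial load, which is consistent with the idea that a heavier loading buys a longer, but not proportionally longer, replacement cycle.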
Food-Based Lures

Many fruit flies do not respond to the current male lures, including the Anastrepha fruit flies of the Americas. In addition, adults of responsive species that have fed on particular host fruits such as tropical almond may be less attracted to the standard lures than expected (Manoukis et al. 2018). Where such species are of concern, food baits may be used, and many detection programs pair parapheromone lure traps with food bait traps (Shelly et al. 2014). These are not species-specific or gender-specific, allowing the possible detection of an incursion by species that may not be under specific surveillance, or for which no parapheromone lure is available (Martinez et al. 2007, Epsky et al. 2011). Also, food-baited traps can detect sexually immature males and adult females that would not be attracted to parapheromone lures (Henneken et al. 2022).

Food-based lures are diverse. The principal ones used for tephritids are protein-based attractants such as torula yeast or NuLure (Epsky et al. 1993) and synthetic volatile blends such as BioLure, which contains 2 or 3 of the components ammonium acetate, triethylamine, and putrescine (Heath et al. 1995, 1997). California utilizes 4 different food baits in specific trap types, including torula yeast in glass McPhail traps (Piñero et al. 2020).

While most of our discussion concerns the use of food-based lures for trapping, experience in Hawaii shows the importance of protein baits for control measures. There, GF-120, a new commercial formulation of proteins with higher attraction than Nu-Lure, was heavily used together with male annihilation (cuelure and/or methyl eugenol) and field sanitation. In combination with biological control, the sterile insect technique (SIT), and other measures, protein baits were important for the effective control of fruit fly species of economic importance in Hawaii (Vargas et al. 2001, Prokopy et al. 2003, Stark et al. 2004).

Relative to parapheromone lures, protein-baited traps are considered to have a more limited attraction to fruit flies, but this may vary by species. For example, in California, where similar numbers of trimedlure and protein-baited traps are deployed (Quilici and Donner 2012), first detections of C. capitata occurred approximately as frequently in protein traps as in trimedlure traps (K. Hoffmann, pers. comm. 2012). In contrast, Dominiak and Nicol (2010) found that although protein-baited McPhail traps will catch both male and female B. tryoni, they were considerably less effective (about one-seventh) for males than cuelure in Lynfield traps. It seems that protein lures may be effective as in-canopy lures but may fail to attract flies from adjacent trees. For melon fly, Shelly and Manoukis (2018), working in Hawaii, found that only 3.6% of released flies were recaptured from a distance of 10 m in a food-baited (torula yeast) Multilure trap. Other studies found better results, such as an effective sampling range of 28 m for C. capitata in a mango orchard (Epsky et al. 2010), and 30 m for Anastrepha suspensa in guava (Kendra et al. 2010).

A second issue with liquid protein baits is that they are short-lived and labor-intensive (IAEA 2003, Dominiak 2006). Protein-baited traps may need replenishing twice weekly and take longer to service than parapheromone-based traps (Dominiak and Nicol 2010). In addition, fly samples can degrade in the liquid protein. More user-friendly protein gels are now available (Bain and Dominiak 2022). In southern Australia, the Biotrap (Biotrap 2023) with a protein gel lure performed as well as cuelure-baited Lynfield traps for catching B. tryoni (Bain and Dominiak 2022), which was unexpected. New Zealand is also evaluating the protein gel-based Biotrap (Voice and MacLellan 2020).

Protein-based traps may still suffer from excess bycatch, for example of blow flies, particularly in pastoral areas (Dominiak et al. 2003). The issues of low efficacy, frequent servicing, and excess bycatch currently preclude protein-baited traps from general use in NZ and Australia. However, these traps may still have value for trapping in the highest-risk urban areas and when no specific attractants exist for a given target species (Lasa et al. 2015).

Often, synthetic food-based lures such as BioLure are superior to torula-yeast protein lures (Gazit et al. 1998, Epsky et al. 1999, Katsoyannos et al. 1999, Papadopoulos et al. 2001), except for Bactrocera flies (Leblanc et al. 2010). For C. capitata in Australia, BioLure in Tephri or McPhail traps was superior to orange ammonia and liquid protein hydrolysate regardless of climate, tree type, or population level (Broughton and De Lima 2002). BioLure traps may catch more C. capitata than those baited with the parapheromone trimedlure (Epsky et al. 1999, Katsoyannos et al. 1999), depending on trap architecture (Broughton and Rahman 2017).
The effectiveness of this lure is sufficient for it to be used successfully for mass trapping in Spain and Israel (Cohen and Yuval 2000, Navarro-Llopis et al. 2008). The problem of non-target captures ("bycatch") inherent to food lures still exists with BioLure but is reduced compared with traditional protein, and can be minimized further by placement strategies (Mangan and Thomas 2014).

Fruit extract lures have been used in several situations. An orange juice lure was investigated in Australia (Dominiak and Nicol 2010) but was not adopted for broad-scale use. Orange juice or hydrolyzed protein can be used for a short time to help identify the epicenter of an outbreak (Bateman 1991). Grape juice lures were used for Anastrepha monitoring in Latin America (Robacker et al. 2011, Epsky et al. 2015, Herrera et al. 2016). These lures have the same disadvantages as other liquid-based lures and suffer from a low attraction range.

Combined Lures

Each of the 3 main fruit fly lures is currently delivered in separate traps spaced at least 3 m apart at any location, since early work suggested that combinations of the lures may depress trap efficacy (Hill 1986, R. Cunningham, pers. comm., cited by Cowley and Frampton 1989). However, trials in Hawaii with a combined triple lure containing trimedlure, methyl eugenol, and raspberry ketone (closely related to cuelure) together with the toxicant DDVP found no reduction in efficacy compared to single-lure traps for C. capitata, B. dorsalis, and Z. cucurbitae (Shelly et al. 2012, Vargas, Souder, et al. 2012). Stringer et al. (2019) found similar results, except that the catch of B. dorsalis was significantly reduced in traps baited with a combination of trimedlure, cuelure, and methyl eugenol compared to traps baited with methyl eugenol only. Shelly et al. (2016) concluded that the combination lure may be less effective than current practice for some species. Cuelure and trimedlure can be combined without loss of efficacy for key target species, but cuelure and methyl eugenol together may experience depressed catch (Royer and Mayer 2018). In NSW, the combination of cuelure and methyl eugenol had merit in the drier inland climate but was of debatable value in coastal Sydney (Dominiak et al. 2011). There is a need for further trials with certain lure combinations to confirm their suitability for early detection programs. Another possibility is to combine a food lure or its components with a parapheromone. Trials in Hawaii showed increased suppression of C. capitata when both trimedlure and BioLure were used (Vargas et al. 2018).

One reason for varying results between tests of combination lures may be the relative abundance of each of the target species. In parts of Hawaii, any trap baited with methyl eugenol quickly becomes overwhelmed with B. dorsalis males when this species is abundant. For a delta trap, this might mean other species such as Z. cucurbitae land on a thick layer of B. dorsalis, making it easy for them to fall out of the trap or escape. In addition, a large number of B. dorsalis might cause behavioral interference limiting the catch of other species (Manoukis et al. 2023).
Toxicants

Toxicants are used to prevent insect egress from dry, non-sticky tephritid traps. Significant differences between the toxicants used by different countries reflect different states of chemical registration and restriction, and the different trap architectures used. Nevertheless, some combinations have become de facto international standards, such as cuelure-baited Lynfield traps with malathion or dichlorvos (DDVP) used in Australia and NZ (Dominiak et al. 2003). Malathion is stable and remains active for up to 6 months (Dominiak and Nicol 2012). However, Queensland uses malathion at 500 g/L (Lloyd et al. 2010, Fay 2012) while NSW and Western Australia use 1,140 g/L (Dominiak et al. 2003). The trapping program in the Torres Strait uses malathion (Huxham 2004).

Generally, toxicants are not required for traps utilizing a sticky surface for the retention of smaller fruit flies such as C. capitata. However, a toxicant is important to improve trap efficacy for larger species such as Z. cucurbitae and even B. dorsalis (Vargas et al. 2009, Manoukis et al. 2023). California uses dibrom ("Naled", dimethyl 1,2-dibromo-2,2-dichloroethyl phosphate) in Jackson traps that target Bactrocera and Zeugodacus flies because these large species may be strong enough to escape from a sticky panel. In California, the move from malathion to dibrom was dictated by the public's reaction to an aerial application incident involving malathion, even though malathion is probably less toxic to mammals than dibrom (Haberman 2014).

Dibrom is not registered for any use in Australia or NZ. Instead, NZ utilizes another organophosphate, dichlorvos (DDVP). This was initially thought to have a repellent effect on fruit flies because traps using DDVP in Hawaii caught fewer flies than other traps (Vargas et al. 2003, Shelly et al. 2016). Alpha-cypermethrin performed as well as DDVP strips and may replace DDVP in NZ (Voice and MacLellan 2020). Manoukis (2016) found that fresh "hot" DDVP may kill some flies before they even enter a Jackson trap, mimicking repellency, but suggested the effect would likely be insignificant. The effects of vapor-borne toxicants like DDVP will depend partly on trap architecture, so toxicant trials should ideally use trap and lure configurations that match those in the country of intended use. DDVP was subsequently found to be no more repellent in Lynfield traps than the alternative toxicants bifenthrin and alpha-cypermethrin (Voice and MacLellan 2020). Surveillance trapping in Hawaii employs DDVP as a killing agent (Leblanc et al. 2012, 2014).

Another possible toxicant for use in fruit fly traps is spinetoram/spinosad (Reynolds et al. 2017). This reduced-risk insecticide was applied successfully against tephritids in Hawaii and other areas decades ago (Peck and McQuate 2000) and is used today as part of male annihilation in California and elsewhere (Vargas et al. 2014). Spinosad's relatively high rate of photodegradation (Tomkins et al. 1999) makes it suitable for bait sprays but may be problematic in extended-use traps with clear sides.
Health and safety considerations partly dictate which toxicants can be used in each country. The number of pesticides available for fruit fly activities continues to decline, and surveillance managers should not rely on any one toxicant (Dominiak and Ekman 2013). Ideally, whatever toxicants are used should be purchased pre-packaged to minimize handling hazards. Until 2017, NSW authorities were manufacturing their own wick/lure/toxicant combinations; they now purchase the entire trap unit, including lures and toxicants, pre-built (Biotrap 2023).

Trap Architecture

California uses Jackson traps as the main trap design. These comprise a delta trap with a sticky mat to collect insects (IAEA 2003). However, once the mat has accumulated one layer of insects, subsequent insects are not retained and populations will be underestimated. This is only a potential problem in high pest populations or when dust and similar debris fouls the sticky mat. To supplement these, California utilizes glass McPhail traps, ChamP traps, Pherocon AM (yellow sticky panel) traps impregnated with ammonium acetate and protein hydrolysate, and Multilure traps baited with BioLure (IAEA 2003, Quilici and Donner 2012, Gilbert et al. 2013). The heavy glass McPhail traps are favored in California for their stability during the region's strong offshore wind events (J. Leathers, pers. comm.) but are not commercially available, so the state has been transitioning to plastic Multilure traps. Monitoring in Hawaii, the other US state we focus on, has employed bucket traps for male lures and Multilure traps for wet protein lure (torula yeast) (Leblanc et al. 2014).

Fruit fly surveillance in the Torres Strait between Australia and Papua New Guinea uses Paton traps at permanent trapping sites. The lighter Steiner trap is still used when additional trapping is required (Huxham 2004). Fruit fly trapping in Papua New Guinea and the tropical north of Australia uses modified Steiner traps (Iamba et al. 2021). However, Jackson sticky traps were found to be twice as effective as the standard Steiner traps in Victoria, so Jackson traps became the standard for a decade in southeastern Australia (O'Loughlin et al. 1983). Subsequently, Lynfield traps were found to be more effective than Jackson traps for B. tryoni (Cowley et al. 1990). Lynfield traps consist of a 1-L cylindrical clear plastic pottle (120 mm in depth and diameter), a lid, and a lure dispenser (Cowley et al. 1990, Dominiak and Nicol 2010). Four 25-mm holes are drilled at equally spaced locations around the sides of the pottle to allow the lure vapor to exit the trap and insects to enter. An additional four 2-mm holes are drilled in the bottom for water drainage. Lure dispensers comprising cotton wicks (4 dental cotton rolls, each 10 mm × 40 mm, held together by a wire clamp) are suspended from the middle of the Lynfield trap lid. The wick hangs at about the same level as the ingress holes in the side wall of the trap. One advantage of Lynfield traps is their large capacity, which makes them a better option than Jackson traps in high fly populations. In addition, flies are loose and do not have to be removed from sticky mats.

Recently, NZ and Australia compared Lynfield traps with Biotrap and cone traps (Dominiak et al. 2019, Voice and MacLellan 2020, Bain and Dominiak 2022). The Biotrap was developed in Australia and has some design commonality with McPhail traps. Biotrap traps are popular in some regions (Bain and Dominiak 2022). Cone traps were developed in Spain for C. capitata surveillance.
They have a clear lid and yellow sides, exploiting the finding that various fruit fly species are attracted to yellow (Hill and Hooper 1984, Katsoyannos 1987). One problem with Lynfield traps is that trapped flies are drawn to the clear sides and may stumble out of the entrance holes before they die or contact the toxicant. In cone traps, the ingress holes have invaginations in the yellow wall, with a tunnel of about 1 cm helping to prevent accidental escape (Dominiak et al. 2019, Voice and MacLellan 2020). In addition, the clear lid draws trapped flies away from the entrance holes and up to the toxicant, which may be painted on or suspended from the lid. Dead flies fall to the bottom of the cone, where a clip-on trapdoor allows inspectors to collect them efficiently, even in windy conditions when specimens may blow out of an open Lynfield trap during sample collection. Administratively, cone traps are transported flat-packed with lids, potentially saving costs in distributing traps to surveillance areas. Lynfield bases do not flat-pack and require considerable space for storage or transport (Dominiak et al. 2019).

A wide range of modern commercial fruit fly traps have been designed for mass trapping, most of which might be considered variants of the Lynfield or earlier traps. Several studies have compared their performance (e.g., Lasa et al. 2014, Broughton and Rahman 2017, Dominiak et al. 2019, Bain and Dominiak 2022) and found that most performed well under different conditions; it is unlikely that any one trap will suit all circumstances. Therefore, the adoption of any particular trap design may be determined more by cost and convenience than by the relatively small differences in efficacy.

Seasonal Trap Deployment

States differ in the portion of the year in which they deploy fruit fly surveillance traps for the detection of new incursions (Fig. 3). In Australia, the Code of Practice for the Management of Queensland Fruit Fly (Department of Primary Industries 1996) specifies year-round trapping, based largely on the risk that fruit flies will be spread domestically at any time of year. Similarly, trapping is conducted year-round in southern California, except in Imperial County and the Coachella Valley at the height of summer (Gilbert et al. 2013). Further north in California, winter cold limits fruit fly persistence (De Villiers et al. 2016, Szyniszewska et al. 2020). In the San Francisco Bay area, trimedlure, methyl eugenol, and torula yeast traps are deployed from April to November, while cuelure traps are set out from June through October. These periods are shortened by a month on either end in other urban areas of northern California (Gilbert et al. 2013). Traps are deployed for 6 months in inland Northern California and on the Central Coast, but the exact timing varies to allow counties to take advantage of local knowledge of host fruit availability when placing traps.

In NZ, the fruit fly surveillance season has been adjusted several times in response to new knowledge and new technologies. Initially, traps were deployed year-round in the northern North Island and removed during winter elsewhere (Somerfield 1989). With the introduction of Lynfield traps, all locations north of Christchurch were trapped year-round and more southerly locations from September through April (Cowley et al. 1990). In 1999, all winter trapping was stopped when it was realized that it contributed little to fruit fly surveillance. More recently, Kean and Stringer (2019) modeled the effects of seasonal temperatures on trap catches of C. capitata, B. tryoni, and B. dorsalis in their native and invaded ranges, and used this to determine the optimal trapping periods for NZ locations. Similar results were obtained by considering the proportion of days on which air temperatures are above the threshold for male flight (Kean 2016). In response, the dates for starting and ending surveillance were adjusted, and fruit flies are now trapped from mid-September to mid-June in the north, with the season shortened by about a month on either end in the south (Fig. 3; MacLellan et al. 2021).
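The flight-threshold criterion reduces to a simple proportion that can be computed from a daily temperature series. The sketch below is illustrative only: the 12 °C threshold and the toy data are assumptions, not the species-specific values or observations used by Kean (2016).

```python
# Proportion of days warm enough for male flight, per month.
# Placeholder threshold and synthetic temperatures; a real analysis would use
# observed daily temperatures and a species-specific threshold (cf. Kean 2016).
from collections import defaultdict

FLIGHT_THRESHOLD_C = 12.0   # assumed, for illustration only

def monthly_flight_fraction(daily_temps: list[tuple[int, float]]) -> dict[int, float]:
    """daily_temps: (month, temperature) pairs -> fraction of days above threshold."""
    days = defaultdict(int)
    warm = defaultdict(int)
    for month, t in daily_temps:
        days[month] += 1
        warm[month] += t >= FLIGHT_THRESHOLD_C
    return {m: warm[m] / days[m] for m in days}

# Toy example: two months of fabricated daily means (summer vs. winter month).
temps = [(1, 18.0)] * 25 + [(1, 11.0)] * 6 + [(7, 9.0)] * 20 + [(7, 13.0)] * 11
print(monthly_flight_fraction(temps))   # e.g., {1: 0.81, 7: 0.35}
```

Months whose fraction falls below some operational cut-off would then be candidates for suspending trapping, which is the logic behind the seasonal adjustments described above.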
Hawaii has a mild tropical climate with little seasonal variability, so trapping efforts for surveillance generally need to be conducted year-round. Recent efforts have focused on year-round trapping in areas around ports of entry on the island of Oahu (Leblanc et al. 2014), but in the past there was island-wide trapping on Oahu year-round (Leblanc et al. 2012).

Relative Trapping Effort

Trap spacing practices vary considerably across the reviewed states (Table 2). Most states trap predominantly in urban areas, which are considered to have an elevated risk of entry and establishment due to human-vectored dispersal (Liebhold et al. 2006, Dominiak and Coombes 2009) and the availability of poorly managed backyard fruit trees. The details of many countries' exotic fruit fly trapping networks were reviewed by Quilici and Donner (2012), so here we briefly summarize and update their results.

New Zealand deploys around 3,500 cuelure-baited traps each year, the same number of trimedlure traps, and about 800 methyl eugenol traps (MacLellan et al. 2021). Cuelure and trimedlure traps are spaced at approximately 400-m intervals in a grid across areas with relatively high identified risk. Methyl eugenol traps are placed more sparsely, at 1,200-m intervals, reflecting their larger attraction radius. Kean (2017) estimated that these densities would give a high probability of detecting incipient populations before they reach 40-100 adult males (Fig. 4a), and such populations may be successfully eradicated (Suckling et al. 2016). Across NZ as a whole, and considering the estimated risks outside trapped areas, the current trapping program was estimated to give a 59% probability of detecting at least one of the first 100 C. capitata males present. The equivalent estimates for B. tryoni and B. dorsalis were 83% and 66%, respectively. By the time a new population of any of these species had produced 10,000 males, there was estimated to be a >99% chance of detection by trapping or passive surveillance (Kean 2017). Similarly, simulation models such as "TrapGrid" (Manoukis et al. 2014) can extrapolate from the decline in trap captures with distance (e.g., Manoukis et al. 2015, Manoukis and Gayle 2016) to estimate the temporal cumulative probability of detecting fruit fly populations in trapping grids with particular configurations (Fig. 4b; Fang et al. 2022).
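In the spirit of such TrapGrid-style calculations, the following Monte Carlo sketch estimates the chance that a square grid of traps catches at least one member of a small incursion, assuming each trap's nightly capture probability decays exponentially with distance from the fly. The parameters lam0, alpha, the fly count, and the trapping duration are invented illustrative values, not fitted parameters from Kean (2017) or Manoukis et al. (2014).

```python
# Toy TrapGrid-style Monte Carlo: probability that a square grid of traps
# detects a small incursion. lam0 (capture prob. at the trap) and alpha
# (distance decay) are invented illustrative values.
import math
import random

def detection_prob(spacing_m: float, n_flies: int = 100, nights: int = 14,
                   lam0: float = 0.05, alpha: float = 0.02,
                   trials: int = 2000) -> float:
    """Fraction of simulated incursions with at least one fly captured."""
    detected = 0
    for _ in range(trials):
        caught = False
        for _ in range(n_flies):
            # by symmetry, place each (stationary) fly uniformly in one grid cell
            fx, fy = random.uniform(0, spacing_m), random.uniform(0, spacing_m)
            d = min(math.hypot(fx - cx, fy - cy)           # nearest corner trap
                    for cx in (0.0, spacing_m) for cy in (0.0, spacing_m))
            p_night = lam0 * math.exp(-alpha * d)          # per-night capture
            if random.random() < 1 - (1 - p_night) ** nights:
                caught = True
                break
        detected += caught
    return detected / trials

for s in (400, 1200):   # cuelure/trimedlure vs. methyl eugenol spacings (m)
    print(f"{s}-m grid: P(detect) ~ {detection_prob(s):.2f}")
```

Even this toy version reproduces the qualitative trade-off discussed above: wider spacing is tolerable only for lures with a larger effective attraction radius (i.e., a smaller decay constant).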
Over all its states and territories, Australia deploys around 4,800 traps (equal numbers of trimedlure, cuelure, and methyl eugenol traps) around ports of entry. Cuelure and methyl eugenol traps are deployed across multiple sites in the Torres Strait between Cape York and Papua New Guinea (Quilici and Donner 2012). The individual states of Australia deploy an additional c. 25,000 fruit fly traps, usually at 400-m intervals in urban areas and 1,000-m intervals in rural areas (Table 2). For example, South Australia runs approximately 7,500 traps (a similar number as used in NZ) but added more than 18,000 traps during recent eradication efforts against B. tryoni and C. capitata (Department of Primary Industries and Regions 2021). Meanwhile, Tasmania declares freedom from fruit flies based on approximately 1,000 traps deployed annually in urban areas (Blake 2019).

Recently, NSW adopted a much sparser trapping network than was previously used, with traps deployed no closer than 5 km apart everywhere, including in urban areas. No exotic fruit flies have been detected in NSW in the last 20 years (Dominiak 2020), and risk management practices have improved markedly (e.g., Dominiak 2019, van Klinken et al. 2020). Current thinking is that fruit fly incursions into NSW are most likely to be linked to travelers moving small quantities of fruit, with a resultant low chance of establishment (Maelzer 1990, Dominiak and Coombes 2009). Furthermore, if C. capitata were to enter from Western Australia, it would likely be prevented from establishing by the entrenched endemic population of B. tryoni (Dominiak and Mapson 2017). These arguments gave the NSW authorities the confidence to markedly reduce their surveillance efforts for exotic fruit flies.

California employs at least 3 times as many fruit fly traps as Australia and NZ combined, reflecting the relatively high rates of entry and establishment there. Arrays of approximately 25,000 trimedlure, 20,000 cuelure, 20,000 methyl eugenol, and 27,500 food-based traps are deployed across urban areas, together with around 700 sticky panels for detecting Rhagoletis fruit flies (Quilici and Donner 2012). Similar numbers are used in Florida, while Texas targets Anastrepha species from Central America using food-based lures (Quilici and Donner 2012). In Hawaii, trapping with male lures was a key method to measure the effectiveness of the area-wide IPM program (Vargas et al. 2008). A 20-fold reduction of Z. cucurbitae catch in Waimea was a leading indicator of successful control measures. Similar results were seen elsewhere for a total of 653 farms state-wide (Vargas et al. 2008).

To compare trapping effort across states, it is useful to contextualize these numbers by risks and benefits. If propagule pressure is determined largely by human activities (Maelzer 1990, Dominiak and Coombes 2009), then traps per million people may indicate how different states perceive fruit fly propagule pressure (Fig. 5a). In these terms, NZ's cuelure trapping is high relative to other states, but may appropriately reflect the importance of detecting and excluding B. tryoni and other cuelure-responsive threats. Trimedlure trapping is similar between NZ and California but much lower than in Florida. New Zealand's methyl eugenol trapping is low compared to Australia, California, and Florida, where these traps are deployed at the same density as other lure types.
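The per-capita comparison is simple arithmetic and can be reproduced directly from the trap counts quoted above; the population figures below are rough round numbers assumed purely for illustration.

```python
# Traps per million people, from the trap counts quoted above and
# assumed round population figures (millions); illustration only.
trap_counts = {
    ("NZ", "cuelure"): 3500,
    ("NZ", "trimedlure"): 3500,
    ("NZ", "methyl eugenol"): 800,
    ("California", "trimedlure"): 25000,
    ("California", "cuelure"): 20000,
    ("California", "methyl eugenol"): 20000,
}
population_m = {"NZ": 5.0, "California": 39.0}   # assumed populations (millions)

for (state, lure), n in trap_counts.items():
    print(f"{state} {lure}: {n / population_m[state]:,.0f} traps per million people")
```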
2015). Texas's local threat of Anastrepha influx from Mexico is reflected in its relatively high investment in torula yeast traps, and Australia invests relatively heavily in cuelure and methyl eugenol in response to the local threats from Bactrocera species, particularly from Asia (Fig. 5b).

Local Trap Placement

New Zealand and California use host preference lists to prioritize trees in which to hang fruit fly traps (Gilbert et al. 2013), though in urban areas the choices can be limited. The plant species on which traps are suspended is of key importance in the early detection of C. capitata (Papadopoulos et al. 2001). We speculate that this is because the relatively short attraction radius of trimedlure (Manoukis et al. 2015) and the short dispersal flight distance of C. capitata (Dominiak 2012) give traps greater efficacy when placed in host trees with active C. capitata populations. A review of the ranked host list for C. capitata is now available (Dominiak and Taylor-Hukins 2022) to better inform trap placement.

In NSW, Mo et al. (2014) found that B. tryoni were most likely to be trapped in pome trees; apples are not a primary host but can readily be infested (Follett et al. 2021). Follett et al. (2021) created a host suitability index to rank the capacity of different hosts to support the fruit fly life cycle. Some hosts, such as guava (Psidium guajava L.), are known to be more suitable than others (Woods et al. 2005, Lloyd et al. 2010). Traps placed in these preferred hosts may be more likely to provide an early warning of increasing populations. A full review of ranked hosts for B. tryoni and B. dorsalis is yet to be published.

Trees bearing fruit slow fly movement between trees (Hendrichs and Hendrichs 1990, Dalby-Ball and Meats 2000), so the consensus is that traps should be placed in fruiting trees if available. De Lima et al. (2011) found benefits for detection in moving B. tryoni traps throughout the season to keep them in trees with mature fruit. In NZ, inspectors move fruit fly traps to trees with the most ripening fruit on the same property (MacLellan et al. 2021). Generally, fruit flies may be more likely to be found in urban areas rather than forests or orchards, because the diversity of fruit trees provides a higher likelihood that host fruit is available at a given time (Raghu et al. 2000). Almost universally, traps are hung at about 1.5 m above the ground (Dominiak et al. 2003, Royer et al. 2020, Iamba et al. 2021, MacLellan et al. 2021), as this is a convenient height for trap inspectors and often the widest part of a fruit tree canopy. Within a site, traps are placed at least 3 m apart to avoid the potential for interference. Trap sites in Australia typically contain 3 traps, each in a different tree and each with a different lure (Gillespie 2003, Dominiak 2020). Host trees must be about 4 m apart to minimize interference between lure plumes (Hill 1986). Similar protocols are followed in NZ (MacLellan et al. 2021) and California (Gilbert et al. 2013).

Trap Inspections and Servicing

Generally, trap inspections everywhere are conducted fortnightly, except in sensitive states under high propagule pressure or in an emergency response procedure. The Australian mainland Code of Practice specifies weekly inspections except during the winter months of June to October, when fortnightly trap inspections are used (Bateman 1991, Dominiak et al. 2003). Some food-based lures may degrade rapidly and require weekly servicing (Gilbert et al.
2013). New Zealand's wafer lures are replaced every 12 weeks for cuelure and methyl eugenol, or every 8 weeks for trimedlure. The change from plug dispensers to wafers was informed by locally conducted lure degradation studies that suggested these practices were adequate (Suckling et al. 2008). Cuelure wicks are refreshed only every 6 months in NSW, because parallel work on the Male Annihilation Technique suggested that residual lure and toxicant from the initial dose of 4.4 ml of cuelure would still be above the minimum standard for effective attraction (Dominiak et al. 2011). Where DDVP toxicant is used in Australia and NZ, these strips are replaced every 2 months. In California, 2-g trimedlure gel plugs are replaced every 4 weeks (in summer) to 12 weeks (in winter), in accordance with temperature-based degradation curves (Gilbert et al. 2013).

In California, inspectors replace the sticky inserts in Jackson traps monthly, or more often as needed (Gilbert et al. 2013). Suspected exotic specimens are not removed from sticky surfaces but are submitted as-is to a diagnostic laboratory for identification. Where traps with toxicants are used, inspectors remove the individual dead insects for identification. For example, NZ inspectors submit all fly-like specimens within a size range that encompasses the fruit flies of concern (MacLellan et al. 2021). Australia, NZ, and California all use audit flies to seed traps and test the entire trap retention, detection, and reporting system. Audit intervals vary across countries.

Data Capture, Analysis, and Review

A range of integrated technologies are used to record and map trap locations, capture digital trap records, track specimens, audit and summarize data, and manage notifications (Schellhorn and Jones 2021). For example, the California Department of Food and Agriculture created a data collection system, CalTrap, that is customized specifically for the state's requirements (California Department of Food and Agriculture 2021). In Australia, Victoria developed Trapbase, a database built from SharePoint lists and a mobile application, enabling automatic reporting to the Commonwealth government. New South Wales has recently adopted Trapbase, dropping its own bespoke PestMon digital system (Dominiak et al. 2007). South Australia and Western Australia have also transitioned to Trapbase, and other states are evaluating the system. New Zealand's fruit fly trapping data are digitally collected and curated by an operational contractor.

It is likely that automated trapping, meaning remote detection and/or identification of a catch (Potamitis et al. 2017), will soon be operationalized for fruit fly surveillance. In November 2022, NZ deployed 60 RapidAIM automated traps targeting B. tryoni (Ministry for Primary Industries 2022). This particular trap uses a capacitance sensor to identify any insect entering the trap, but other solutions may employ optical imagery, wing-beat frequency, or the amount of electric current required to surround and kill the insect (Schellhorn and Jones 2021). Generally, these automated systems also deliver real-time reporting, and in areas where target fruit flies are rare or absent this may alleviate the need for manual trap inspections.
This is a rapidly developing area, and such tools will continue to decline in price and improve in accuracy. However, widespread operational use will take time, as any significant change from current practice would need approval by international trade partners, a process that can take several years. Meanwhile, such technologies might be incorporated into domestic trade, perhaps as part of a systems approach (Dominiak 2019, van Klinken et al. 2020).

Discussion

Our review highlights the diversity of approaches to exotic fruit fly surveillance trapping currently conducted across 4 Pacific states. The choice of trap, lure, dispenser, toxicant, and bait concentration may be partly dictated by local factors such as the target species, available (registered) chemicals, and climate. These choices help determine the efficacy of traps and the optimal trap deployment in space and time, though experiments and modeling are only recently starting to address this in a systematic way. Currently, most states follow similar practices around trap inspection, servicing, and data handling, but these processes are likely to be disrupted by emerging automated trap technologies. Ultimately, different practices can be traced back to the unique fruit fly risk profiles faced by each state, particularly the suite of fruit flies already present and those that threaten from nearby.

States which are free from economically damaging fruit flies, such as NZ, South Australia, and Tasmania, have an important advantage in being able to use specific parapheromone lures to minimize bycatch and facilitate rapid diagnosis of trapped specimens. In contrast, fruit fly endemic areas such as eastern Australia and Hawaii trap considerable volumes of bycatch, including non-economic fruit flies. For instance, NSW has many endemic non-economic tephritids, adding to bycatch and identification service costs (Dominiak 2020). In Hawaii, methyl eugenol surveillance traps would be rapidly overwhelmed by local B. dorsalis, and cuelure-baited traps by Z. cucurbitae in many areas. Where abundant bycatch may saturate traps, this can influence the choice of trap architecture and dictate servicing intervals.

The situation in Hawaii is complicated not only by the limited usability of parapheromone lures for surveillance trapping, due to large standing populations of pestiferous tephritids that tend to overwhelm traps, but also by its geographic configuration. The most recent surveillance effort in the state has run since 2006, and since 2009 this has focused on ports of entry on the island of Oahu (Leblanc et al. 2012, 2014). This is justified by Oahu being the most heavily populated of the Hawaiian islands (with > 70% of the state's residents) and receiving the bulk of domestic and international flights. However, passengers to Oahu often transit to other islands with more agricultural land. New invasive fruit flies might therefore not necessarily be detected outside the international airport in Honolulu (Oahu), and the impact of an establishment might be greater on other islands.

The most recent surveillance effort led by USDA-APHIS on Oahu (2006-2023) was extensive for that island. Approximately 350 sites were sampled about 17,500 times per year from 2009 to 2013, yielding a total capture of 8.5 million flies from the 4 established species (olive fruit fly was not established during that time). Over the whole period starting in 2006, a single exotic fruit fly, a Bactrocera albistrigata (de Meijere) male, was detected on Oahu in 2017 (T. Shelly, pers.
com.). It is difficult to conclude that this is due to low propagule pressure in Hawaii; the reasons for such a low rate of detection of exotic fruit flies, in the face of large passenger volumes, are difficult to divine.

Traps using food-based lures are particularly prone to non-tephritid bycatch (Leblanc et al. 2010). In addition, the lures may be difficult to handle (Dominiak 2006), though new gel-based formulations may solve many of the issues associated with liquid protein baits (Bain and Dominiak 2022). Generally, food-based lures are less attractive than parapheromones, but their use is justified in areas such as California, Texas, and Florida, which face significant threats from species that do not respond to the current suite of parapheromones. Also, food-based lures may complement parapheromones by targeting adult females (Henneken et al. 2022), and in this way may even perform as well as trimedlure, the weakest of the 3 standard parapheromones, under some conditions (Epsky et al. 1999).

All reviewed states utilize the parapheromones trimedlure, cuelure, and methyl eugenol (Table 1), though only NZ varies the density of traps to reflect the relative efficacies of these lures (Table 2). A better understanding of the factors influencing the effective sampling area of fruit fly traps (e.g., Manoukis et al. 2015) would improve the interpretation of trap catches (Manoukis et al. 2014). This knowledge might allow spatial trap and lure deployment to be optimized to achieve surveillance sensitivity targets (Kean 2017). There is considerable current research to develop new parapheromones that may ultimately widen the range of species that can be effectively trapped, but much of this work is motivated by local threats (e.g., Wee et al. 2018), so it may further increase the diversity of fruit fly surveillance approaches used across states.

The reliance on particular toxicants, such as DDVP and other organophosphates, is a potential vulnerability for some tephritid trapping systems. Although a range of toxicants are used internationally, not all have been registered for use in particular countries, leaving some trapping systems without fit-for-purpose alternatives if DDVP or similar chemicals become unavailable. Some new automated trap types will not require toxicants (Schellhorn and Jones 2021), and this is just one of many ways that such technologies are likely to disrupt current fruit fly surveillance practices in the near future.

The diversity of fruit fly risks and surveillance approaches used across countries in the Pacific region makes it clear that a "one-size-fits-all" approach would not be appropriate. Every state has tailored its approach to address its own individual risks and circumstances, including the species threats, propagule pressure, climate, existing fruit fly fauna, and consequences of a new exotic establishment. Current systems are not perfect, as evidenced by establishments in Hawaii and California (Fig. 2), but systems continue to evolve and improve within the time frames dictated by international trade assurances.
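Surveillance sensitivity targets of the kind discussed above (Kean 2017) are often expressed as the probability of detecting at least one fly before an incipient population reaches a given size. The following is a minimal sketch of that logic, assuming, as a simplification of the published models, that each male fly is an independent detection trial with a constant per-male trapping probability; the function names and the binomial form are illustrative, not Kean's actual model.

```python
def p_detect_at_least_one(n_males: int, p_per_male: float) -> float:
    """Probability that at least one of n_males is trapped, assuming each
    male is detected independently with probability p_per_male."""
    return 1.0 - (1.0 - p_per_male) ** n_males

def per_male_p_from_sensitivity(n_males: int, sensitivity: float) -> float:
    """Back out the per-male detection probability implied by a reported
    population-level sensitivity, e.g., the 59% estimated chance of
    detecting at least one of the first 100 C. capitata males."""
    return 1.0 - (1.0 - sensitivity) ** (1.0 / n_males)

p_male = per_male_p_from_sensitivity(100, 0.59)
print(f"Implied per-male detection probability: {p_male:.4f}")  # ~0.0089
# Consistent with the figures quoted earlier in this review, detection is
# near-certain by the time such a population has produced 10,000 males:
print(f"P(detect at 10,000 males): {p_detect_at_least_one(10_000, p_male):.4f}")
```

Spatially explicit tools such as TrapGrid go further by letting the per-fly capture probability decline with distance from each trap, which is why trap spacing and effective sampling area enter the calculation.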
The main challenge for the future may be whether international trade assurances can keep pace with the accelerating need for changes in fruit fly surveillance systems resulting from new lures, climate change effects on lure degradation, automated traps, and shifting risk profiles as invasive fruit flies continue to spread internationally. Regulators need evidence-based and biologically informed surveillance strategies for fruit flies, underpinned by an understanding of the diversity of international practices.

Fig. 1. Main factors contributing to the probability of early detection of a fruit fly infestation.

Fig. 2. Summary of the main fruit flies of economic significance present in each of 4 Pacific states. Dates indicate when each species invaded and became established. Species listed in grey have a history of post-border detection, but authorities accept that these have failed to establish or have been eradicated.

Fig. 3. Comparison of fruit fly detection trapping seasons in different regions, from midwinter to midwinter.

Fig. 4. Cumulative probability of detection for different species/lure combinations at 2 different trap spacings. a) Probability of detection with population size (Kean 2017). b) Mean detection over time from 250 simulations, each of 200 trappable flies, from the TrapGrid model (Manoukis et al. 2014).

Fig. 5. Comparison of the number of fruit fly surveillance traps deployed in 2 different states, relative to a) human population size and b) value of fresh fruit exports. Data are updated from Quilici and Donner (2012) and national statistics authorities.

Table 1. Summary of trap architecture, lure, and toxicant combinations primarily used by different states for early detection of exotic fruit flies (columns: Country/state, Trap design, Lure, Toxicant/fly retention). Examples include Pherocon AM (yellow sticky panel) traps impregnated with ammonium acetate and protein hydrolysate, and Multilure traps baited with BioLure (Quilici and Donner 2012, Gilbert et al. 2013). The mechanisms of attraction and their dependence on insect physiological state, environment, and other factors remain difficult to elucidate.
Episiotomy and obstetric outcomes among women living with type 3 female genital mutilation: a secondary analysis

Background To investigate the association between type of episiotomy and obstetric outcomes among 6,187 women with type 3 Female Genital Mutilation (FGM). Methods We conducted a secondary analysis of women presenting in labor to 28 obstetric centres in Burkina Faso, Ghana, Kenya, Nigeria, Senegal and Sudan between November 2001 and March 2003. Data were analysed using cross tabulations and multivariable logistic regression to determine if type of episiotomy by FGM classification had a significant impact on key maternal outcomes. Our main outcome measures were anal sphincter tears, intrapartum blood loss requiring an intervention, and postpartum haemorrhage. Results Type of episiotomy performed varied significantly by FGM status. Among women without FGM, the most common type of episiotomy performed was posterior lateral (25.4%). The prevalence of the most extensive type of episiotomy, anterior and posterior lateral episiotomy performed concurrently, increased with type of FGM: among women without FGM, 0.4% had this type of episiotomy; this increased to 0.6% for women with FGM Types 1, 2 or 4, and to 54.6% of all women delivering vaginally with FGM Type 3. After adjustment, women with an anterior episiotomy (AOR = 0.15; 95% CI 0.06–0.40), a posterior lateral episiotomy (AOR = 0.68; 95% CI 0.50–0.94), or both anterior and posterior lateral episiotomies performed concurrently (AOR = 0.21; 95% CI 0.12–0.36) were all significantly less likely to have anal sphincter tears compared to women without episiotomies. Women with anterior episiotomy (AOR = 0.08; 95% CI 0.02–0.24), posterior lateral episiotomy (AOR = 0.17; 95% CI 0.05–0.52), and the combination of the two (AOR = 0.04; 95% CI 0.01–0.11) were significantly less likely to have postpartum haemorrhage compared with women who had no episiotomy. Conclusions Among women living with FGM Type 3, episiotomies were protective against anal sphincter tears and postpartum haemorrhage. Further clinical research is needed to guide practice on when episiotomies should be performed.

Plain English summary Female genital mutilation (FGM) encompasses a range of procedures that damage and change women's external genitalia. More than 200 million girls and women have been subjected to FGM, and an estimated three million girls are at risk every year. FGM has significant effects on women's health, especially during pregnancy and delivery. There is very little information available to help health care providers deliver evidence-based care for women living with FGM and minimize obstetric risks. We looked at how episiotomy, an incision to extend the vaginal opening during birth, varied by FGM status. We also looked at whether type of episiotomy improved maternal health outcomes. We found that women living with FGM were more likely to have the most extensive types of episiotomies performed. Our findings suggest that anterior episiotomy, to release scar tissue, may reduce some obstetrical risk among women with the most extensive type of FGM. We need more information to help women and providers decide on the best time to perform defibulation during pregnancy.

Background Female Genital Mutilation (FGM) includes a range of procedures involving partial or total removal of the external female genitalia for non-therapeutic reasons [1]. The World Health Organization (WHO) has defined four types of FGM (Table 1).
Table 1. WHO classification of Female Genital Mutilation
Type I: Partial or total removal of the clitoris(a) and/or the prepuce (clitoridectomy).
  Type Ia: Removal of the clitoral hood or prepuce only.
  Type Ib: Removal of the clitoris(a) with the prepuce.
Type II: Partial or total removal of the clitoris(a) and the labia minora, with or without excision of the labia majora (excision).
  Type IIa: Removal of the labia minora only.
  Type IIb: Partial or total removal of the clitoris(a) and the labia minora.
  Type IIc: Partial or total removal of the clitoris(a), the labia minora and the labia majora.
Type III: Narrowing of the vaginal orifice with creation of a covering seal by cutting and apposition of the labia minora and/or the labia majora, with or without excision of the clitoris (infibulation).
  Type IIIa: Removal and apposition of the labia minora.
  Type IIIb: Removal and apposition of the labia majora.
Type IV (Unclassified): All other harmful procedures to the female genitalia for non-medical purposes, for example, pricking, piercing, incising, scraping and cauterisation.
(a) When total removal of the clitoris is reported, it refers to the total removal of the glans of the clitoris.

The procedures performed vary by country, and range from partial or total removal of the clitoris (Type 1) to narrowing of the vaginal opening by the removal and suturing of the labia (Type 3). Type 4 consists of all other harmful procedures to the female genitalia for non-medical purposes, for example, pricking, piercing, incising, scraping and cauterisation.

The impact of FGM on obstetric outcomes has been investigated in several studies [2–4]. Compared to women without FGM, women with FGM have an increased risk of episiotomy, caesarean delivery, haemorrhage, extended maternal hospital stay, infant resuscitation, and inpatient perinatal death [3]. The risk of adverse obstetric outcomes varies by FGM type, with the most extensive forms of FGM being associated with the greatest risk [3,5]. Women with Type 3 FGM have been shown to have increased risk of episiotomy, caesarean delivery, postpartum haemorrhage and stillbirth [3].

There is an urgent need for evidence on how to minimize the negative perinatal consequences for women living with FGM [6,7]. The majority of existing recommendations for obstetric practice in this population are based on expert opinion [6]. New guidelines from the WHO examine the evidence for optimizing the health care management of women living with FGM [8]. Topics included reflect a broad range of health care needs, including female sexual health, mental health, information and education needs for women and providers, as well as defibulation. Improved data to guide defibulation practices was identified as a research priority by the WHO.

The scar tissue from FGM, in particular with Type 3, narrows the vaginal introitus and is thought to increase the risk of obstructed labour and extensive perineal lacerations [9,10]. Prolonged labour is a risk factor for postpartum haemorrhage [11]. Anterior episiotomy (or defibulation) to release the scar tissue is commonly performed, but when a circumcised woman presents in labour, the optimal type of episiotomy and the time to perform it are not known. Performing the procedure early in labour requires anesthesia, and may increase the risk of intrapartum bleeding, as the incision would be irritated by subsequent cervical exams [9]. Delaying the procedure until immediately prior to delivery may increase the risk of postpartum haemorrhage due to obstructed labour.

Episiotomy is the surgical enlargement of the vaginal opening by a perineal incision [5,12]. Seven different types of episiotomies are reported in the literature, although only anterior, mediolateral and midline posterior are commonly used [13]; these are also the types typically performed among women without FGM. A posterior lateral episiotomy may also be referred to as a "J-shaped" episiotomy [13]. Anterior episiotomy, or defibulation, is the opening of the scar associated with FGM, most commonly used with women living with FGM Type 3 [13]. It is frequently performed during labour, to allow for cervical exams and to prevent obstructed labour [14,15]. Anterior episiotomy may be performed alone, or in combination with midline posterior or posterior lateral episiotomies. A provider may also choose to perform only a midline posterior or posterior lateral episiotomy, to avoid incising the scar tissue anteriorly. The decision of what type of episiotomy to perform is typically based on provider training and preference.
Episiotomy is not without risks: it is associated with increased risk of pain, perineal trauma (extensive lacerations), need for suturing, and healing complications [12]. It is likely that the more extensive the episiotomy performed, the greater the risk of maternal harm.

There is scant evidence to guide episiotomy practice among women living with FGM [6,16]. All existing guidelines on episiotomy practice and FGM are based on expert opinion. The Royal College of Obstetricians and Gynaecologists recommends that intrapartum episiotomy in women with FGM be performed if inelastic scar tissue prevents progress. In general, existing guidelines advise a low threshold for performing episiotomy, despite the absence of studies on the real benefits of episiotomy with each type of FGM [6,17]. No evidence exists to guide the type or timing of episiotomy to perform.

The objective of this study is to investigate the association between type of episiotomy and obstetric outcomes among women living with FGM Type 3. We examine whether episiotomy improves maternal outcomes including anal sphincter tears, intrapartum blood loss requiring intervention, and postpartum haemorrhage.

Methods The WHO previously conducted an international, multicentre study examining obstetric outcomes in women by FGM status. The cohort contained women without FGM, as well as women with FGM, categorized by the WHO classification system. Previous papers have reported on the risks of different obstetric outcomes for both the woman and the neonate, as well as estimated costs to the health system [3,18]. In this sub-analysis, we focus on the association between type of episiotomy and maternal outcomes in women with FGM Type 3.

Women who presented for singleton delivery at 28 obstetric centres in Burkina Faso, Ghana, Kenya, Nigeria, Senegal and Sudan between November 2001 and March 2003 were screened for study eligibility. Women with multiple gestations, or presenting for elective caesarean delivery or in advanced labour (unable to complete a pelvic exam prior to delivery), were excluded from the study, along with women who were unwilling or unable to give informed consent. Women and their infants were then followed until the time of maternal discharge from the hospital. All participants provided informed consent prior to enrolment.
Institutional review boards at all participating hospitals and the World Health Organization (WHO) Secretariat Committee on Research Involving Human Subjects gave ethics approval.

We used descriptive statistics and bivariate measures of association to describe the study population and the population of women by type of FGM. Bivariate and multivariable logistic regression models were used to examine the association of type of episiotomy and maternal outcomes (anal sphincter tears, intrapartum blood loss requiring intervention, and postpartum haemorrhage) among women with type 3 FGM.

Study population We included only women having a vaginal delivery; this included normal vaginal delivery, assisted operative delivery (forceps or vacuum) and assisted breech delivery. Women giving birth by caesarean were excluded. Participants had an antepartum examination of the external genitalia, by a trained study midwife, to determine whether or not they had undergone FGM. If they had FGM, the type was categorized according to the WHO classification system (Table 1). The pelvic exam also included an assessment of outlet obstruction: the dimension of the introitus was evaluated by fingerbreadths. For the analysis of the association between episiotomy and maternal health outcomes, we limited our sample to women who were living with FGM Type 3 with data on episiotomy status.

Study variables Our key independent variable for analysis was episiotomy type. If an episiotomy was performed, the study investigator recorded the type. Episiotomy was classified as follows: no episiotomy, anterior (deinfibulation), posterior lateral, and anterior with simultaneous posterior lateral episiotomy. The dimension of the introitus was assessed by fingerbreadths and coded as one, two, three, or more than three fingerbreadths. For the multivariable models, we included the following demographic characteristics of the woman: her age, place of residence (urban/rural), socioeconomic status (low, medium, high) and level of education. Three maternal health outcomes served as our dependent variables: anal sphincter tears, intrapartum blood loss requiring an intervention, and postpartum haemorrhage. Degree of tear was included as a dichotomous variable, comparing more extensive lacerations (anal sphincter tears, also called 3rd and 4th degree obstetric tears) with no tear or 1st or 2nd degree tears. Intrapartum blood loss was dichotomized comparing women who required an intervention (e.g., uterotonics, dilation and curettage, transfusion) to those who did not. Postpartum haemorrhage, blood loss occurring within 24 h of delivery, was coded as a binary variable using the standard threshold of exceeding 500 ml [11].

Models We examined the association between episiotomy type among women living with FGM Type 3 and each of the following outcomes: anal sphincter tears, intrapartum bleeding requiring intervention, and postpartum haemorrhage. Each type of episiotomy was compared with no episiotomy. Theoretically relevant model covariates included parity, pelvic introitus width, age, socioeconomic status, and education level. Initially we planned to enter the covariates in blocks (obstetric factors, sociodemographic factors, and then the combination for fully adjusted models); however, the adjustment variables had minimal impact, so we present only the unadjusted and fully adjusted models. Odds ratios (OR) with 95% confidence intervals were assessed for each of the three maternal outcomes. As the data were clustered in the 28 centres, robust standard errors were used to adjust for this clustering [19].
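As a sketch of the modelling strategy just described (a binary maternal outcome, each episiotomy type compared with no episiotomy, covariate adjustment, and centre-level clustering), the following shows one plausible implementation. The data are simulated and all variable names are invented for the example; the paper does not publish its code, so this should be read as an illustration of the approach, not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600  # illustrative; the study's analytic sample was 6,187

# Simulated analysis dataset: one row per woman with FGM Type 3
df = pd.DataFrame({
    "anal_sphincter_tear": rng.binomial(1, 0.05, n),
    "episiotomy": rng.choice(
        ["none", "anterior", "posterior_lateral", "both"], n),
    "parity": rng.poisson(2, n),
    "introitus_fingers": rng.integers(1, 5, n),  # fingerbreadths, 1-4
    "age": rng.integers(18, 45, n),
    "centre": rng.integers(0, 28, n),            # 28 obstetric centres
})

# Logistic regression with "no episiotomy" as the reference category and
# robust standard errors clustered at the centre level.
fit = smf.logit(
    "anal_sphincter_tear ~ C(episiotomy, Treatment('none'))"
    " + parity + introitus_fingers + age",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["centre"]}, disp=False)

print(np.exp(fit.params))      # adjusted odds ratios, as in Table 4
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```

The study describes only "robust standard errors adjusted for clustering," so a GEE with an exchangeable working correlation would be an equally defensible reading; the cluster-robust fit above is one common way to realize that description.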
Table 2 shows the characteristics of the sample population overall, and by type of FGM. A total of 26,640 women were included: 6,744 who had no FGM, 6,211 with Type 3 FGM, and 13,685 with any other type of FGM (Types 1, 2 and 4; Table 2). The majority had undergone FGM (74.7%) and were multiparous (95.8%). The mean age was 26, and the majority lived in an urban setting (Table 2). The majority of births were spontaneous vaginal deliveries (90.0%), with assisted vaginal delivery (vacuum or forceps) accounting for 2.7% of births and assisted breech deliveries for 1.1%. Compared to women who had either no FGM or FGM Types 1, 2 and 4, women with FGM Type 3 were significantly older, more likely to live in urban areas, to have more education and medium SES, and to be living in Sudan. These women were also significantly more likely to have an anterior/posterior episiotomy, and significantly less likely to have anal sphincter tears, intrapartum blood loss requiring intervention, or postpartum haemorrhage.

We then analysed the characteristics of our population by type of episiotomy performed (Table 3). Women with the most extensive episiotomy type (anterior and midline posterior) were found to be significantly more likely to be of urban residence (72.4% vs 67.6%) and significantly less likely to be of low socioeconomic status (9% vs 37.9%) than women without episiotomies. Women with FGM Type 3 had significantly narrower introituses when compared with women without FGM or with other types of FGM (mean of 2.37 fingers compared with 2.56 and 2.45, p < 0.001). Width of pelvic introitus was associated with episiotomy performed among women with FGM Type 3; women with narrower introituses were significantly more likely to have an episiotomy. The analysis sample was limited to the 6,187 women who had FGM Type 3 with data on episiotomy status.

Results We first investigated whether the type of episiotomy performed reduced the risk of anal sphincter tear (3rd or 4th degree obstetric laceration) (Table 4). As there is minimal difference between the unadjusted and adjusted models, we present the adjusted results. Among women with FGM Type 3, anterior, posterior lateral, and anterior with posterior lateral episiotomy all significantly decreased the odds of an anal sphincter tear. Compared with no episiotomy, anterior episiotomy had a stronger protective effect against anal sphincter tears (AOR = 0.15; 95% CI 0.05-0.45) than posterior lateral episiotomy (AOR = 0.66; 95% CI 0.55-0.80) or both anterior and posterior lateral episiotomies performed concurrently (AOR = 0.21; 95% CI 0.11-0.37).

We then examined the association between type of episiotomy and risk of intrapartum bleeding requiring intervention (Table 6). Among women with Type 3 FGM, no significant association was seen between anterior or posterior lateral episiotomy and the odds of intrapartum bleeding. A statistically significant protective association was observed for the combination of the two types of episiotomy, anterior and posterior lateral performed concurrently (AOR = 0.03; 95% CI 0.01-0.08).

Main findings Our study suggests that among women with Type 3 FGM, anterior episiotomy in labour is protective against anal sphincter tears and postpartum haemorrhage, and does not have a significant effect on intrapartum bleeding requiring an intervention. A protective effect against anal sphincter tears and postpartum haemorrhage was seen with all types of episiotomy among women with FGM Type 3.
Only concurrent anterior and posterior lateral episiotomy was protective against intrapartum bleeding requiring an intervention.

Table 4. Unadjusted and adjusted odds ratios of anal sphincter tear among women with FGM Type 3 by episiotomy type (adjusted for clustering at the centre level, n = 28 centres; pelvic introitus assessed by fingerbreadths).

Strengths and limitations Our study should be interpreted with the following limitations in mind. A key limitation is that the indication for episiotomy was not recorded; episiotomy may have been performed for a specific medical indication such as obstructed labour or foetal distress, or done routinely based on provider preference. The timing of episiotomy is also not known, and this may have an impact on study outcomes. For example, the protective effect of anterior and posterior lateral episiotomy observed may be due to differences in the timing of when providers performed episiotomies. If anterior episiotomies were differentially performed earlier in labour than other types, there would be a greater length of time for bleeding to occur intrapartum. Another limitation of our study is that it only includes facility-based deliveries; women who delivered in the community are omitted. This biases our results towards the null, as this population may have worse outcomes. Additionally, women presenting for scheduled caesarean delivery were not eligible for study participation. Information regarding the indication for the caesarean would be of benefit in interpreting these findings. While the full sample includes over 26,000 women across six African countries, it is important to note that the majority (82.7%) of women in our analytic sample with Type 3 FGM (n = 6,211) came from Sudan. This affects the generalizability of our results. While we adjusted our models to account for data clustering by centre or facility, obstetric practices and medical training are thought to vary widely by country and facility, and clustering does not fully account for this unobserved heterogeneity. Currently, FGM is not included in the curriculum of most medical and midwifery training, and recommendations regarding clinical management are not widely known [6]. Provider education regarding the appropriate management and clinical care of women living with FGM is essential to optimizing care. Strengths of our study include the relatively large analytic sample size of women living with Type 3 FGM. To our knowledge, no other study has provided evidence on the distribution of type of episiotomy by FGM classification or how this may impact maternal outcomes.

Interpretation Our study is consistent with previous evidence demonstrating that women with FGM have increased rates of episiotomy [3]. To date, episiotomy practice among women with FGM has been guided by expert opinion and provider preference. We provide new information on the association between type of episiotomy and key maternal outcomes (anal sphincter tear, intrapartum and postpartum bleeding) among women with Type 3 FGM. Our analysis demonstrates that episiotomy may reduce the odds of three poor obstetric outcomes; however, the risks of episiotomy also need to be considered. Episiotomy is painful, and may result in infection, perineal trauma or healing complications [12]. Performing the smallest episiotomy necessary to achieve maternal or foetal gain is a reasonable clinical approach; however, our data show that women living with FGM are more likely to have the most extensive type of episiotomy (anterior with concurrent posterior lateral episiotomy). Working with providers to train them in the specific and evidence-based care of women living with FGM is essential to mitigating the consequences of FGM [6,8].
To achieve this, education on FGM needs to be incorporated into the curriculum of nursing, midwifery and medical programs. Additionally, clinical research is needed to investigate the impact of interventions in improving health outcomes for women, both during and outside of pregnancy [8].

Conclusion The objective of our study was to investigate the association between type of episiotomy and obstetric outcomes, including anal sphincter tears, intrapartum blood loss requiring an intervention, and postpartum haemorrhage, among women living with FGM Type 3. We found that all types of episiotomies are protective against these outcomes. Given the risks associated with episiotomy, however, the smallest episiotomy needed should be used. Currently, women living with FGM Type 3 are significantly more likely to have the most extensive type of episiotomy, with both an anterior and a posterior incision. There are no strong data to support this clinical practice. More data are needed to guide the medical care of women living with FGM. Evidence to inform when anterior episiotomy or deinfibulation should be performed (antenatally or during labour) is urgently needed. Research is needed to identify when episiotomy should be performed, and for which women living with FGM. Anterior episiotomy (defibulation) performed in pregnancy, in the first stage of labour, and in the second stage of labour should be prospectively compared for blood loss, rate of episiotomy, perineal tears, demand for reinfibulation, and women's acceptance of and satisfaction with deinfibulation. Provider training to improve the obstetrical care of women with FGM is also needed.
Transmission of monkeypox virus through the saliva and air that need careful management

The ongoing outbreak of monkeypox is believed to be predominantly transmitted through direct contact with lesions or infected bodily fluids, with possible involvement of fomites and large respiratory droplets [1–5]. We read with great interest three letters to the editor, including "Salivary transmission of monkeypox virus – a potential possibility that needs careful management?" and "Can monkeypox virus be transmitted through the air?", which asked whether monkeypox virus (MPXV) can be transmitted through oral saliva and air, respectively [6–8]. In fact, the detection of MPXV DNA in saliva was reported a little earlier in previous studies [9–11], before the time (6 September 2022) of the letter submitted by Ganapathy et al. [6]. Furthermore, evidence on the transmission of MPXV through saliva and air has newly been reported in the literature [12,13]. The current correspondence presents a compact review of the PubMed and Web of Science online databases for studies reporting evidence on the detection of MPXV in saliva and air, so as to improve knowledge on the transmission and management of the virus.

As for salivary transmission of MPXV, 32 cases have been studied in four existing studies (Table 1). Of these, MPXV DNA was positively detected in the saliva samples of 30 cases. The high concordance (93.75%) of MPXV detection between saliva and skin lesions indicates that saliva-based tests may be a viable method for MPXV DNA detection. Ganapathy et al. [6] stated that the medical literature demonstrating monkeypox in saliva is unclear. MPXV in saliva has now been demonstrated, and salivary transmission may be substantial. Compared with MPXV detection sampling from anatomical sites, salivary diagnostics has several advantages: it is noninvasive; collection is easy, and even self-collection is possible; there is less exposure of healthcare workers and less risk of cross-infection; and no specific instruments are needed. More importantly, the positive prevalence (93.75%) and viral load (median Ct, 27; lower Ct values indicate higher viral loads) of MPXV DNA tests in saliva (Table 1) were reported to be higher than those in other bodily fluids, such as semen, blood, and urine [14]. In addition to diagnosis itself, research on the molecular features of saliva, including cytokines, chemokines, immunoglobulins, proteomics, and metabolomics at different times in the disease course of monkeypox, will help in understanding the molecular changes of the transmissible viral form.

As for airborne transmission of MPXV, before the letters submitted by Wang et al. [7] and Saied [8], there was a theoretical risk of airborne transmission but a lack of direct evidence of positive air samples or viable MPXV detected in environmental air where patients with monkeypox had been managed. Notably, MPXV in air samples has now been demonstrated, and substantial evidence on airborne transmission is contributed by Gould et al. [13]. The authors report, for the first time, the detection of MPXV DNA and viable virus in air samples collected at distances greater than 1.5 m from the patient's bed and at a height of about 2 m. This supports the theory that MPXV can be present in aerosols, suspended skin particles, or dust containing virus, and not only in large respiratory droplets that fall to the ground within 1–1.5 m of an infected individual. MPXV DNA was also detected on the personal protective equipment (PPE) of healthcare workers and in areas used for the removal of PPE [13].
This evidence supports the use of PPE, including respiratory protection equipment, regular surface cleaning, appropriate doffing, and the disposal of materials that are likely to be contaminated. Further investigation should consider the effectiveness of cleaning protocols and doffing procedures in decontaminating environments, and further explore the risk of respiratory transmission. Understanding the mode of transmission could allow for the development of proper interventional approaches to reduce the intensity of the current outbreak [15].

Taken together, the current evidence supports MPXV transmission through the saliva and air, which needs careful management, especially in the current 2022 outbreak of the disease. Saliva is a promising sampling material for the diagnosis of monkeypox infection and may emerge as a viable sample for public health MPXV testing goals. Notably, it is of public health importance that airborne transmission of MPXV has been demonstrated in hospital settings, which should inform policy to protect healthcare workers and reduce the risk of nosocomial transmission of monkeypox.

Ethical approval Not applicable.

Table 1 (extract). Gould et al. [13], UK, 20 cases, air samples: air sampling using the MD8 Airport sampler (with gelatine filters, flow rate 50 l/min for 10 min; Sartorius, Goettingen, Germany) and a monkeypox-specific real-time PCR assay; reported values 5, 25, and 35.5. Ct, quantitative PCR crossing threshold value of monkeypox DNA detected.
The importance of organizational commitment in rural nurses' intent to leave

Abstract Aims To examine determinants of intention to leave a nursing position in rural and remote areas within the next year, for Registered Nurses or Nurse Practitioners (RNs/NPs) and Licensed Practical Nurses (LPNs). Design A pan-Canadian cross-sectional survey. Methods The Nursing Practice in Rural and Remote Canada II survey (2014–2015) used stratified, systematic sampling and obtained two samples of questionnaire responses on intent to leave, from 1,932 RNs/NPs and 1,133 LPNs. Separate logistic regression analyses were conducted for RNs/NPs and LPNs. Results For RNs/NPs, 19.8% of the variance on intent to leave was explained by 11 variables; for LPNs, 16.9% of the variance was explained by seven variables. Organizational commitment was the only variable associated with intent to leave for both RNs/NPs and LPNs. Conclusions Enhancement of organizational commitment is important in reducing intent to leave and turnover. Since most variables associated with intent to leave differ between RNs/NPs and LPNs, the distinction of nurse type is critical for the development of rural-specific turnover reduction strategies. Comparison of determinants of intent to leave in the current RNs/NPs analysis with the first pan-Canadian study of rural and remote nurses (2001–2002) showed similarity of issues for RNs/NPs over time, suggesting that some issues affecting turnover remain unresolved. Impact The geographic maldistribution of nurses requires focused attention on nurses' intent to leave. This research shows that healthcare organizations would do well to develop policies targeting the specific variables associated with intent to leave for each type of nurse in the rural and remote context. Practical strategies could include specific continuing education initiatives, tailored mentoring programs, and the creation of career pathways for nurses in rural and remote settings. They would also include place-based actions, planned together with communities and nurses themselves, designed to enhance nurses' integration with their communities.

| INTRODUCTION

In recent decades, global concern about the impact of nursing shortages on healthcare delivery has sparked considerable research on nursing turnover and retention. Research on turnover has focused on nurses' intentions to leave a nursing position and/or the nursing profession (Hayes et al., 2012), while research on retention has focused on nurses' intent to stay in their position or organization (Cowden & Cummings, 2015). Most research on turnover and retention has been conducted in urban, acute care centres (Halter et al., 2017). There remains a distinct gap in research in rural and remote areas, where a nurse's practice in the workplace and as a community member are often interconnected and where nursing turnover can pose a threat to the viability of a community's health services. The purpose of the present study was to examine determinants of intent to leave a nursing position in the next year, for regulated nurses who work in all regions of rural and remote Canada.

| Background

Reducing turnover and keeping nurses in their current positions have been examined using several constructs: intent to leave (ITL), turnover, intent to stay (ITS), and retention. Turnover has usually been measured in retrospect, or through the construct of ITL, often measured by a single-item question on a survey (Hayes et al., 2012).
Retention has been more commonly measured through the construct of ITS (Cowden & Cummings, 2015). Some researchers have used the language of ITS to frame a study, while using a measure of ITL for the analysis (e.g., Dotson, Dave, Cazier, & Spaulding, 2014). While there is considerable overlap in the constructs of intent to leave and intent to stay, they are distinct constructs that are influenced by different factors (Lee, Ju, & Lim, 2020; Nancarrow, Bradbury, Pit, & Ariss, 2014). For example, ITL may be voluntary (e.g., to take up a new opportunity in a different community) or involuntary (e.g., due to closure of the healthcare facility). ITS, while voluntary, may be with differing amounts of enthusiasm or reluctance (Hom, Mitchell, Lee, & Griffeth, 2012). ITL and ITS are not interchangeable, and the differences in construct mean that they cannot be operationalized as "two sides of the same coin" (Nancarrow et al., 2014, p. 292). It is important to note that the ITL literature has discriminated between nurses leaving the profession and nurses leaving their particular position or job (Moloney, Boxall, Parsons, & Cheung, 2018). Explanations of nurses leaving the profession tend to be at the individual level (e.g., health-related, burnout), whereas leaving a position has been explained by work organization factors such as the work environment and flexibility of scheduling (Leineweber et al., 2016). Research acknowledging the complexity of turnover intention among the rural health workforce is scarce (Cosgrave, Malatzky, & Gillespie, 2019).

| Nurse credentials and practice settings

Research on turnover (Halter et al., 2017) and ITL (Chan, Tam, Lung, Wong, & Chau, 2013) has mainly been conducted in urban, acute care settings. Some studies of ITL have focused on a single credential, predominantly Registered Nurses (RNs; Dall'Ora, Griffiths, Ball, Simon, & Aiken, 2015; Moloney et al., 2018). Others have used databases that include more than one credential, such as nurses and midwives (Perry et al., 2017). Many studies use the generic term "nurse" (Lee et al., 2020) and few have separately analysed data according to type of nurse (Perry et al., 2017). Only a few studies have separately identified regulated practical nurses in their analyses. Licensed practical nurses (LPNs) are regulated nurses, also known as registered practical nurses, licensed vocational nurses, or enrolled nurses. Their credential is diploma-based and they provide basic nursing care within their scope of practice, sometimes under the supervision of an RN. In a study that compared ITL between RNs and LPNs, Havaei, MacPhee, and Dahinten (2016) showed that RNs were more likely to intend to leave their current position in the next year than LPNs. For both RNs and LPNs, the most common reason for ITL was workload. RNs were more likely than LPNs to intend to leave for career advancement, whereas for LPNs, low salary was more strongly associated with ITL than for RNs. In companion studies of intention to stay (ITS) in their current positions for RNs and Registered Practical Nurses (LPNs) in northern Ontario, Nowrouzi et al. (2015, 2016) showed that RNs and LPNs shared some of the same reasons to intend to stay in their current positions (e.g., rural lifestyle), but for LPNs, ITS was associated with staff mix and decision-making, while for RNs, ITS was associated with staff development opportunities and overtime hours.
The Nurse Practitioner (NP) credential builds on RN competencies and represents an advanced level of legislated nursing practice. Historically, RNs have taken on an advanced practice role in sparsely populated remote areas without this credential. RNs with the additional NP education and credential are still relatively few in number in rural and remote areas of Canada.

| The rural context

What counts as rural varies across countries, and often within countries, according to the reason for the rural/urban delineation, which is usually technical or social (du Plessis, Beshiri, Bollman, & Clemenson, 2001). Many studies that use nursing databases for sampling do not specify rural in the analysis (e.g., Moloney et al., 2018). Despite the continuing call to address the maldistribution of the rural and remote nursing workforce and to understand rural and remote nursing turnover (World Health Organization, 2020), there has been little specific attention to the rural nursing workforce. Too often, it is assumed that rural and urban nurses experience the same organizational and workplace challenges. The resources available to rural nurses, depending on the rurality of the community, are not taken into account (Smith, Plover, McChesney, & Lake, 2019).

| Conceptual framework

The 2001-2002 pan-Canadian study, The Nature of Nursing Practice in Rural and Remote Canada (RRNI; MacLeod, Kulig, Stewart, Pitblado, & Knock, 2004), used the Statistics Canada definition of rural and small town, with a population cut-off of 10,000 (du Plessis et al., 2001). This definition encompasses both the rural and more remote areas of Canada. The RRNI study surveyed 3,933 RNs and NPs working in rural and remote areas (Stewart et al., 2005). A logistic regression analysis was conducted on the responses of 3,051 nurses (RNs & NPs) who reported that they intended to leave their current nursing position in the next year. Characteristics of the individual nurse, workplace, and work community were associated with intent to leave (Stewart et al., 2011).

The conceptual framework (Figure 1) represents our view, gained from the literature (e.g., Cosgrave et al., 2019; Stewart et al., 2011) and from the experience of the research team and the advisory group, of the interrelated concepts and relationships relevant to turnover and retention in rural and remote nursing. This broad framework provides guidance for the present analyses (Polit & Beck, 2021).

Figure 1. Decisions to leave or stay in a nursing position in rural and remote settings. Adapted from Stewart et al.

The present analyses are based on 2014-2015 survey data from the second national study, Nursing Practice in Rural and Remote Canada (RRNII; MacLeod et al., 2017). In contrast to the urban-centric nature of most research investigating intent to leave, which has exclusively addressed factors related to the individual nurse and the workplace, both national studies (RRNI & RRNII) highlight the impact of community on the work and worklife of rural and remote nurses, which is not examined in literature with an urban lens. There has been a "growing concern in many countries" (World Health Organization, 2020, p. 31) about retaining nurses in rural and remote areas, yet research on reducing turnover remains limited. The present study analysed RNs (including NPs) separately from LPNs.
Examination of the determinants of intending to leave a nursing position (as a proxy for turnover) was conducted in relation to characteristics of the individual nurse, the workplace, the work community, and issues related to practice.

| Aims

The primary aim was to examine: (a) determinants of intent to leave (ITL) a position as an RN or NP within the next 12 months; (b) determinants of ITL a position as an LPN within the next 12 months; and (c) career plans in the next 12 months of RNs/NPs and LPNs who intend to leave a nursing position.

| Design

The data in these analyses were accessed from the RRNII cross-sectional survey (MacLeod et al., 2017), a replication and extension of the RRNI cross-sectional survey (Stewart et al., 2005). Details on the RRNII questionnaire development, sampling method, and survey implementation are available in MacLeod et al. (2017). The study used confirmatory multivariable analyses with ITL as the outcome.

| Sampling frame

In RRNII, the sampling frame was developed to be representative of all Canadian rural and remote regulated nurses provincially, territorially, and nationally (confidence level of 95% and margin of error of 0.05). Multilevel stratified, systematic sampling was performed with stratification first by province, then by type of regulated nurse, and finally by geographic area in each province (MacLeod et al., 2017). The regulated nurses who practiced in all provinces and territories were RNs, NPs, and LPNs. Rural communities were defined as those with a core population of less than 10,000 (du Plessis et al., 2001). Remote communities were not separately identified but included all communities in the northern territories (Nunavut, Northwest Territories & Yukon Territory). All regulated nurses working in the northern territories and all rural and remote NPs in Canada were included in the sampling frame. Additionally, all rural regulated nurses in each of the 10 Canadian provinces were systematically sampled based on work postal codes, or home postal codes when work postal codes were unavailable, as recorded on the registration forms of their professional nursing associations.

| Study sample

In RRNII, 9,622 nurses met the eligibility criteria of practicing in a rural or remote community in Canada at the time of the survey, or being on leave for 6 months or less. A total of 3,822 regulated nurses returned completed surveys, for a response rate of 40% (3,822/9,622).

| Outcome measure

Intent to leave (ITL) within the next 12 months was operationalized by the dichotomous (yes/no) measure, assessed with the question "Do you plan to leave your present nursing position within the next 12 months?" Nurses who responded "yes" were also asked to indicate their career plans in the next 12 months by marking all that applied from 15 categories. The same ITL question was used in RRNI and RRNII.

| Potential explanatory variables

Based on the conceptual framework (Figure 1), the relevant literature (e.g., Halter et al., 2017), RRNI, and insights from the advisory group, the research team identified potential explanatory variables from the RRNII questionnaire hypothesized to be independently associated with ITL (Table 1). On conceptual grounds (Kleinbaum, 1994), rather than by the statistical criterion, gender was included for both nurse types. Potential explanatory variables were selected as indicators of each of the four major content domains of potential determinants. Individual variables included two subcategories representing (a) sociodemographic characteristics of the nurse (e.g., age) and health (e.g., perceived stress) and (b) professional characteristics (e.g., employment status).

TABLE 1: Potential explanatory variables, scales, and sources. (In the original, columns indicated whether each variable entered the RN/NP and/or LPN analyses; that column structure was lost in extraction and is omitted here.)

Individual: sociodemographics and health
- Gender: What is your gender? (Female/Male). Source: RRNI (Stewart et al., 2005).
- Age: What is your year of birth? (coded to under 30, 30-39, 40-49, 50-59, and 60-69). Source: RRNI (Stewart et al., 2005).
- Marital status: What is your current marital status? (Married/Living with partner; Single/Divorced/Separated/Widowed). Source: RRNI (Stewart et al., 2005).
- Dependent children: Do one or more dependent children live with you? (Yes/No). Source: RRNI (Stewart et al., 2005).
- Perceived stress: How often you felt or thought a certain way; 5-point Likert scale from never to very often (score range 4-20); Cronbach's alpha = 0.80. Source: Perceived Stress Scale (Cohen et al., 1983).
- Burnout: "I feel burned out from my work" (1 item); 7-point Likert scale from never to always (score range 1-7). Source: Malach-Pines (2005).

Individual: professional
- Registration status: From registry (RN/NP; LPN).
- Highest attained education credential: Educational background, mark all that apply (16 credentials including "other"). Source: RRNI (Stewart et al., 2005).
- Advanced practice nursing: Recode of educational background and primary position to identify the advanced practice nursing group. Source: RRNI (Stewart et al., 2005).
- Employment status: What is your nursing employment status? (full-time/permanent, part-time/permanent, job share, casual, contract/term). Source: RRNI (Stewart et al., 2005).
- Number of rural or remote communities worked in for 3 or more months: Over the course of your nursing career, how many rural and/or remote communities have you worked in for three months or longer? (1-3, 4-6, 7-9, and 10 or more). Source: RRNI (Stewart et al., 2005).
- Duration of employment (years): How long have you worked for your primary employer? (7 categories from less than 1 year to 20 years or more). Source: RRNI (Stewart et al., 2005).
- Hours worked in last 12 months: [item text lost in extraction].
- Work engagement [label inferred from the scale]: Read each statement and decide if you ever feel this way about your work (9 items scored from never to every day; score range 0-54); Cronbach's alpha = 0.91. Source: Utrecht Work Engagement Scale-Short Form (Schaufeli et al., 2006).
- Organizational commitment: With respect to your own feelings about your primary workplace (12 items scored on a 7-point Likert scale from strongly disagree to strongly agree; score range 12-84); Cronbach's alpha = 0.75. Source: modified (reduced from 18 to 12 items) from Meyer et al. (1993).
- Interprofessional collaboration: Thinking about interprofessional collaboration in your primary nursing practice position, indicate the choice that best describes your feelings about the statement: "I am able to share and exchange ideas in a team discussion" (7-point Likert scale from not at all to a very great extent, and not applicable; score range 1-7). Source: King, Shaw, Orchard, and Miller (2010).
- JRIN subscale, Supervision, recognition, and feedback: Thinking of your primary workplace and primary work community, please indicate your level of agreement with the following (5-point Likert scale from 1 = strongly disagree to 5 = strongly agree; higher JRIN scores indicate a higher level of job resources related to each subscale) (4 items; score range 4-20); Cronbach's alpha = 0.88. Source: developed for this study (Penz et al., 2019).
- JDIN subscale, Preparedness for scope of practice: same stem as above (5-point Likert scale; higher JDIN scores indicate a higher level of job demands related to each subscale) (4 items; score range 4-20); Cronbach's alpha = 0.83. Source: developed for this study (Penz et al., 2019).
- JDIN subscale, Equipment and supplies: same as above (3 items; score range 3-15); Cronbach's alpha = 0.83. Source: developed for this study (Penz et al., 2019).
- JDIN subscale, Comfort with working conditions: same as above (4 items; score range 4-20); Cronbach's alpha = 0.64. Source: developed for this study (Penz et al., 2019).
- JDIN subscale, Safety: same as above (4 items; score range 4-20); Cronbach's alpha = 0.71. Source: developed for this study (Penz et al., 2019).

Workplace
- Number of hours per week spent travelling for work: On average, how many hours per week do you spend travelling for work-related activities (e.g., travel between work sites, flying in or out of different communities to provide service, travel to see patients in or outside of your primary work community)? (8 ordinal categories from less than 1 hr to more than 30 hr). Source: modified from RRNI (Stewart et al., 2005).
- Input into work schedule: [item text lost in extraction].
- Adequate number of rest days between shifts: Is the number of rest days between shifts adequate? (Yes/No). Source: RRNI (Stewart et al., 2005).
- Support network of colleagues: Do you have a support network of colleagues who provide consultation and/or professional support? (Yes/No). Source: developed for this study.

Work community
- Population of primary work community: What is the population of your primary work community? (9 categories recoded to 3 categories: 999 or less, 1,000-9,999, and 10,000 or more). Source: modified from RRNI (Stewart et al., 2005).
- Perceived rurality of primary work community: Do you consider your primary work community to be: (rural, remote, rurban, none of the above). Source: modified from RRNI (Stewart et al., 2005).
- Distance to advanced referral centre: How far is your primary work community from the closest advanced referral centre, e.g., a major metropolitan centre with subspecialty services such as cardiac surgery, neurosurgery, paediatric surgery, and radiation oncology? (0-99 km, 100-199 km, 200-499 km, 500-999 km, 1,000 or more km). Source: RRNI (Stewart et al., 2005).
- Came to primary work community due to family or friends: I came to work in my primary work community for the following reason: family or friends (Yes/No). Source: modified from RRNI (Stewart et al., 2005).
- Came to primary work community due to work flexibility: same stem: work flexibility (Yes/No). Source: modified from RRNI (Stewart et al., 2005).
- Came to primary work community due to location of community: same stem: location of community (Yes/No). Source: modified from RRNI (Stewart et al., 2005).
- Came to primary work community due to advanced practice opportunities: same stem: advanced practice opportunities (Yes/No). Source: modified from RRNI (Stewart et al., 2005).
- Psychological sense of community: Thinking of your primary work community, please indicate your level of agreement with the following statements (9 items on a 5-point Likert scale from strongly disagree to strongly agree; score range 9-45); Cronbach's alpha = 0.92. Source: modified from Buckner (1988) for this study (Kulig et al., 2018).
- Experienced an extremely distressing incident in primary work community: Over the past two years in your primary work community, have you experienced a healthcare incident that was extremely distressing to you as a nurse? [response options lost in extraction].

| Validity and reliability

The RRNII questionnaire derived several measures from the original RRNI study (Stewart et al., 2005) that were then modified for RRNII (see Table 1). Some new measures were included from the literature, for example the psychological sense of community measure, modified from Buckner (1988) for this study (Kulig et al., 2018). Previously developed scales were also embedded in the questionnaire and used in the analyses. The Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983) was an indicator of individual health (Figure 1).

| RESULTS

The characteristics of the 3,110 respondents are presented in Table 2. Approximately one in four RNs/NPs (26.4%) and a slightly lower proportion of LPNs (22.2%) indicated an intent to leave their current nursing position within the next 12 months (Table 2). The career plans of respondents who intended to leave in the next year can be found in Table 3. Less than a quarter of nurses who intended to leave planned to retire (RN/NP 23.1%; LPN 21.5%). A larger proportion of LPNs than RNs/NPs (39.2% vs. 28.2%) planned to nurse in the same community, and a larger proportion of LPNs than RNs/NPs (15.1% vs. 9.8%) intended to go back to school. The unadjusted odds ratios, frequencies, and means of the potential variables associated with ITL are provided in the tables. For RNs/NPs, the final model showed that the odds of intending to leave a nursing position in the next year were greater for those who worked below or beyond their scope of practice, had lower satisfaction with their current nursing practice, had lower organizational commitment, had lower preparedness for their scope of practice, travelled more hours/week for work, were required to be on call, did not come to work in their primary work community due to work flexibility, and had lower satisfaction with their primary work community.
The final model for LPNs (adjusted odds ratios in Table 7) showed that seven significant variables accounted for 16.9% of the variance in ITL for the LPNs (Nagelkerke R² = 0.169). The odds of intending to leave a nursing position in the next year were greater for LPNs who had single marital status (including single, divorced, separated, and widowed), experienced burnout, had education beyond a diploma, had a duration of employment of under 6 years or of 20 years or more, had lower confidence in their work, had lower organizational commitment, and perceived their primary work community to be remote. Organizational commitment was the only Practice Issue in the conceptual framework (Figure 1) that increased the odds of leaving for both RNs/NPs and LPNs. For both nurse types, lower commitment was associated with a higher likelihood of ITL. Organizational commitment has long been identified as a predictor of ITL (Halter et al., 2017; Hom, Shaw, Lee, & Hausknecht, 2017). The components of affective commitment (commitment associated with a sense of belonging and emotional attachment) and normative commitment (commitment associated with a sense of duty or obligation) are particularly important in rural settings where personal and professional lives are intertwined and the healthcare facility is often a major part of the community. In small rural and remote communities, where work teams can be very small with frequent gaps in staffing (Wakerman et al., 2019), team dynamics, everyday practice experiences, and locally available workplace and community supports may strongly influence RNs/NPs' and LPNs' organizational commitment. In rural and remote settings, work systems need to suit the realities of local contexts to be perceived as working well and being supportive (Cosgrave, 2020).

| DISCUSSION

Rural and remote nursing practice is acknowledged as being complex and often demanding. For RNs/NPs, the odds of ITL were greater if they were working below or beyond their registered scope of practice, if they did not feel prepared for their scope of practice, and if they had lower satisfaction with their current nursing practice. LPNs were more than twice as likely to intend to leave if they had lower confidence in their work. In rural settings, where demands can be high and flexibility in nursing practice is often needed, practice supports are critical. A key support is supportive managers, who are accessible at a distance, yet aware of local circumstances (Lea & Cruickshank, 2017). Workplace variables were significant only for RNs/NPs. Consistent with our RRNI analysis (Stewart et al., 2011), the odds of ITL were greater for RNs/NPs who were required to be on call. In the present analysis, the adjusted odds ratio for the requirement to be on call was 1.6 compared with an adjusted odds ratio of 1. Travelling for work in many places in Canada's large geography can also be challenging for nurses. RNs/NPs were more likely to intend to leave if they travelled more hours each week for work than RNs/NPs with less work-related travel. In addition to the stress of travelling in rural and remote areas, such travel may add to the length of the workday and contribute to the increasing number of work hours experienced by rural and remote nurses (Francis & Mills, 2011). The interconnection of nurses, their work, and communities is characteristic of rural and remote practice (Malatzky, Cosgrave, & Gillespie, 2020; Ross, 2017). Different community variables were significantly related to ITL for RNs/NPs and LPNs.
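The paper reports adjusted odds ratios with confidence intervals and a Nagelkerke pseudo-R² of 0.169 for the LPN model. As a minimal, hedged sketch of how these two quantities are typically obtained from a fitted logistic regression, here is Python code on simulated stand-in data; the field names are illustrative only, and this is not the RRNII dataset or the authors' actual analysis workflow:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# Simulated stand-in data; column names echo the text but are hypothetical.
df = pd.DataFrame({
    "itl":        rng.binomial(1, 0.25, n),   # intent to leave (yes/no)
    "burnout":    rng.uniform(1, 7, n),       # 1-item burnout score
    "commitment": rng.uniform(12, 84, n),     # organizational commitment
    "single":     rng.binomial(1, 0.3, n),    # single marital status
})

X = sm.add_constant(df[["burnout", "commitment", "single"]])
fit = sm.Logit(df["itl"], X).fit(disp=0)

# Adjusted odds ratios with 95% CIs: exponentiated coefficients and bounds.
odds_ratios = np.exp(fit.params)
or_ci = np.exp(fit.conf_int())

# Nagelkerke pseudo-R^2: the Cox & Snell R^2 rescaled to a 0-1 range.
cox_snell = 1.0 - np.exp((2.0 / n) * (fit.llnull - fit.llf))
nagelkerke = cox_snell / (1.0 - np.exp((2.0 / n) * fit.llnull))
print(odds_ratios, or_ci, nagelkerke)
```

The Nagelkerke statistic is commonly reported for logistic models precisely because the raw Cox & Snell value cannot reach 1; the rescaling makes the "16.9% of the variance" phrasing in the text interpretable.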
RNs/NPs who were less satisfied with their primary work community had higher odds of ITL, as did nurses who did not choose to come to work in this community for the flexibility of the work. This latter finding reflects that rural and remote nursing practice demands flexibility, both in everyday work and in living and working in small communities (Malatzky et al., 2020; Wakerman et al., 2019). The fit of the nurse with the realities of practice and the community is important for retention in rural and remote communities (Malatzky et al., 2020). This perception of fit, or a lack of it, may also apply to LPNs, who were more likely to intend to leave if they perceived their primary work community to be remote. This perception of place is important in identifying various strategies to support all types of nurses in staying in rural and remote communities.

| Strengths and limitations

An important strength of this study is the representative sample of regulated nurses working in rural and remote areas, in all provinces and territories of Canada. The sample of nurse practitioners was too small for separate analysis, but combining RNs with NPs provided the opportunity to make comparisons over a 13-year time frame between two similar national studies. The cross-sectional nature of the study and the use of the variable ITL as a proxy for actual turnover are limitations, but there is evidence that this measure of behavioural intention is a strong indicator of actual turnover behaviour (Hom et al., 2017). Although we used the language of potential explanatory variables, the survey design and the correlational nature of the associations rule out determination of causality. Common method bias could have affected the results because all variables, including the outcome, were measured in the same survey questionnaire. While procedural methods were used to counteract response bias (e.g., anonymity), no statistical controls were included (Tehseen, Ramayah, & Sajilan, 2017).

| CONCLUSION

The key finding in this analysis was that organizational commitment was the only consistent predictor of ITL across nurse types (RN/NP and LPN). The findings for the RN/NP group in both the RRNI and RRNII studies suggest that some issues related to turnover remain unresolved over time and merit further research and policy development. In rural and remote communities, where the implementation of organizational supports needs to be well integrated with local contextual and community realities, organizational commitment could be an umbrella concept for managers and other policy-makers to develop rural-specific turnover reduction strategies, through collaboration among nurses, nurse leaders, their employers, and communities. The results show that the relevant individual, workplace, and work community determinants (Figure 1) were different across nurse types. It would be important to tailor strategies to the type of nurse and the realities of nursing practice in small communities. As there is great variation in the geographic, population, and organizational contexts of rural and remote nursing practice, it can be anticipated that effective strategies may differ across geographies and health services. The conceptual framework for this study, which was useful for examining nurses' intent to leave across four domains of determinants, could be used to frame further research. In particular, more research is needed to examine the effectiveness of strategies to increase organizational commitment in rural and remote practice settings and thereby reduce ITL and nursing turnover.
ACKNOWLEDGEMENTS

This article stems from the study "Nursing Practice in Rural and Remote Canada II," led by Martha MacLeod, Norma Stewart, and Judith Kulig (http://ruralnursing.unbc.ca). We thank the Advisory Team led by Penny Anguish of Northern Health, the nurses who responded to the survey, and Valerie Elliot, Leana Garraway, and Ali Thomlinson.

CONFLICT OF INTEREST

No conflict of interest has been declared by the authors.

AUTHOR CONTRIBUTIONS

All authors have agreed on the final version and meet at least one of the following criteria (recommended by the ICMJE: http://www.icmje.org/recommendations/): (1) substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content.

PEER REVIEW

The peer review history for this article is available at https://publons.com/publon/10.1111/jan.14536.
Half-linear dynamic equations and investigating weighted Hardy and Copson inequalities

In this paper, we employ some algebraic equations due to Hardy and Littlewood to establish some conditions on weights in dynamic inequalities of Hardy and Copson type. For illustration, we derive some dynamic inequalities of Wirtinger, Copson and Hardy types and formulate the classical integral and discrete inequalities with sharp constants as particular cases. The results improve some results obtained in the literature.

Introduction

Hardy in [12] proved the discrete inequality

$$\sum_{s=1}^{\infty}\left(\frac{1}{s}\sum_{k=1}^{s}a(k)\right)^{q}\le\left(\frac{q}{q-1}\right)^{q}\sum_{s=1}^{\infty}a^{q}(s),\qquad q>1,\tag{1}$$

where a(s) is a positive sequence for s ≥ 1. In [13] Hardy proved the integral form

$$\int_{0}^{\infty}\left(\frac{1}{y}\int_{0}^{y}f(t)\,dt\right)^{q}dy\le\left(\frac{q}{q-1}\right)^{q}\int_{0}^{\infty}f^{q}(y)\,dy,\tag{2}$$

where f is a positive function. In [10] Copson considered a new type of inequalities of the form

$$\int_{0}^{\infty}\left(\int_{y}^{\infty}\frac{f(t)}{t}\,dt\right)^{q}dy\le q^{q}\int_{0}^{\infty}f^{q}(y)\,dy,\tag{3}$$

where f is a positive function and q > 1. In [11] Copson (see also [14, Theorem 344]) proved the discrete version of (3), which is given by

$$\sum_{s=0}^{\infty}\left(\sum_{k=s}^{\infty}\frac{a(k)}{k+1}\right)^{q}\le q^{q}\sum_{s=0}^{\infty}a^{q}(s),\tag{4}$$

where q > 1 and a(s) > 0 for s ≥ 0. In [4] Beesack proved an inequality of the form

$$\int_{a}^{b}\omega(y)\,z^{q}(y)\,dy\le\int_{a}^{b}\gamma(y)\,\bigl(z'(y)\bigr)^{q}\,dy,\tag{5}$$

where γ and ω satisfy the differential equation of Euler-Lagrange type

$$\Bigl(\gamma(y)\bigl(z'(y)\bigr)^{q-1}\Bigr)'+\omega(y)\,z^{q-1}(y)=0.\tag{6}$$

The method of the proofs in [4] depends on the solution of (6) when the first derivative z' > 0 on the interval (a, b). The approach of Beesack was extended to generalized Hardy-type inequalities; see, e.g., Beesack [5] and Shum [23]. Some of the conditions on z, γ and ω were removed by Tomaselli [26]. In particular, Tomaselli followed up the papers by Talenti [24] and [25] and proved some inequalities with some special weighted functions. The discrete analogues of the continuous results have been considered by several authors; we refer to the articles by Chen [8,9] and Liao [15]. It is worth mentioning here that some parts of the proofs of Liao's results are based on the technique used in [7] and [14], which relies on the application of the variational principle; the variational principle is not easy to apply on time scales, and we therefore did not use it in our proofs of the characterizations of the weights in the inequalities considered in this paper. There is thus an urgent need for a new technique that helps in studying such problems on time scales, which is our main aim in this paper. The dynamic equations and inequalities on time scales were introduced by Hilger in 1988 and have been considered by many authors; we refer to Refs. [1-3, 16-20, 27]. One of the applications of Hardy-type inequalities in dynamic equations was demonstrated in [19]. In particular, in [19] the author established a time-scale version of the Hardy inequality, which unifies and extends well-known Hardy inequalities in the continuous and in the discrete setting, and presented an application in the oscillation theory of half-linear dynamic equations to obtain sharp oscillation results. Recently, in [21] the authors established some conditions on the weights under which dynamic inequalities of Hardy's type hold. More precisely, it was proved there that a weighted dynamic Hardy-type inequality, in which p* denotes the conjugate of p > 1 and w is a positive function, holds if and only if there is a number η > 0 such that an associated half-linear dynamic equation has a solution u(s) satisfying a certain positivity condition; the proof, however, requires an additional restrictive assumption, referred to below as condition (7). In this paper, we establish some relations between the weights in generalized inequalities of Hardy and Copson type by using the solutions of dynamic equations of half-linear type. The technique in this paper allows us to cover the inequalities with tails of Copson's type with weights and improves the above results, since our results do not need the restrictive condition (7).
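As a quick numerical sanity check of the discrete Hardy inequality (1), the following supplementary Python sketch evaluates both sides for a sample positive sequence. Truncating both sums to finitely many terms preserves the inequality (the finite-section form of Hardy's inequality is classical), so the check passes for any positive sequence:

```python
import numpy as np

def hardy_sides(a, q):
    """Left- and right-hand sides of the discrete Hardy inequality (1),
    truncated to the length of the sequence a."""
    s = np.arange(1, len(a) + 1)
    means = np.cumsum(a) / s                 # (1/s) * sum_{k<=s} a(k)
    lhs = np.sum(means ** q)
    rhs = (q / (q - 1)) ** q * np.sum(a ** q)
    return lhs, rhs

q = 2.0
a = 1.0 / np.arange(1, 10_001)               # a positive, q-summable sequence
lhs, rhs = hardy_sides(a, q)
print(lhs <= rhs, lhs / rhs)                 # True; the constant (q/(q-1))^q
                                             # is sharp but never attained
```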
In Sect. 2, we are concerned with the presentation of some basic definitions and preliminaries regarding the time-scale calculus. A dynamic Hardy-type inequality with weights will be proved in Sect. 3, where the method reduces the proof to the solvability of a half-linear dynamic equation whose solution satisfies u > 0 and u^Δ > 0. Next, we prove new conditions on the weights in a dynamic Copson-type inequality with tail, and we prove that the conditions on the weights reduce to the solvability of a half-linear dynamic equation whose solution satisfies u > 0 and u^Δ < 0. To the best of the authors' knowledge, the results in this case are essentially new. For illustration, we derive some dynamic inequalities as special cases and from them we formulate some classical and discrete inequalities.

Preliminaries

In this section, we present some basic definitions that will be used in the sequel; for more details see [6]. The delta derivatives of the product Φϒ and the quotient Φ/ϒ of two delta-differentiable functions Φ and ϒ are given by

$$(\Phi\Upsilon)^{\Delta}=\Phi^{\Delta}\Upsilon+\Phi^{\sigma}\Upsilon^{\Delta},\qquad \left(\frac{\Phi}{\Upsilon}\right)^{\Delta}=\frac{\Phi^{\Delta}\Upsilon-\Phi\Upsilon^{\Delta}}{\Upsilon\,\Upsilon^{\sigma}}.$$

The forward jump operator σ(t) on a time scale T is defined by σ(t) := inf{s ∈ T : s > t}, the graininess function μ is defined by μ(t) := σ(t) − t, and for any function Φ : T → R the notation Φ^σ(t) denotes Φ(σ(t)). The Cauchy (delta) integral is defined via antiderivatives: if Φ^Δ = φ, then ∫_a^b φ(t) Δt = Φ(b) − Φ(a). The chain rule for functions Φ : R → R, which is continuously differentiable, and ϒ : T → R, which is delta-differentiable, is given by

$$(\Phi\circ\Upsilon)^{\Delta}(t)=\left\{\int_{0}^{1}\Phi'\bigl(\Upsilon(t)+h\mu(t)\Upsilon^{\Delta}(t)\bigr)\,dh\right\}\Upsilon^{\Delta}(t);$$

this rule leads to the useful form

$$\bigl(\Upsilon^{\theta}(t)\bigr)^{\Delta}=\theta\left\{\int_{0}^{1}\bigl[h\Upsilon^{\sigma}(t)+(1-h)\Upsilon(t)\bigr]^{\theta-1}dh\right\}\Upsilon^{\Delta}(t).$$

Another formula pertaining to the chain rule states that

$$(\Phi\circ\Upsilon)^{\Delta}(t)=\Phi'\bigl(\Upsilon(c)\bigr)\,\Upsilon^{\Delta}(t)\quad\text{for some } c\in[t,\sigma(t)],$$

which provides us with the following useful form:

$$\bigl(\Upsilon^{\theta}(t)\bigr)^{\Delta}=\theta\,\Upsilon^{\theta-1}(c)\,\Upsilon^{\Delta}(t),\qquad c\in[t,\sigma(t)].$$

For a, b ∈ T and Φ, ϒ ∈ C_rd(T), the integration by parts formula is given by

$$\int_{a}^{b}\Phi(t)\,\Upsilon^{\Delta}(t)\,\Delta t=\bigl[\Phi(t)\Upsilon(t)\bigr]_{a}^{b}-\int_{a}^{b}\Phi^{\Delta}(t)\,\Upsilon^{\sigma}(t)\,\Delta t.$$

The Hölder inequality on time scales is given by

$$\int_{a}^{b}\bigl|\Phi(t)\Upsilon(t)\bigr|\,\Delta t\le\left[\int_{a}^{b}\bigl|\Phi(t)\bigr|^{q}\,\Delta t\right]^{1/q}\left[\int_{a}^{b}\bigl|\Upsilon(t)\bigr|^{q^{*}}\,\Delta t\right]^{1/q^{*}},$$

where q > 1 and 1/q + 1/q* = 1.

Main results

In this section, we prove the main results, beginning with inequalities of Hardy's type with heads. In what follows, all functions in the statements of the theorems are assumed to be rd-continuous and positive (see [6]). The set of all such rd-continuous functions is denoted by C_rd(T). Let γ, ω ∈ C_rd([α, β]_T, R+) and S ∈ C¹_rd([α, β]_T, R+). Suppose that γ(t) and ω(t) satisfy the half-linear dynamic equation of Euler-Lagrange type (19) for any real number θ > 1, together with its accompanying conditions. For convenience, we sometimes omit the argument t in the computations.

Lemma 3.1 Assume that α, β ∈ T and suppose that S(t) and ϒ(t) are nondecreasing. If z ∈ Ĥ₁, and γ and ω satisfy Eq. (19), then (23) holds.

Proof From the definition of v, by using the rules of the derivative on time scales (15) together with (19) and (25), and setting z = Sϒ, the product rule of the derivative yields (26) and (27), and a further application of the product rule gives (28). Substituting (26) and (27) into (28), we obtain (29). The product rule now yields (30), and substituting (30) into (29) gives the desired equation, Eq. (23). The proof is complete.

Remark 3.3 For the differential form, we get from Remark 3.2 the Wirtinger inequality, where γ and ω satisfy the differential equation (48) and u(0) = 0.

Remark 3.4 When T = N, we obtain from Theorem 3.3 the corresponding discrete inequality, where u is a positive summable sequence and the sequences γ and ω satisfy the associated difference equation, with S(n) > 0 and ΔS(n) > 0.

Lemma 3.3 Suppose that α, β ∈ T and that S(t) and ϒ(t) are nonincreasing.
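For concreteness, the two canonical specializations of these operators, standard in the time-scale literature (a supplementary note rather than part of the original text), read:

```latex
% Standard specializations of the time-scale operators (Bohner & Peterson).
\[
\mathbb{T}=\mathbb{R}:\quad \sigma(t)=t,\qquad \mu(t)=0,\qquad
\Phi^{\Delta}(t)=\Phi'(t),\qquad \int_a^b \Phi(t)\,\Delta t=\int_a^b \Phi(t)\,dt;
\]
\[
\mathbb{T}=\mathbb{Z}:\quad \sigma(t)=t+1,\qquad \mu(t)=1,\qquad
\Phi^{\Delta}(t)=\Phi(t+1)-\Phi(t),\qquad \int_a^b \Phi(t)\,\Delta t=\sum_{t=a}^{b-1}\Phi(t).
\]
```

With these substitutions, the dynamic inequalities of the main results reduce to their integral and series forms, which is how the classical Hardy, Copson and Hardy-Littlewood inequalities are recovered as particular cases.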
If z ∈ Ĥ₂, and γ and ω satisfy the dynamic equation (53), then (57) holds.

Proof From the definition of v, by using the rules (15) together with (53) and (59), and setting z = Sϒ, we obtain (60) and (61). Substituting (60) and (61) into the resulting identity gives (62), and by using the product rule we see that (63) holds. Substituting (63) into (62), and since z = Sϒ ≥ 0 (note that |S^Δ| = −S^Δ because S is nonincreasing), we obtain the desired equation, Eq. (57). The proof is complete.

Remark 3.7 As a special case of (75), we get the Wirtinger-type inequality in which γ and ω satisfy the differential equation (76) and u(β) = 0.

We give some examples for illustration. In the continuous setting, we obtain the Hardy-Littlewood inequality with the sharp constant (θ/(1 − γ))^θ (see [22]). In the discrete setting, where u is a positive summable sequence over any finite interval (1, k), and the sequences γ and ω satisfy the corresponding difference equation with S(n) > 0 and ΔS(n) < 0, we obtain the discrete Hardy-Littlewood inequality with the sharp constant (θ/(1 − γ))^θ.
Geographical variation and predictors of physical activity level in adults with congenital heart disease

Background: Physical activity is important to maintain and promote health. This is of particular interest in patients with congenital heart disease (CHD), where acquired heart disease should be prevented. The World Health Organization (WHO) recommends a minimum of 2.5 h/week of physical activity exceeding 3 metabolic equivalents (METS) to achieve positive health effects. It is unknown whether physical activity levels (PAL) in adult CHD patients differ by country of origin.

Methods: 3896 adults with CHD recruited from 15 countries over 5 continents completed self-reported instruments, including the Health Behaviour Scale (HBS-CHD), within the APPROACH-IS project. For each patient, we calculated whether the WHO recommendations were achieved or not. Associated factors were investigated using generalized linear mixed models.

Results: On average, 31% reached the WHO recommendations, but with a great variation between geographical areas (India: 10%; Norway: 53%). Predictors of a physical activity level in line with the WHO recommendations, with country of residence as random effect, were male sex (OR 1.78, 95% CI 1.52-2.08), NYHA class I (OR 3.10, 95% CI 1.71-5.62) and less complex disease (OR 1.46, 95% CI 1.16-1.83). In contrast, older age (OR 0.97, 95% CI 0.96-0.98), lower educational level (OR 0.41, 95% CI 0.26-0.64) and being unemployed (OR 0.57, 95% CI 0.42-0.77) were negatively associated with reaching the WHO recommendations.

Conclusions: A significant proportion of patients with CHD did not reach the WHO physical activity recommendations. There was a large variation in physical activity level by country of origin. Based on identified predictors, vulnerable patients may be identified and offered specific behavioral interventions.

Introduction

Due to improvements in the treatment and management of congenital heart disease (CHD), most children with congenital heart disease are expected to reach adulthood, and the population of adults with CHD continues to grow [1,2]. However, the risk of complications increases as patients grow older [3]. With increasing age, there is also the risk of acquired heart disease, especially in those with traditional risk factors for cardiovascular disease such as hypertension, diabetes, and hyperlipidaemia [4,5]. In an adult CHD population, prevention of acquired heart disease is especially important given the risks associated with reintervention [6,7] and pre-existing limitations in physical capacity [8]. A physically active lifestyle has the potential to modify cardiovascular risk factors and promote general health [9-11]. Most patients with CHD experience some degree of limitation in aerobic capacity, most pronounced in those with complex heart lesions [8]. This may pose barriers to being physically active. However, studies have suggested that adults as well as children with CHD are physically active at the same level as healthy subjects [12,13]. Nevertheless, approximately one-half to three-quarters of both patients and healthy subjects do not reach the World Health Organization (WHO) recommendation of 2.5 h per week of physical activity of 3 metabolic equivalents (METS) or more [12,14].
Several patient-related factors are potentially associated with a low physical activity level (PAL) in patients with CHD, such as reduced aerobic capacity [8,15], impaired muscle function [16-18], self-concept [19], self-efficacy [20], parental overprotection [21], and restriction recommendations by their cardiologists [22]. In general, physical activity level may also be affected by external factors such as seasonal variation [23-25], socio-economic and local environmental factors [26-29], and country of origin [30]. These findings raise the question whether the degree of physical activity level also varies between countries in patients with CHD. In the present study, physical activity level was analyzed in a large international cohort of adults with CHD, including geographical variation in physical activity level and general predictors of physical activity level in this particular population.

Patients and procedure

In total, 4028 adults with CHD from 15 countries in 5 continents participated in the cross-sectional study APPROACH-IS (Assessment of Patterns of Patient-Reported Outcomes in Adults with Congenital Heart disease - International Study) [31,32] on Patient-Reported Outcomes (PRO). Data were collected from April 2013 to March 2015. Informed consent was obtained from all participants. The study was conducted in accordance with the Declaration of Helsinki. The rationale, design and methodology of APPROACH-IS have been published previously [33]. Patients included in the study met the following criteria: (i) diagnosis of CHD, defined as a structural abnormality of the heart or intra-thoracic great vessels that was present at birth and had actual or potential functional significance [34]; (ii) 18 years of age or older; (iii) diagnosis established before adolescence (i.e., before 10 years of age); (iv) continued follow-up at a CHD center or inclusion in a national/regional register; and (v) physical, cognitive, and language capabilities necessary to complete self-report questionnaires. Exclusion criteria were prior heart transplantation or primary pulmonary hypertension [33]. The complexity of the congenital heart disease was based on the Bethesda classification [35].

Measurements

Socio-demographic variables were patient-reported. The self-report questionnaires in APPROACH-IS were administered to eligible patients by surface mail or in clinic during an outpatient visit. The questionnaires have been validated and reliability-tested and measure PROs within different PRO domains, including health behaviour. Data regarding the participants' medical background, such as CHD diagnosis and disease complexity [35], were added to the APPROACH-IS database by a member of the local research team based on chart review [33]. The Health Behaviour Scale (HBS-CHD), including data on alcohol consumption, tobacco use and physical activity [36], was used to measure physical activity level. The instrument was translated into Chinese, French, German, Hindi, Italian, Japanese, Norwegian, Spanish, Swedish, and Tamil. The questionnaire has good to excellent content validity and responsiveness [36]. The validity across different languages has not been tested. The HBS-CHD included questions regarding extremely and moderately demanding physical activity during a 7-day week, also including sports during school hours (the latter was relevant for a minority of the studied patients). The number of hours per week spent at an activity of ≥3 METS and ≥6 METS was summarized.
Based on the current WHO recommendations on physical activity for promoting health in adults aged 18-64 (i.e., 150 min/week spent at ≥3 METS, or 75 min/week spent at ≥6 METS, or an equivalent combination of both), participants were dichotomized into two categories based on their participation in physical exercise: high physical activity level (reaching WHO recommendations) and low physical activity level (not reaching WHO recommendations).

Statistical analyses

All analyses were performed using SPSS 23 (IBM, Armonk, NY, USA). Data were assessed for normality. Differences in means were tested with Student's t-test and ratios with the chi-squared test. The null hypothesis was rejected for p-values < 0.05. The association between patient-specific variables (age, sex, educational level, employment status, marital status, functional class and disease complexity) and physical activity level was estimated through generalized linear mixed models, a form of multilevel logistic regression. We applied a two-level structure in which patients were nested within countries. All available patient characteristics, which have been used in prior APPROACH-IS reports [31], were used as fixed effects. Country was used as random effect. Generalized linear mixed models do not yield a conventionally interpretable R² statistic. Therefore, we computed a pseudo-R² using the method described by Nakagawa & Schielzeth [37]. For the physical activity level, full data on 3896 patients were available. For the predictors, missing values occurred in 0.0 to 2.2% of the subjects. Altogether, full data on patient characteristics were available for 3727 (95.7%) of the patients. Therefore, multiple imputation was not used and only patients for whom full data were available for the variables under study were included in the generalized linear mixed models.

Results

Out of 4028 participants, 3896 had data on physical activity level. 1217 (31%) reached the WHO recommendations on physical activity level. The proportion of patients reaching the recommended physical activity level varied among countries (p < 0.001), with the lowest proportions in India (10%) and Japan (11%) and the highest in Norway (53%), Switzerland (47%), and Sweden (46%). However, the variation was large, also between adjacent countries, e.g., France (19%) and Switzerland (Table 1, Fig. 1). More men than women reached the WHO recommendations on physical activity level (37% vs. 26%, p < 0.001). Patients with a high physical activity level were younger (32 vs. 36 years, p < 0.001), had less complex heart lesions (35% among patients with simple lesions vs. 26% among patients with complex lesions), and had higher educational levels (39% of those with a university degree vs. 16% of those who did not finish high school). Employment status was also associated with physical activity level: 41% of full-time students reached the WHO recommendations on physical activity level vs. 14% of those who were homemakers or retired. There was an association between self-reported limitations and low physical activity level, with 40% of those who reported no limitations meeting the WHO recommendations on physical activity level compared to 10% among those with severe limitations (Table 2).

Discussion

In the present study, we found that in a globally recruited sample, approximately one third of adults with CHD reached the WHO recommendations on physical activity to maintain or promote health.
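The dichotomization rule just described is simple enough to state as code. The following hedged Python sketch uses hypothetical argument names and assumes the usual reading of the WHO "equivalent combination," in which one vigorous minute counts as two moderate minutes; whether vigorous time is also included in the moderate total depends on the HBS-CHD scoring, and the two are treated as disjoint here:

```python
def meets_who_recommendation(moderate_min: float, vigorous_min: float) -> bool:
    """Dichotomize weekly activity per the rule described above.

    moderate_min: minutes/week at >= 3 METS (assumed exclusive of vigorous time)
    vigorous_min: minutes/week at >= 6 METS
    150 moderate minutes, 75 vigorous minutes, or an equivalent combination
    (one vigorous minute counted as two moderate minutes) meets the rule.
    """
    return moderate_min + 2.0 * vigorous_min >= 150.0

assert meets_who_recommendation(150, 0)       # moderate-only threshold
assert meets_who_recommendation(0, 75)        # vigorous-only threshold
assert meets_who_recommendation(100, 30)      # equivalent combination (160)
assert not meets_who_recommendation(120, 10)  # only 140 equivalent minutes
```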
However, large geographical variations were seen, from 10% to slightly above 50% of the population reaching the current recommendations. In a multilevel logistic regression model with geographical area as random effect, sex, age, educational level, employment status, complexity of heart lesions, and self-reported NYHA class were associated with reaching the WHO recommendations on physical activity level. This knowledge may help in detecting vulnerable patients and thereby offering specific behavioral interventions. The reasons for the large variation in physical activity level between different geographical regions are not clear. Factors such as climate, cultural variations, infrastructure and socioeconomic factors may be of importance. With a few exceptions, our studied population reached the recommendations on physical activity level to a similar extent as the reference general population in their respective countries [30,38-40]. In our study we demonstrated that men were more likely to be sufficiently active, which is in line with previous studies on adults with CHD [12,14] as well as the general population [41]. However, there are also conflicting reports on adults with CHD [42] in which gender was not associated with physical activity level. The difference in physical activity level between the sexes persisted when adjusted for geographical area. Given this, health care providers should not only ensure that activity recommendations are provided for all patients; additional efforts should also be made to educate their female CHD patients about the importance of physical activity and the potential long-term benefits. In this study, the odds of reaching the WHO recommendations on physical activity level decreased by 3% with each year of life. It is known that physical activity level decreases with age in the general population [41], and others have reported consistent results in adults with congenital heart disease [42]. A reduction in exercise capacity, which is found in the general population [43] as well as in adults with CHD [8], is a possible explanatory mechanism for a lower physical activity level with increasing age. Our data support the view that a decreasing physical activity level with increasing age, even at relatively young ages, is a global phenomenon in adults with CHD. These observations underscore the importance of addressing physical activity level in the management of older patients with CHD. We found that the complexity of CHD was associated with physical activity level. This finding contrasts with previous reports [12,42,44]. In two previous studies, a slightly different definition of complexity was applied, with two groups of complexity instead of the three used in the present investigation [35]. In our study, we showed that patients with simple and moderate lesions had a higher physical activity level than patients with lesions of severe complexity. However, we noted that the differences were modest and the point estimates were fairly similar for both groups. It may be that patients with lesions of severe complexity in our present study represented more severely limited patients compared with the previous studies. It may also be that the large sample size in this study allowed for detection of smaller differences between groups. Nevertheless, the complexity of the CHD lesion should be considered when giving advice on physical activity to patients with CHD [45,46].
The self-reported NYHA class was strongly associated with physical activity level, with almost three times higher odds of reaching the recommendations on physical activity level for patients in NYHA class I than in the higher NYHA classes (III and IV). For NYHA class II, the point estimates were in line with NYHA class I but did not reach statistical significance. Our results were in agreement with a previous study that reported higher activity levels for patients in NYHA class I [47]. It was not surprising that patients without limitations were more active than those with limitations to physical activity. For patients describing themselves as physically limited, this is a very strong indicator of having a low physical activity level. These patients are at potential risk of developing complications related to low physical activity and may thus be trapped in a vicious cycle. Patients in higher NYHA classes should be assessed carefully regarding their actual physical activity level and offered targeted advice and rehabilitation measures [45]. As in the general population [48-50], higher educational levels were associated with a higher physical activity level among patients with CHD. Educational level is likely associated with employment status, which was also associated with a high physical activity level in the current study. Both educational level and employment status are possible to modify, and caregivers can support these efforts with discussions, beginning in early adolescence, on future education, vocation and employment. Exercise training in adults with congenital heart disease has been proved safe in several trials. There are also recommendations for exercise training that can be applied on an individual basis, taking into account factors such as arrhythmia, arterial saturation and ventricular function [46].

Study limitations

The present study population only captured patients with CHD, and reference data were lacking. While a reference population was not feasible for the current study, the current methods allowed for identification of those with CHD who may be more or less likely to reach the WHO recommendations on physical activity level. Only self-reported instruments were used, which have inherent limitations regarding under- or overestimation, recall bias, and social desirability bias. On the other hand, only validated instruments were used and our sample was large, which hopefully allowed for a valid output. We did not have access to data on medication. Some common drugs in cardiovascular therapeutics, e.g., beta-blockers, may affect physical performance and thereby also potentially the physical activity level. Our sample was not population based and not randomly selected. However, the large sample of adults with CHD and the multicenter recruitment of patients hopefully allowed for valid conclusions based on the present data.

[Fig. 1: Comparison of PAL between countries. PAL in adults with CHD was measured using the Health Behaviour Scale (HBS-CHD) and the proportion reaching WHO recommendations on PAL was calculated for each country. There was a great inter-country variation in the proportion of CHD patients reaching recommended PAL. In this heat map, countries included in the study are marked in color. Shades of green denote a higher proportion of CHD patients reaching recommended PAL as compared with shades of red. CHD = congenital heart disease, PAL = physical activity level, WHO = World Health Organization.]
Conclusions

Almost 70% of adults with CHD did not reach the WHO recommended physical activity level. There was a large variation between countries in the proportion of patients that reached the recommended physical activity level, from 10% to slightly above 50%. Given the proportion of patients not reaching the recommended physical activity level, many are at potential risk of developing long-term complications related to a low physical activity level. Therefore, issues regarding physical activity level should be raised and discussed in all consultations with adults with congenital heart disease. Furthermore, the identified predictors of physical activity level may help to identify vulnerable patients and thereby allow for targeted interventions.

Funding

This work was supported by the Research Fund - KU Leuven, Leuven, Belgium (OT/11/033); by the Swedish Heart-Lung Foundation, Sweden (20130607); by the University of Gothenburg Centre for Person-centred Care, Gothenburg, Sweden; and by the Cardiac Children's Foundation, Taiwan (CCF2013_02). Furthermore, this work was endorsed by and conducted in collaboration with the International Society for Adult Congenital Heart Disease (ISACHD).

Declarations of interest

None.

[Table note: p-values represent comparisons of physical activity levels between different patient variables. n = number; PAL = physical activity level.]
Search for anomalous Wtb couplings and flavour-changing neutral currents in t-channel single top quark production in pp collisions at sqrt(s) = 7 and 8 TeV

Single top quark events produced in the t channel are used to set limits on anomalous Wtb couplings and to search for top quark flavour-changing neutral current (FCNC) interactions. The data taken with the CMS detector at the LHC in proton-proton collisions at sqrt(s) = 7 and 8 TeV correspond to integrated luminosities of 5.0 and 19.7 inverse femtobarns, respectively. The analysis is performed using events with one muon and two or three jets. A Bayesian neural network technique is used to discriminate between the signal and backgrounds, which are observed to be consistent with the standard model prediction. The 95% confidence level (CL) exclusion limits on anomalous right-handed vector, and left- and right-handed tensor Wtb couplings are measured to be |f[V]^R| < 0.16, |f[T]^L| < 0.057, and -0.049 < f[T]^R < 0.048, respectively. For the FCNC couplings kappa[tug] and kappa[tcg], the 95% CL upper limits on coupling strengths are |kappa[tug]|/Lambda < 4.1E-3 TeV-1 and |kappa[tcg]|/Lambda < 1.8E-2 TeV-1, where Lambda is the scale for new physics, and correspond to upper limits on the branching fractions of 2.0E-5 and 4.1E-4 for the decays t to ug and t to cg, respectively.

Introduction

Single top quark (t) production provides ways to investigate aspects of top quark physics that cannot be studied with tt events [1]. The theory of electroweak interactions predicts three mechanisms for producing single top quarks in hadron-hadron collisions. At leading order (LO), these are classified according to the virtuality of the W boson propagator in t-channel, s-channel, or associated tW production [2]. Single top quark production in all channels is directly related to the squared modulus of the Cabibbo-Kobayashi-Maskawa matrix element V_tb. As a consequence, it provides a direct measurement of this quantity and thereby a check of the standard model (SM). The single top quark topology also opens a window for searches for anomalous Wtb couplings relative to the SM, where the interaction vertex of the top quark with the bottom quark (b) and the W boson (the Wtb vertex) has a V-A (vector minus axial-vector) structure. Flavour-changing neutral currents (FCNC) are absent at lowest order in the SM, and are significantly suppressed through the Glashow-Iliopoulos-Maiani mechanism [3] at higher orders. Various rare decays of K, D, and B mesons, as well as the oscillations in the neutral K, D, and B meson systems, strongly constrain FCNC interactions involving the first two generations and the b quark [4]. The V-A structure of the charged current with light quarks is well established [4]. However, FCNC involving the top quark, as well as the structure of the Wtb vertex, are significantly less constrained. In the SM, the FCNC couplings of the top quark are predicted to be very small and not detectable at current experimental sensitivity. However, they can be significantly enhanced in various SM extensions, such as supersymmetry [5-7], and models with multiple Higgs boson doublets [8-10], extra quarks [11-13], or a composite top quark [14]. New vertices with top quarks are predicted, in particular, in models with light composite Higgs bosons [15,16], extra-dimension models with warped geometry [17], or holographic structures [18]. Such possibilities can be encoded in an effective field theory through higher-dimensional gauge-invariant operators [19,20].
Direct limits on top quark FCNC parameters have been established by the CDF [21], D0 [22], and ATLAS [23] Collaborations. There are two complementary strategies to search for FCNC in single top quark production. A search can be performed in the s channel for resonance production through the fusion of a gluon (g) with an up (u) or charm (c) quark, as was the case in analyses by the CDF and ATLAS Collaborations. However, as pointed out by the D0 Collaboration, the s-channel production rate is proportional to the square of the FCNC coupling parameter and is therefore expected to be small [22]. On the other hand, the t-channel cross section and its corresponding kinematic properties have been measured accurately at the LHC [24-26], with an important feature being that the t-channel signature contains a light-quark jet produced in association with the single top quark. This light-quark jet can be used to search for deviations from the SM prediction caused by FCNC in the top quark sector. This strategy was applied by the D0 Collaboration [22], as well as in our analysis. Models that have contributions from FCNC in the production of single top quarks can have sizable deviations relative to SM predictions. Processes with FCNC vertices in the decay of the top quark are negligible. In contrast, the modelling of Wtb couplings can involve anomalous Wtb interactions in both the production and the decay, because both are significantly affected by anomalous contributions. All these features are explicitly taken into account in the COMPHEP Monte Carlo (MC) generator [27]. In this paper, we present a search by the CMS experiment at the CERN LHC for anomalous Wtb couplings and FCNC interactions of the top quark through the u or c quarks and a gluon (tug or tcg vertices), by selecting muons arising from W boson decay (including through a tau lepton) from the top quarks in muon+jets events. Separation of signal and background is achieved through a Bayesian neural network (BNN) technique [28,29], performed using the Flexible Bayesian modelling package [30]. Limits on Wtb and top quark FCNC anomalous couplings are obtained from the distributions of the BNN discriminants.

The CMS detector

The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (η) [31] coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4 µs. The high-level trigger processor farm further decreases the event rate from around 100 kHz to less than 1 kHz, before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [31]. The particle-flow event algorithm [32,33] reconstructs and identifies each individual particle with an optimized combination of information from the various elements of the CMS detector.
The energy of photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energy. Jets are reconstructed offline from particle-flow candidates clustered by the anti-kT algorithm [34,35] with a size parameter of 0.5. Jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be within 5 to 10% of the true momentum over the whole transverse momentum (p_T) spectrum and detector acceptance. An offset correction is applied to jet energies to take into account the contribution from additional proton-proton interactions within the same or nearby bunch crossings (pileup). Jet energy corrections are derived from simulation, and are confirmed with in situ measurements of the energy balance in dijet and photon+jet events. Additional selection criteria are applied to each event to remove spurious jet-like features originating from isolated noise patterns in certain HCAL regions. The missing transverse momentum vector p_T^miss is defined as the projection on the plane perpendicular to the beams of the negative vector sum of the momenta of all reconstructed particles in an event. Its magnitude is referred to as E_T^miss [32].

Data and simulated events

The analysis is performed using proton-proton collisions recorded with the CMS detector in 2011 and 2012 at centre-of-mass energies of 7 and 8 TeV, respectively, corresponding to integrated luminosities of 5.0 and 19.7 fb^-1. The t-channel production of a single top quark is modelled using the COMPHEP 4.5 package [27], supplemented by an additional matching method used to simulate an effective next-to-leading-order (NLO) approach [36]. The NLO cross sections used for t-channel single top production are sigma(7 TeV) = 64.6 +2.6/-1.9 pb [37] and sigma(8 TeV) = 84.7 +3.8/-3.2 pb [38,39]. The POWHEG 1.0 NLO MC generator [40] provides an alternative model to estimate the sensitivity of the analysis to the modelling of the signal. Contributions from anomalous operators are added to the COMPHEP simulation for both the production and decay of top quarks. This takes into account the width of the top quark, spin correlations between the production and decay, and the b quark mass in the anomalous and SM contributions. The LO MADGRAPH 5.1 [41] generator is used to simulate the main background processes: top quark pair production with total cross sections of sigma(7 TeV) = 172.0 +6.5/-7.6 pb [42] and sigma(8 TeV) = 253 +13/-14 pb [43], and W boson production with total cross sections of sigma(7 TeV) = 31.3 ± 1.6 nb and sigma(8 TeV) = 36.7 ± 1.3 nb [44], for processes with up to 3 and 4 additional jets in the matrix element calculations, respectively.
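The E_T^miss definition above is purely kinematic and can be sketched directly. The following Python fragment is illustrative only; it ignores the corrections and zero-suppression effects described above and uses made-up toy inputs:

```python
import numpy as np

def missing_et(px, py):
    """E_T^miss and its azimuth from per-particle transverse momenta.

    p_T^miss is the negative vector sum, in the plane transverse to the
    beams, of the momenta of all reconstructed particles in the event;
    E_T^miss is its magnitude.
    """
    mpx, mpy = -np.sum(px), -np.sum(py)
    return float(np.hypot(mpx, mpy)), float(np.arctan2(mpy, mpx))

# Toy event with three particles (momenta in GeV): the transverse
# imbalance defines E_T^miss.
px = np.array([40.0, -25.0, -10.0])
py = np.array([5.0, 20.0, -30.0])
et_miss, phi_miss = missing_et(px, py)
print(et_miss, phi_miss)
```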
The subdominant backgrounds from Drell-Yan production in association with jets (Z/γ*+jets), corresponding to sigma(7 TeV) = 5.0 ± 0.3 nb and sigma(8 TeV) = 4.3 ± 0.2 nb [44], and from WW, WZ, and ZZ (diboson) production, corresponding to sigma(7 TeV) = 67.1 ± 1.7 pb and sigma(8 TeV) = 73.8 ± 1.9 pb [45], are modelled using LO PYTHIA 6.426 [46]. The contribution from multijet events, with one of the jets misidentified as a lepton, is estimated using a mutually exclusive data sample; the details are given in the next section. Single top quark production in the s channel, with sigma(7 TeV) = 4.6 ± 0.2 pb and sigma(8 TeV) = 5.5 ± 0.2 pb, and in the tW channel, with sigma(7 TeV) = 15.7 ± 1.2 pb and sigma(8 TeV) = 22.2 ± 1.5 pb [47], is modelled using the POWHEG generator. The PYTHIA 6.4 program is also used to simulate parton showers for the hard processes calculated with the COMPHEP, MADGRAPH, and POWHEG generators. The PDF4LHC recipe [48] is used to reweight all simulated events to the central value of the CT10 PDF [49]. The Z2Star [50,51] set of parameters is used to simulate the underlying event. Because of the importance of the W+jets background and the significant differences in the kinematic distributions, the following contributions are considered separately in the analysis: a W boson produced together with a pair of b or c quarks (W+QQ); a W boson produced in association with a c quark (W+c); W boson events that do not contain heavy quarks (W+light); and events associated with underlying events (UE) that contain heavy quarks originating from the initial parton interaction (W+QX). Different nuisance parameters for the normalization scale factors are used for these components of the complete W+jets MADGRAPH simulation. Simulated events are reweighted to reproduce the observed particle multiplicity from pileup. Small differences between the data and simulation in trigger efficiency [52,53], lepton identification and isolation [52,53], and b tagging [54] are corrected via scale factors, which are generally close to unity.

Event selection

The following signature is used to identify t-channel single top quark production candidates: exactly one isolated muon [52]; one light-flavour jet in the forward region (defined below); one b-tagged jet [54] from the b quark originating from the decay of the top quark; and an associated "soft" b jet, which is likely to fail either the p_T or η threshold (given below). The presence of a neutrino in the decay of the W boson leads to a significant amount of E_T^miss, which is used to enhance the signal. The analysis is performed using data collected with a trigger requiring at least one muon in each event. To accommodate the increasing instantaneous luminosity delivered by the LHC in 2011, different triggers were used for the various data-taking periods, with the muon p_T threshold ranging from 20 to 27 GeV. A single trigger with a muon threshold of p_T > 24 GeV was used in 2012.
The selected events are required to have: (i) at least one primary vertex reconstructed from at least four tracks, and located within 24 cm in the longitudinal direction and 2 cm in the radial direction from the centre of the detector; (ii) exactly one isolated (I µ rel < 0.12) muon [52], originating from the primary vertex, with p T > 20 (27) GeV according to the variation of the trigger p T threshold at √s = 7 TeV, and p T > 26 GeV at √s = 8 TeV, and with |η| < 2.1. The relative isolation parameter of the muon, I µ rel , is defined as the sum of the energy deposited by long-lived charged hadrons, neutral hadrons, and photons in a cone of radius ∆R = √((∆η)² + (∆φ)²) = 0.4, divided by the muon p T , where ∆η and ∆φ are the differences in pseudorapidity and azimuthal angle (in radians), respectively, between the muon and the other particle's directions. Events with additional muons or electrons are rejected using looser quality requirements of p T > 10 GeV for muons and 15 GeV for electrons, |η| < 2.5, and I µ rel < 0.2 and I e rel < 0.15, where the electron relative isolation parameter I e rel is measured similarly to that for a muon; (iii) two or three jets with p T > 30 GeV and |η| < 4.7 and, at √s = 8 TeV, the highest-p T jet (j 1 ) is required to satisfy p T (j 1 ) > 40 GeV; for events with 3 jets we require the second-highest-p T jet (j 2 ) to have p T (j 2 ) > 40 GeV; (iv) at least one b-tagged jet and at least one jet that fails the tight working-point requirement of the combined secondary vertex b tagging algorithm [54]. A tight b tagging selection corresponds to an efficiency of ≈50% for jets originating from true b quarks and a mistagging rate of ≈0.1% for other jets in the signal simulation.

Control regions containing events with 2 or 3 jets and no b-tagged jet, and events with 4 jets, 2 of which are b-tagged, are used to validate the modelling of the W+jets and tt backgrounds, respectively.

The multijet events contribute to the background when a muon from the semileptonic decay of a b or c quark, or a light charged hadron, is misidentified as an isolated muon. These background muon candidates are usually surrounded by hadrons. This feature is exploited to define a control region by demanding exactly one muon with an inverted isolation criterion of 0.35 < I µ rel < 1. The jets falling inside a cone of size ∆R = 0.5 around the selected muon are removed, and the remaining jets are subject to the criteria that define the signal. To suppress the multijet background, we use a dedicated Bayesian neural network (multijet BNN) with the following input variables, which are sensitive to multijet production: the transverse mass m T (W) of the reconstructed W boson, the azimuthal angle ∆φ(µ, p miss T ) between the muon direction and p miss T , the quantity E miss T , and the muon p T . The same set of variables is used for both the √s = 7 and 8 TeV data sets, but because of the different selection criteria, a separate BNN is trained for each set. In Fig. 1, data-to-simulation comparisons are shown for the multijet BNN discriminant and the m T (W) distributions for the √s = 8 TeV data. The predictions for the multijet BNN discriminant and m T (W) agree with the data. The normalization of the multijet background is taken from a fit to the multijet BNN distribution, and all other processes involving a W boson are normalized to their theoretical cross sections.
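A minimal sketch of the relative-isolation computation described in criterion (ii), assuming particle-flow candidates stored as dictionaries with pt/eta/phi/type fields — an invented data layout for illustration, not the CMS event format:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = math.pi - abs(math.pi - abs(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def rel_isolation(muon, pf_candidates, cone=0.4):
    """I_rel: summed pT of charged hadrons ('h'), neutral hadrons ('h0')
    and photons ('gamma') within dR < 0.4 of the muon, over the muon pT."""
    iso_sum = 0.0
    for c in pf_candidates:
        if c is muon or c["type"] not in ("h", "h0", "gamma"):
            continue
        if delta_r(muon["eta"], muon["phi"], c["eta"], c["phi"]) < cone:
            iso_sum += c["pt"]
    return iso_sum / muon["pt"]

# Selections quoted in the text: signal muons are isolated (I_rel < 0.12),
# while the multijet control region inverts this to 0.35 < I_rel < 1.
def passes_signal_isolation(i_rel):  return i_rel < 0.12
def in_multijet_control(i_rel):      return 0.35 < i_rel < 1.0
```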
To reduce the multijet background, the multijet BNN discriminant is required to have a value greater than 0.7. Using the value of the discriminant rather than a selection on m T (W) increases the signal efficiency by 10% for a similar background rejection. This requirement rejects about 90% of the multijet background, while rejecting only about 20% of the signal, as determined from simulation. The observed and predicted event yields before and after the multijet background suppression are listed in Table 1.

Signal extraction with Bayesian neural networks

Events that pass the initial selection and the multijet BNN discriminant requirement are considered in the final analysis, which requires the training of a BNN (the SM BNN) to distinguish the t-channel single top quark production process from the other SM processes. The s- and tW-channel single top quark, tt, W+jets, diboson, and Drell-Yan processes with their relative normalizations are treated as the background. Dedicated Wtb BNNs are trained to distinguish the anomalous right-handed vector ( f R V ) and left-handed ( f L T ) and right-handed ( f R T ) tensor couplings from the left-handed vector coupling ( f L V ) expected in the SM. The physical meanings of these couplings are discussed in Section 7. The FCNC processes with anomalous tcg and tug vertices are assumed to be completely independent of the SM contribution, and tcg and tug BNNs are trained to distinguish the corresponding contributions from the SM contribution. The kinematic properties of the potential tcg and tug contributions are slightly different owing to the different initial states; the discussion of these couplings appears in Section 8.

Figure 1: The distributions of the multijet BNN discriminant used for the QCD multijet background rejection (left) and the reconstructed transverse W boson mass (right) from data (points) and the predicted backgrounds from simulation (filled histograms) for √s = 8 TeV. The lower part of each plot shows the relative difference between the data and the total predicted background. The vertical bars represent the statistical uncertainties.

The input variables used by each BNN are summarised in Table 2. Their choice is based on the differences in the structure of the Feynman diagrams contributing to the signal and background processes. Distributions of four representative variables for data and simulated events are shown in Fig. 2.

Several variables in the analysis require full kinematic reconstruction of the top quark and W boson candidates. For the kinematic reconstruction of the top quark, the W boson mass constraint is applied to extract the component of the neutrino momentum along the beam direction (p z ). This leads to a quadratic equation in p z . When the equation has two real solutions, the smaller value of p z is used. For events with complex solutions, the imaginary components are eliminated by modifying E miss T such that m T (W) = M W [4]. The data-to-simulation comparisons shown in Fig. 3 demonstrate good agreement in the control regions enriched in top quark pair events (4 jets with 2 b tags) and in W+jets events (no b-tagged jets), as well as in the signal regions, as discussed in Section 4. In Fig. 3, the simulated events are normalized to the results obtained in the fit to the data.
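The W-mass constraint described above reduces to a quadratic equation in the neutrino p z . The sketch below solves it for a massless lepton; the handling of complex solutions (keeping the real part, which corresponds to forcing m T (W) = M W ) follows the procedure quoted in the text, and the example four-momentum is invented.

```python
import math

M_W = 80.4  # GeV

def neutrino_pz(lep, met_x, met_y):
    """Solve M_W^2 = (p_lep + p_nu)^2 for the neutrino pz, fixing the
    neutrino transverse momentum to the missing transverse momentum.
    lep = (px, py, pz, E) of the charged lepton (assumed massless)."""
    mu = M_W ** 2 / 2.0 + lep[0] * met_x + lep[1] * met_y
    pt2 = lep[0] ** 2 + lep[1] ** 2
    a = mu * lep[2] / pt2
    disc = a ** 2 - (lep[3] ** 2 * (met_x ** 2 + met_y ** 2) - mu ** 2) / pt2
    if disc >= 0:
        s1, s2 = a + math.sqrt(disc), a - math.sqrt(disc)
        return s1 if abs(s1) < abs(s2) else s2   # the smaller pz solution
    # Complex solutions: drop the imaginary part, i.e. effectively rescale
    # E_T^miss so that m_T(W) = M_W, as described in the text above.
    return a

lep = (40.0, 10.0, 25.0, 48.3)   # illustrative muon four-momentum (GeV)
print(neutrino_pz(lep, met_x=-30.0, met_y=5.0))
```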
Systematic uncertainties and statistical analysis

The analysis extracts the parameters of single top quark production and any signs of beyond-the-SM behaviour from the BNN discriminant distributions. It follows the same methodology for estimating the uncertainties as used in previous CMS measurements of single top quark production [58,59].

Bayesian inference is used to derive the posterior probability. The signal strength µ s is taken as the central value of the posterior probability distribution p(µ s |d) for a given data set d. This posterior probability can be obtained as the integral

p(µ s |d) = (1/π(d)) ∫ p(d|µ s , µ b , θ) π(µ s ) π(µ b ) π(θ) dµ b dθ ,

where µ b are the background yields, θ are additional nuisance parameters describing the systematic uncertainties of the analysis, π(µ s ), π(µ b ), and π(θ) are the prior probabilities of the corresponding parameters, π(d) is a normalization factor, and p(d|µ s , µ b , θ) is the probability to obtain a given d with given µ s , µ b , and θ.

Figure 2: Distributions of representative BNN input variables listed in Table 2. The lower part of each plot shows the relative difference between the data and the total predicted background. The hatched band corresponds to the total simulation uncertainty. The vertical bars represent the statistical uncertainties. Plots are for the √s = 8 TeV data set.

Table 2: Input variables for the BNNs used in the analysis. The symbol × represents the variables used for each particular BNN. The number 7 or 8 marks the variables used in just the √s = 7 or 8 TeV analysis. The symbol "tug" marks the variables used just in the training of the tug FCNC BNN. The notations "leading" and "next-to-leading" refer to the highest-p T and second-highest-p T jet, respectively. The notation "best" jet is used for the jet that gives a reconstructed mass of the top quark closest to the value of 172.5 GeV, which is used in the MC simulation. Among the listed variables are: the vector sum of the p T of the leading and the next-to-leading jet; the vector sum p T (∑ i≠i best p T (j i )) of the p T of all jets without the best jet (7 TeV only); the scalar sum of the p T of the leading and the next-to-leading jet; and the azimuthal angle between the muon and p miss T .

Uncertainties considered in the analysis are discussed next. For the variation of the background normalization, scale parameters are introduced in the statistical model, and the corresponding variations of these parameters are the same as for the SM measurement in Ref. [59]. All background processes and their normalizations are treated as statistically independent. To estimate the uncertainty in the multijet distributions, two different isolation criteria are used (0.3 < I µ rel < 0.5 and 0.5 < I µ rel < 1). Also, a comparison is made between data and events generated with the PYTHIA 6.4 simulation. The impact of the changes in the multijet template is well within the range of −50% to +100%, and this is included as a prior uncertainty in the statistical model.

To estimate the uncertainties in the detector-related jet and E miss T corrections, the four-momenta of all reconstructed jets in simulated events are scaled simultaneously in accordance with the p T - and η-dependent jet energy correction (JEC) uncertainties [60]. These changes are also propagated to E miss T . The effect of the 10% uncertainty in E miss T coming from unclustered energy deposits in the calorimeters that are not associated with jets is estimated after subtracting all the jet and lepton energies from the E miss T calculation. Parameters in the procedure to correct the jet energy resolution (JER) are varied within their uncertainties, and the procedure is repeated for all jets in the simulation [60,61]. The variations coming from the uncertainty in the b quark tagging efficiency and the mistagging rate of jets are propagated to the simulated events [54]. The uncertainties for c quark jets are assumed to be twice as large as those for b quark jets.
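A hedged sketch of how a ±1σ jet-energy-scale variation might be propagated to the jets and to E miss T , as described above. The (p T , η)-dependent uncertainty function is a made-up stand-in for the real JEC uncertainty sources.

```python
def shift_jets_and_met(jets, met, jec_unc, direction=+1):
    """Scale every jet four-momentum by its (pT, eta)-dependent fractional
    JEC uncertainty and propagate the change to the missing transverse
    momentum. jets: dicts with px, py, pz, e, pt, eta; met: dict px, py."""
    new_jets, dpx, dpy = [], 0.0, 0.0
    for j in jets:
        f = 1.0 + direction * jec_unc(j["pt"], j["eta"])
        shifted = {k: (v * f if k in ("px", "py", "pz", "e", "pt") else v)
                   for k, v in j.items()}
        dpx += shifted["px"] - j["px"]
        dpy += shifted["py"] - j["py"]
        new_jets.append(shifted)
    # MET absorbs the negative of the summed jet-momentum change
    new_met = {"px": met["px"] - dpx, "py": met["py"] - dpy}
    return new_jets, new_met

# Stand-in uncertainty: 2% in the barrel, 4% forward (invented numbers)
toy_unc = lambda pt, eta: 0.02 if abs(eta) < 1.3 else 0.04
```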
The scale factors for tagging b and c quark jets are treated as fully correlated, whereas the mistagging scale factors are varied independently. The integrated luminosity in the √s = 7 and 8 TeV data-taking periods is measured with a relative uncertainty of 2.2% [62] and 2.6% [63], respectively. In the combined fits, all experimental uncertainties, including those from the integrated luminosity, are treated as uncorrelated between the data sets. The uncertainty in the pileup modelling is estimated by using different multiplicity distributions obtained by changing the minimum-bias cross section by ±5% [64]. Trigger scale factor, muon identification, and muon isolation uncertainties are introduced in the statistical model as additional Gaussian-distributed parameters with a mean of 1 and widths of 0.2%, 0.5%, and 0.2%, respectively.

The uncertainties from additional hard-parton radiation and from the matching of the samples with different jet multiplicities are evaluated by doubling or halving the threshold of the MADGRAPH jet-matching procedure for top quark pair and W+jets production, using dedicated MADGRAPH samples generated with such shifts in the parameters [65]. The renormalization and factorization scale uncertainties are estimated using MC samples generated by doubling or halving the renormalization and factorization scales for the signal and the main background processes. Uncertainties in the parton distribution functions (PDF) are evaluated with the CT10 PDF set according to the PDF4LHC formulae for Hessian-based sets: the simulated events are reweighted accordingly, and the resulting uncertainty is about 5% on average. The uncertainty from the choice of the event generator to model the signal is estimated using pseudo-experiments, which are used to fit simulated events obtained from the COMPHEP signal sample and from the POWHEG signal sample; half of the difference between these two measurements is taken as the uncertainty (5%). Previous CMS studies [66,67] of top quark pair production showed a softer p T distribution of the top quark in the data than predicted by the NLO simulation, and a corresponding correction is applied to the simulated tt background; the small effect of this reweighting procedure (0.8%) is taken into account as an uncertainty. The uncertainty owing to the finite size of the simulated samples is taken into account through the Barlow-Beeston method [68].

The BNN discriminant distributions can be affected by different types of systematic uncertainties. Some of these only impact the overall normalization, while others change the shape of the distribution. Both types of systematic uncertainties are included in the statistical model through additional nuisance parameters. Systematic uncertainties related to the modelling of JEC, JER, b tagging and mistagging rates, E miss T , and pileup are included as nuisance parameters in the fit. The variations in these quantities lead to a total uncertainty of about 6%. Other systematic uncertainties, i.e. those related to the signal model, the renormalization and factorization scales, the matching of partons to final jets, and the choice of PDF, are handled through the pseudo-experiments to determine the difference between the varied and the nominal result. The total uncertainty from these sources is about 8%.
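To illustrate the marginalization behind the posterior integral given earlier, here is a toy one-bin version in which the background normalization and one systematic shift are integrated out by Monte Carlo. All yields and prior widths are invented; the real analysis fits full BNN discriminant distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy one-bin counting model (invented numbers): the expected yield is
# lam = mu_s * s + mu_b * b * theta, with n_obs events observed.
s, b, n_obs = 20.0, 100.0, 130

def log_like(mu_s, mu_b, theta):
    lam = mu_s * s + mu_b * b * theta
    # Poisson log-likelihood, shifted so its maximum possible value is 0
    return n_obs * np.log(lam) - lam - (n_obs * np.log(n_obs) - n_obs)

# Priors: flat in mu_s; Gaussian nuisances around 1 (assumed shapes)
mu_s_grid = np.linspace(0.0, 3.0, 301)
mu_b_draws = rng.normal(1.0, 0.10, 20000)   # background-normalization prior
theta_draws = rng.normal(1.0, 0.05, 20000)  # systematic-shift prior

# Marginalize nuisances by Monte Carlo: p(mu_s|d) ~ E_[mu_b,theta] L(...)
posterior = np.array([np.exp(log_like(m, mu_b_draws, theta_draws)).mean()
                      for m in mu_s_grid])
posterior /= posterior.sum() * (mu_s_grid[1] - mu_s_grid[0])  # pi(d) role

# Sanity check: lam ~ 130 implies mu_s near (130 - 100) / 20 = 1.5
print("posterior mode of mu_s:", mu_s_grid[np.argmax(posterior)])
```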
We include uncertainties in the statistical model by following the same approach as described in previous CMS measurements of the single top quark t-channel cross section [24,58,59]. The SM BNN discriminant distributions after the statistical analysis and evaluation of all the uncertainties are shown in Fig. 4 for the two data sets.

As the √s = 7 and 8 TeV data sets have similar selection criteria, reweighting, and uncertainties, and the underlying physics is expected to be similar as well, the data sets are combined by performing a joint fit. The previously described systematic uncertainties and methods of statistical analysis are used in the combination. In the statistical model, the experimental uncertainties are treated as uncorrelated between the data sets, whereas the theoretical uncertainties (from the choice of generator, scales, and PDF) are treated as fully correlated between the data sets. The sensitivity of the separate √s = 7 and 8 TeV analyses and of their combination is limited by the corresponding systematic uncertainties; therefore, the combined statistical model does not necessarily provide the tightest exclusion limits. In order to validate the analysis strategy and the statistical treatment of the uncertainties, we measure the cross sections in the SM t channel, and find values and uncertainties in agreement with previous measurements [58,59] and with the prediction of the SM.

Modelling the structure of the anomalous Wtb vertex

The t-channel single top quark production is sensitive to possible deviations from the SM prediction for the Wtb vertex. The most general, lowest-dimension, CP-conserving Lagrangian for the Wtb vertex has the following form [69,70]:

L = −(g/√2) b̄ γ^µ ( f L V P L + f R V P R ) t W^−_µ − (g/√2) b̄ (iσ^µν q ν /M W ) ( f L T P L + f R T P R ) t W^−_µ + h.c. ,

where P L,R = (1 ∓ γ 5 )/2, σ µν = i(γ µ γ ν − γ ν γ µ )/2, q ν is the four-momentum of the W boson, g is the coupling constant of the weak interaction, the form factor f L V ( f R V ) represents the left-handed (right-handed) vector coupling, and f L T ( f R T ) represents the left-handed (right-handed) tensor coupling. The SM has the following set of coupling values: f L V = V tb ≈ 1 and f R V = f L T = f R T = 0.

The same analysis scheme proposed in Refs. [71,72] is used to look for possible deviations from the SM, by postulating the presence of a left-handed vector coupling. Two of the four couplings are considered simultaneously in the two-dimensional scenarios ( f L V , f R V ) and ( f L V , f L T ), where the couplings are allowed to vary from 0 to +∞, and ( f L V , f R T ), with variation bounds from −∞ to +∞. Then, considering three couplings simultaneously leads to the three-dimensional scenarios ( f L V , f R V , f R T ) and ( f L V , f L T , f R T ). In these scenarios, the couplings have the same variation range of (0; +∞) for f R V and f L T , and (−∞; +∞) for f L V and f R T .

In the presence of anomalous Wtb couplings in both the production and decay of the top quark, the kinematic and angular distributions are significantly affected relative to their SM expectations. It is therefore important to correctly model the kinematics of such processes. Following the method of Ref. [73], event samples with left-handed (SM) interactions and with purely right-handed vector (left-handed tensor) interactions are generated to model the corresponding anomalous contributions. Simulated event samples with the left-handed interaction in the production and the right-handed vector (left-handed tensor) interaction in the decay of the top quark, and vice versa, are also generated. The scenarios with f R T couplings are more complicated because of the presence of cross terms, such as ( f L V · f R T ), in the squared matrix element describing the single top quark production process. Special event samples are generated for such scenarios.
Owing to the presence of cross terms with odd powers of the f L V and f R T couplings, the analysis is sensitive to negative values of these couplings. The details of the simulation approach are provided in Ref. [73]. All signal samples are simulated at NLO precision following Ref. [36].

Exclusion limits on anomalous couplings

Following the strategy described in Section 5, in addition to the SM BNN, the anomalous Wtb BNNs are trained to distinguish possible right-handed vector or left-/right-handed tensor structures from the SM left-handed vector structure in the t-channel single top quark events. The sets of variables chosen for the different Wtb BNNs are listed in Table 2. The first two-dimensional scenario considers a possible mixture of the f L V and (anomalous) f R V couplings; the corresponding Wtb BNN ( f L V , f R V ) is trained to distinguish the contributions of these two couplings. For the ( f L V , f L T ) scenario, another Wtb BNN is trained to separate the left-handed-vector-interacting single top quark SM events from events with a left-handed tensor operator in the Wtb vertex. For the third scenario, ( f L V , f R T ), the last Wtb BNN is trained to separate left-handed-vector-interacting single top quark SM events from events with a right-handed tensor operator in the Wtb vertex. Figure 5 shows the comparison between the data and simulation for the outputs of the Wtb BNNs.

The SM BNN and one of the Wtb BNN discriminants are used as inputs in the simultaneous fit of the two BNN discriminants. One-dimensional constraints on the anomalous parameters are obtained by integrating over the other anomalous parameter in the corresponding scenario. The results of the fits are presented in the form of two-dimensional contours with 68% and 95% CL exclusion limits and, as given in Table 3, as one-dimensional constraints in the different scenarios. Both the one- and two-dimensional limits are measured for the individual data sets and for their combination. The combined observed and expected two-dimensional contours are shown in Fig. 6.

As the interference terms between the f L T and f R T or the f R V and f R T couplings are negligible [20], it is possible to consider three-dimensional scenarios with a simultaneous variation of three couplings. The three-dimensional statistical analysis is performed using the SM BNN, the Wtb BNN ( f L V , f R T ), and either the Wtb BNN ( f L V , f L T ) or the Wtb BNN ( f L V , f R V ) discriminants to obtain the excluded regions at 68% and 95% CL for f L T and f R T , again by integrating over the other anomalous couplings. The combined √s = 7 and 8 TeV results in the three-dimensional simultaneous fit of the f L V , f L T , and f R T couplings are presented in Fig. 7 (left) in the form of observed and expected 68% and 95% CL exclusion contours on the ( f L T , f R T ) couplings. The corresponding results for the f L V , f R V , and f R T couplings are shown in Fig. 7 (right) as two-dimensional exclusion limits in the ( f R V , f R T ) plane. The measured exclusion limits from the three-dimensional fits with the combined data sets are f L V > 0.98, | f R V | < 0.16, and | f L T | < 0.057. For f R T we take the more conservative limits from the three-dimensional fits of −0.049 < f R T < 0.048 as our measurement.

Figure 5: Comparison between data and simulation of the outputs of the Wtb BNN discriminants. The plots on the left (right) correspond to √s = 7 (8) TeV. The Wtb BNNs are trained to separate SM left-handed interactions from one of the anomalous interactions. In each plot, the expected distribution with the corresponding anomalous coupling set to 1.0 is shown by the solid curve. The lower part of each plot shows the relative difference between the data and the total predicted background. The hatched band corresponds to the total simulation uncertainty. The vertical bars represent the statistical uncertainties.
These limits are much more restrictive than those obtained by the D0 Collaboration in a direct search [72], and agree well with the recent results obtained by the ATLAS [74] and CMS [75,76] experiments from measurements of the W boson helicity fractions.

Table 3: One-dimensional exclusion limits obtained in the different two- and three-dimensional fit scenarios. The first column shows the couplings allowed to vary in the fit, with the remaining couplings set to their SM values. The observed (expected) 95% CL limits for each of the two data sets and their combination are given in the following columns.

Theoretical introduction

The FCNC tcg and tug interactions can be written in a model-independent form with the following effective Lagrangian [1]:

L = ∑ q=u,c (κ tqg /Λ) g s t̄ σ^µν (λ a /2) q G^a µν + h.c. ,

where Λ is the scale of new physics (≈1 TeV), q refers to either the u or c quark, κ tqg defines the strength of the FCNC interactions in the tug or tcg vertices, λ a /2 are the generators of the SU(3) colour gauge group, g s is the coupling constant of the strong interaction, and G a µν is the gluon field strength tensor. The Lagrangian is assumed to be symmetric with respect to the left and right projectors. Single top quark production through FCNC interactions contains 48 subprocesses for both the tug and tcg channels, and the cross section is proportional to (κ tqg /Λ)². Representative Feynman diagrams for the FCNC processes are shown in Fig. 8. Since the influence of the FCNC parameters on the total top quark width is negligible for the allowed region of FCNC parameters, the SM value of the top quark width is used in this analysis. The COMPHEP generator is used to simulate the signal tug and tcg processes. The FCNC samples are normalized to the NLO cross sections using a K factor of 1.6 for higher-order QCD corrections [77].

Figure 8: Representative Feynman diagrams for the FCNC processes.

Exclusion limits on tug and tcg anomalous couplings

FCNC processes are kinematically different from any SM process; it is therefore reasonable to train a new BNN to discriminate between FCNC production as the signal and the SM background, including the t-channel single top quark production. Since either a tug or a tcg FCNC signal may be present, two BNNs are trained, one for each of the couplings. The variable choices for these BNNs, shown in Table 2, are motivated by an analysis of the Feynman diagrams of the FCNC and SM processes. The comparison of the neural network outputs for the data and the model is shown in Fig. 9. Output histograms of the tug and tcg FCNC BNN discriminants for the SM backgrounds are used as input to the analysis.

Figure 9: The FCNC BNN discriminant distributions when the BNN is trained to distinguish t → ug (upper) or t → cg (lower) processes as signal from the SM processes as background. The results from data are shown as points and the predicted distributions from the background simulations by the filled histograms. The plots on the left (right) correspond to the √s = 7 (8) TeV data. The solid and dashed lines give the expected distributions for t → ug and t → cg, respectively, assuming a coupling of |κ tug |/Λ = 0.04 (0.06) and |κ tcg |/Λ = 0.08 (0.12) TeV −1 on the left (right) plots. The lower part of each plot shows the relative difference between the data and the total predicted background. The hatched band corresponds to the total simulation uncertainty. The vertical bars represent the statistical uncertainties.
The posterior probability distributions of |κ tug |/Λ and |κ tcg |/Λ are obtained by fitting these histograms. The combined √s = 7 and 8 TeV observed and expected exclusion limits at 68% and 95% CL on the anomalous FCNC parameters, in the form of two-dimensional contours, are shown in Fig. 10. The two-dimensional contours reflect the possible simultaneous presence of the two FCNC parameters. Individual exclusion limits on |κ tug |/Λ are obtained by integrating over |κ tcg |/Λ, and vice versa. These individual limits can be used to calculate the upper limits on the branching fractions B(t → ug) and B(t → cg) [78]. The observed and expected exclusion limits at 95% CL on the FCNC couplings and the corresponding branching fractions are given in Table 4. These limits are significantly better than those obtained by the D0 [22] and CDF [21] experiments.

Figure 10: Combined √s = 7 and 8 TeV observed and expected limits at the 68% and 95% CL on the |κ tug |/Λ and |κ tcg |/Λ couplings.

Summary

A direct search for model-independent anomalous operators in the Wtb vertex and FCNC couplings has been performed using single top quark t-channel production in data collected by the CMS experiment in pp collisions at √s = 7 and 8 TeV. Different possible anomalous contributions are investigated. The observed event rates are consistent with the SM prediction, and exclusion limits are extracted at 95% CL. The combined limits in the three-dimensional scenarios on possible Wtb anomalous couplings are f L V > 0.98 for the left-handed vector coupling, | f R V | < 0.16 for the right-handed vector coupling, | f L T | < 0.057 for the left-handed tensor coupling, and −0.049 < f R T < 0.048 for the right-handed tensor coupling. For FCNC couplings of the gluon to top and up quarks (tug) or top and charm quarks (tcg), the 95% CL exclusion limits on the coupling strengths are |κ tug |/Λ < 4.1 × 10^−3 TeV −1 and |κ tcg |/Λ < 1.8 × 10^−2 TeV −1 or, in terms of branching fractions, B(t → ug) < 2.0 × 10^−5 and B(t → cg) < 4.1 × 10^−4 .
2017-02-10T19:27:50.000Z
2016-10-11T00:00:00.000
{ "year": 2016, "sha1": "a021f36b655026b146da2d28dc88f88e01a8c8d8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1007/jhep02(2017)028", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a021f36b655026b146da2d28dc88f88e01a8c8d8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235284540
pes2o/s2orc
v3-fos-license
Effects of the concentration of various bacillus family bacteria on the strength and durability properties of concrete: A Review

It is almost impossible to avoid the development of cracks on the surface of concrete, even when the best quality materials and workmanship are used. These cracks may result in the degradation of concrete in terms of its strength and durability. It therefore becomes of utmost importance to seal these cracks, so that the devastating effects of the degrading agents that may enter the concrete through them can be reduced or eliminated. This paper presents a review of the effects of the concentration of various bacillus family bacteria on the strength and durability properties of concrete. Bacteria with concentrations from 10^0 CFU to 10^8 CFU have been considered in this review. This paper also outlines the self-healing ability of different bacillus family bacteria. Self-healing refers to the sealing of cracks in concrete through the addition of bacillus family bacteria to the concrete mix; the review examines the impacts of bacillus family bacteria on the strength and durability properties of the concrete.

Introduction

One of the most widely used materials in the construction industry is concrete [1]. Concrete is strong in compression but weak in tension, so cracks develop on its surface [2]. Micro-cracking on the surface of concrete is unavoidable; it can increase permeability, reduce the life span, and weaken the concrete [3][4]. This problem of cracking is also very prominent in cement mortars [5]-[7]. Early-age cracking of concrete structures is caused by humidity and temperature fluctuations, creating space that allows harmful degrading agents to enter the concrete, which tend to gradually degrade its strength and durability over time [8]. Many research works have tried to overcome the development of cracks on the surface of concrete, so as to reduce or eliminate their effects, using various approaches [5], [9]-[12]. Autogenous healing is one such approach, in which the formation of calcium carbonate takes place and helps in the sealing of the cracks [13]. Positive results of calcite-precipitation-based methods have led to several studies on the use of bacteria in concrete. A variety of microorganisms have been used to improve the strength and durability properties of cement concrete [14]. Bacillus family bacteria are the most useful microbial mineral approach to improve the mechanical and durability properties of concrete, producing calcite precipitation that fills the voids and pores present in the concrete [14]-[17]. E. Madhavi (2016) has shown that the mechanical properties of fly ash and GGBS based concrete improve when bacteria are used at a concentration of 10^6 cells/ml [18]. Sanjay et al. (2016) studied the adaptability of the bacteria in a nutrient broth medium and a urea medium and the improvement in strength in both media. It was reported that the bacteria are more adaptable in the nutrient broth medium than in the urea medium, and also produced better results in the nutrient broth medium [19]. In this study, an investigation has been carried out on the effect of bacillus family bacteria such as subtilis, megaterium, sporosarcina pasteurii and sphaericus on the durability and mechanical properties of concrete. Bacterial concentrations of up to 10^8 cells/ml have been considered in the current study.
Methods

Biological agents can serve as an excellent means of developing the self-healing property in concrete. A number of bacteria, such as bacillus subtilis, bacillus sphaericus, bacillus pasteurii and bacillus megaterium, have been utilized by researchers to study their effect on the mechanical and durability properties of concrete. The best self-healing system is one that is triggered immediately after sensing the development of cracks in the concrete. The cracks developed in existing structures can also be easily repaired or retrofitted by the use of the self-healing technique. Superficial microcracks can be easily and efficiently healed with the use of autogenous healing techniques. A calcium carbonate layer forms in the cracks on addition of bacteria to the concrete, which confirms the calcite precipitation [21], [22]. The high alkalinity of the concrete is maintained by the addition of bacteria [23], [24]. The bond among the ingredients of the concrete, such as cement gel, sand and aggregates, becomes stronger due to the bacteria-induced calcite precipitation [25]. Additionally, the durability of the concrete also improves due to the filling of the voids and micro-cracks. This bacterial precipitation can effectively fill cracks of width less than 0.2 mm; if the width becomes more than 0.2 mm, the self-healing mechanism becomes ineffective in filling the microcracks. In bacteria-induced concrete, the development of a crack of any size triggers the action of the bacteria from their stage of hibernation. As soon as the calcite precipitation starts, the cracks get filled by calcium carbonate, thereby causing the self-healing of the concrete. Once the cracks are filled, the bacteria go back to their hibernation stage. This process is repeated every time a crack develops in the concrete. The bacteria thus act as long-lasting healers, and this mechanism of healing is called Microbiologically Induced Calcium Carbonate Precipitation (MICP).

Effect of bacteria on concrete properties

The bacillus family bacteria have been utilized for enhancing the characteristic properties of concrete in a number of ways. The bacteria have been used in various concentrations up to a maximum of 10^8 colony forming units (CFU). In most cases, the mechanical and durability properties of the concrete were enhanced on introducing bacteria into the concrete; however, the properties showed variations with the varying bacteria concentration.

Effect of Bacillus subtilis bacteria

Bacillus subtilis bacteria have been utilized for improving the properties and durability of concrete, at concentrations of up to 10^8 cells/ml. Shradha Jena et al. [27] used various concentrations of bacillus subtilis bacteria, at the rate of 10^0, 10^2, 10^3, 10^4, 10^5 and 10^6 cells/ml, in the concrete. The results showed that strengths increased up to 10^5 cells/ml. The increases in compressive strength were 27.27%, 29.59% and 32% as compared to standard concrete at 7, 14 and 28 days, and strength decreased beyond the concentration of 10^5 cells/ml. The use of bacillus subtilis bacteria in concrete shows an enhancement in strength due to the formation of calcite precipitation: the calcium carbonate precipitate fills the voids and heals the cracks present in the concrete.
Chereddy Sonali Sri [30] used bacillus subtilis bacteria at a concentration of 10^7 cells/ml and checked the durability properties with the help of chloride penetration, water absorption, carbonation depth and water penetration depth tests. The test results showed reductions in the chloride penetration, water absorption, carbonation depth and water penetration depth of 20.5%, 13.1%, 27.2% and 44.3%, respectively, with the use of bacteria in concrete as compared to controlled concrete. Wasim Khaliq [2] used bacillus subtilis bacteria at a concentration of 3 × 10^8 cells/ml and checked the effect of the bacteria on the strength and durability properties of the concrete. The maximum increase in compressive strength was found to be 12% in bacterial concrete as compared to controlled concrete; the increase in strength was attributed to the self-healing of the concrete by the bacteria.

Effect of Bacillus megaterium bacteria

The authors of [32] used bacillus megaterium bacteria at concentrations of 10^3, 10^5 and 10^7 cells/ml and checked the strength properties of the concrete. It was found that the maximum increase in strength was obtained at the 10^5 cells/ml concentration; after that, the concentration was varied from 10 × 10^5 cells/ml to 50 × 10^5 cells/ml and the variation in strength was observed. The best results were obtained at a concentration of 30 × 10^5 cells/ml. V. Nagarajan et al. [33] used bacillus megaterium bacteria at concentrations of 10^3, 10^5 and 10^7 cells/ml and checked the strength of the bacterial concrete using compressive strength, flexural strength and split tensile strength tests. The results indicate that the flexural, compressive and split tensile strengths were enhanced up to the 10^5 cells/ml concentration, after which the strengths started decreasing.

Bacillus sp.

N. Chahal and R. Siddique (2013) [34] presented a method of self-healing in concrete. In this study, a sporosarcina pasteurii bacterial strain was used. The conventional cement was partially replaced by a combination of 10%, 20% and 30% fly ash and 5% and 10% silica fume, and the mixes were prepared with bacterial solutions of 10^3, 10^5 and 10^7 cells/ml concentrations. The experiment was supplemented by tests of porosity, water absorption, compressive strength and chloride permeability over a period of 91 days. Finally, they concluded that the presence of S. pasteurii has several positive impacts on concrete: it enhances the compressive strength and reduces the permeability and porosity when used in combination with silica fume and fly ash concrete. It was also found that newly formed cracks are sealed due to the presence of the bacteria. P. Ingle et al. (2017) [35] analyzed bio-concrete in various aspects. They also performed qualitative tests such as the permeability test and the compressive strength test. The species of bacteria used in this experiment was B. pasteurii. The concentrations of the bacterial solution used for the production of concrete were 10^3, 10^5 and 10^7 cells/ml. The observations showed an enhancement in the strength and durability characteristics of the rice husk concrete, namely a rise in compressive strength and a reduction in permeability and porosity. N. Balam [36] presented tests regarding water permeability, rapid chloride penetration, compressive strength and water absorption. They used bacteria-based lightweight aggregate concrete (LWAC) with a 10^6 cells/ml bacterial culture of the S. pasteurii strain.
As a result, they found that the chloride permeability and water absorption were reduced by 21.1% and 10.2%, respectively. They also observed an enhancement in compressive strength of 20.1% in the experimental sample compared with the analogous properties of the control ones. They added that in the LWAC sample with bacteria, the porosity is lower and the matrix more condensed when compared to the concrete mixed with bacteria alone. Navneet Chahal et al. [37] used fly ash as a partial replacement (10%, 20%, 30%) of the cement together with bacillus sp. bacteria at concentrations of 0, 10^3, 10^5 and 10^7 cells/ml, and checked the strength and durability with the help of rapid chloride penetration, compressive strength and water absorption tests. The test results show an enhancement in the compressive strength up to 10^5 cells/ml, after which the compressive strength was reduced, so 10^5 cells/ml was the optimum concentration of bacteria. The optimum reduction in water absorption was also obtained at the concentration of 10^5 cells/ml.

Bacillus sphaericus

B. Madhu Sudana Reddy et al. [38] used bacillus sphaericus bacteria at concentrations of 10^0, 10^3, 10^5 and 10^7 cells/ml and found that the 10^5 cells/ml concentration gave the best strength results. This is due to calcium carbonate precipitation and better packing of the pores in the concrete. The strength test results on M20 grade concrete, at bacteria concentrations of 0, 10^3, 10^5 and 10^7 cells/ml, were 27 MPa, 32.4 MPa, 35.5 MPa and 31.6 MPa, respectively, in compression; 3.71 MPa, 3.92 MPa, 5.21 MPa and 4.52 MPa, respectively, in flexure; and 3.78 MPa, 4.23 MPa, 4.52 MPa and 4.28 MPa, respectively, in tension. Prince Akash Nagar et al. [39] used bacillus sphaericus bacteria at a concentration of 10^8 cells/ml together with calcined clay (10%, 15% and 20%) as a replacement of cement, and checked the strength and durability properties of the concrete using compressive strength and water absorption tests. The test results showed that the strength decreased with increasing proportions of calcined clay: the compressive strength decreased by approximately 3.88%, 7.38% and 17.17%, and the water absorption decreased by 16.84%, 8.42% and 5.41%, at replacements of 10%, 15% and 20%, respectively, with calcined clay as compared to standard concrete. When the bacillus sphaericus bacteria were then mixed into the calcined clay concrete, the compressive strength increased by 21%, 24% and 24%, and the water absorption decreased by 30.97%, 14.52% and 11.14%, at 10%, 15% and 20% replacement of cement with calcined clay. Jagannathan et al. (2018) [40] used bacillus sphaericus and pasteurii bacteria with fly ash as a replacement of cement at the rate of 10%, 20% and 30% by weight of cement, and found that the mechanical properties of the concrete were enhanced: the compressive, flexural and split tensile strengths increased by 10.82%, 5.25% and 29%, respectively, as compared to standard concrete. Gavimath et al. [41] used bacillus sphaericus bacteria as a self-healing agent in the concrete mix. The compressive strength was found to be enhanced by 31.05%, 45.98% and 31.80% after 3, 7 and 28 days, respectively. The split tensile strength was also enhanced, by 14.10%, 13.90% and 19.01% at 3, 7 and 28 days of testing, respectively.
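A quick arithmetic check of the M20 strengths quoted above for B. sphaericus [38], confirming that the percentage gains over the control mix peak at the 10^5 cells/ml concentration:

```python
# Values taken from the quoted M20 results for B. sphaericus [38]
concentrations = ["10^3", "10^5", "10^7"]                 # cells/ml
control = {"compressive": 27.0, "flexural": 3.71, "split tensile": 3.78}
bacterial = {
    "compressive":   [32.4, 35.5, 31.6],   # MPa
    "flexural":      [3.92, 5.21, 4.52],   # MPa
    "split tensile": [4.23, 4.52, 4.28],   # MPa
}

for prop, vals in bacterial.items():
    base = control[prop]
    gains = [100.0 * (v - base) / base for v in vals]
    print(prop, [f"{c}: {g:+.1f}%" for c, g in zip(concentrations, gains)])
# Compression peaks at +31.5% for 10^5 cells/ml, consistent with the
# review's conclusion that 10^5 cells/ml is the optimum concentration.
```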
Miscellaneous bacteria

Nasrin Karimi et al. (2019) [42] used different types of fiber with bacillus subtilis bacteria at a concentration of 10^7 cells/ml. The bacterial culture was used as a replacement of water in the concrete mix and in a surface treatment gel. For the bacteria-incorporated samples, the highest decrements of 63.88%, 27.51% and 39.84% were recorded for water absorption, carbonation depth and chloride ion penetration, respectively. Thus, the use of micro-organisms in the concrete mix and its curing in the calcium lactate-urea solution have shown the capability of filling the voids in concrete, which tends to decrease the porosity in fiber-reinforced samples. H. Ling et al. [43] presented the effect of bacteria on the self-healing of cracks in concrete using a chloride test. In addition, a method of electromigration was used, which resulted in accelerated transmission of chloride. They witnessed the self-healing capability of the bacteria, which can be used in healing the cracks. Bacteria can suspend the chloride transmission in cracks and hence have a protective effect on reinforced concrete elements. The experiment also reflects a promising way of applying the microbial self-healing technique in real constructions. R. Siddique et al. [44] presented the effect of bio-concrete at various concentrations. The study was conducted by replacing cement in the concrete with 5%, 10% and 15% silica fume, at a bacterial concentration of 10^5 cells/ml. Around 10-12% improvement in compressive strength over a period of 28 days was observed. The authors concluded that the properties of bacterial concrete are much better when compared to those of concrete without bacteria; the results include a reduction in chloride permeability, water absorption and porosity due to the presence of bacteria in the concrete. Siddique et al. [45] observed the effects on bacterial concrete when treated with 10%, 20% and 30% of CBFD in place of cement. The concentration of the bacterial solution was taken as 10^5 cells/ml. The tests regarding the properties of the bacterial concrete were performed after 28 and 56 days of curing. The strength and durability properties of the concrete were found to be enhanced by the use of the bacterial solution and CBFD as a replacement of cement.

Table 1 below presents the effect of the different bacteria concentrations on the strength and water absorption properties of the concrete. The reported observations include: the water absorption of bacterial concrete decreased as compared to standard concrete; the compressive strength increased up to 10^5 cells/ml and after that it started decreasing; permeability and water absorption were reduced when the bacteria were used in combination with silica fume and fly ash concrete, with the maximum reduction at 10^5 cells/ml; and the water absorption of the concrete was reduced by using the bacteria in the concrete.

Conclusions

Based on the investigation of the effect of bacillus family bacteria such as subtilis, sporosarcina pasteurii, megaterium and sphaericus on the mechanical properties and durability of concrete, with concentrations ranging from 10^0 cells/ml to 10^8 cells/ml, the following conclusions can be made:

1. The use of bacteria in concrete reduces water permeability by the filling of voids.
2. The mechanical properties of concrete are enhanced with the use of bacteria in concrete.
3. The durability of concrete is increased with the help of the self-healing process of bacteria in concrete.
4. Bacteria are also helpful in repairing cracks present in concrete.
5. A bacterial concentration of 10^5 cells/ml gave the most significant results as compared to other bacterial concentrations.
2021-06-03T00:15:37.485Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b99dc438f95604b2aad42e0a8884b06611efe9f6", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/1116/1/012162/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b99dc438f95604b2aad42e0a8884b06611efe9f6", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
5634689
pes2o/s2orc
v3-fos-license
Minority Games With Applications to Distributed Decision Making and Control in Wireless Networks

Fifth generation (5G) dense small cell networks (SCNs) are expected to meet the thousand-fold mobile traffic challenge within the next few years. When developing solution schemes for resource allocation problems in such networks, conventional centralized control is no longer viable due to the excessive computational complexity and large signaling overhead caused by the large number of users and network nodes in such a network. Instead, distributed resource allocation (or decision making) methods with low complexity would be desirable to make the network self-organizing and autonomous. The minority game (MG) has recently gained the attention of the research community as a tool to model and solve distributed resource allocation problems. The main objective of this article is to study the applicability of the MG to solve the distributed decision making problems in future wireless networks. We present the fundamental theoretical aspects of the basic MG, some variants of the MG, and the notion of equilibrium. We also study the current state-of-the-art on the applications of MGs in communication networks. Furthermore, we describe an example application of the MG to SCNs, where the problem of computation offloading by users in an SCN is modeled and analyzed using the MG.

I. INTRODUCTION

The next generation of wireless networks, also known as 5G, is expected to face a thousand-fold growth in mobile data traffic due to increased smart device usage, the proliferation of data-hungry applications and the pervasive connectivity requirement. Since the existing traditional macro cellular networks are not designed to cope with such large data traffic, network densification using small cell base stations (SBSs) and the implementation of small cell networks (SCNs) are proposed. In particular, SCNs are expected to improve the efficiency of the utilization of radio resources, including energy and spectrum. Although SCNs might become the key enablers of 5G, they impose some challenges that need to be addressed. For instance, the typical wireless resource allocation problems become more complicated in a dense network. Since the SCNs are expected to be hyper-dense and multi-tier, they must be self-organizing and self-healing, to avoid the high complexity and fault intolerance of central management. In other words, network management tasks such as resource allocation are preferably performed in a distributed manner. The unavailability of global and precise channel state information in dense networks also needs to be addressed. Moreover, feedback and signaling overhead should be minimized. In order to address these challenges, in this paper we focus on the minority game (MG) and its potential applications to solve the distributed decision making/control problems that arise in 5G SCNs.

The minority game has recently gained attention of the research community as a tool to model congestion problems encountered in wireless networks. In simple terms, in an MG, an odd number of players select between two alternatives in the hope of being in the minority, because only the minority group receives a pay-off. Thus, an MG is able to model a congested system with a large number of agents competing for shared resources, where pair-wise communication between agents does not take place. This finds application in 5G SCNs that accommodate a large number of users, where congestion can occur due to the scarcity of the (radio and/or computational) resources.
In such scenarios, users would naturally prefer to select the less-crowded option. Moreover, the MG involves self-organized decision making with minimal external information available to the agents, as desired in dense SCNs. When developing distributed solution schemes for wireless resource allocation problems, conventional distributed approaches (e.g. those based on traditional game theory) are not always applicable, since such models become increasingly complex for systems with a large number of agents. In essence, many such game models require pairwise interactions among the agents. In contrast, the agents' interactions in the MG exhibit mean-field-like behavior [1]; i.e., an individual agent interacts with the aggregate behavior of all other agents. This makes the MG a promising technique, especially since mean-field-based models are widely used as fitting tools to model the large systems that are often studied in distributed resource allocation problems.

The rest of the article is organized as follows. In Section II, the basic concepts, some variants, and equilibrium notions of MG are discussed. Section III describes the state-of-the-art applications of minority games in communication networks and outlines potential future applications and research directions. In Section IV, an example application of MG for distributed computational offloading is presented, before the article is concluded in Section V.

A. Basics of a Minority Game

The concept of MG stems from the El Farol bar problem [2], and was initially formulated and presented in [3]. In the most basic setting of such a game, an odd number of players choose between two actions while competing to be in the minority group through selecting the less popular action, since only the minority receives a reward. After each round of play, all players are informed of the winning action, which is then used as history data by the players to improve the decision making in the upcoming rounds.

Let us denote the two actions by 0 and 1. Moreover, the action of player i at time t is denoted by a i (t). The number of players (N) is required to be an odd number to avoid ties. Each player has a given set of decision making strategies that help her select future actions. A strategy predicts the winning action of the next round based on the previous m winning actions, with m being the size of the memory, also known as the brain size. In other words, a strategy is essentially a mapping of the m-bit-long history string (µ(t)) to an action. An example strategy table for an agent is given in Table I, where the agent has two strategies S 1 and S 2 . Since there are two actions to select from, it is clear that the strategy space consists of 2^(2^m) strategies in total, which becomes very large even for small m. Thus, the reduced strategy space (RSS) is introduced to make the strategy space remarkably smaller without any significant impact on the dynamics of the MG. The RSS is formulated by choosing 2^m strategy pairs so that, in each pair, one strategy is anti-correlated to the other. In other words, the predictions given by one strategy are the exact opposites of the predictions given by the other strategy [4]. An example of two anti-correlated strategies is shown in Table I. Thus, the RSS consists of 2^(m+1) strategies in total, which is much smaller than the size 2^(2^m) of the universal strategy space.
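A minimal sketch of the strategy representation just described: a strategy is a lookup table over the 2^m possible histories, and the RSS pairs each strategy with its anti-correlated opposite (illustrative code, with m = 3):

```python
import random

M = 3                      # memory (brain size)
HISTORY_STATES = 2 ** M    # number of possible m-bit histories

def random_strategy():
    """A strategy maps each m-bit history to an action in {0, 1}:
    a lookup table with 2^m entries, so 2^(2^m) strategies exist."""
    return tuple(random.randint(0, 1) for _ in range(HISTORY_STATES))

def anti_correlated(strategy):
    """The RSS pairs each strategy with its bitwise opposite."""
    return tuple(1 - a for a in strategy)

history = 0b101            # the last M winning actions packed into an int

s1 = random_strategy()
s2 = anti_correlated(s1)
print("prediction of S1:", s1[history])
print("prediction of S2:", s2[history])   # always the opposite of S1
```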
At the outset of the game, each agent randomly draws S strategies from the strategy space, which remain fixed for each player throughout the game. There is no a priori best strategy. Intuitively, if such a strategy existed, all agents would use it and therefore lose due to the minority rule, which contradicts the initial assumption. As the game is played iteratively, each player evaluates her own strategies as follows: the strategies that make accurate predictions about the winning action are given a point, and the poorly performing strategies are penalized. In other words, strategies are reinforced as they predict the winning action over a number of plays. Note that all strategies are scored after each round, regardless of whether they are used by the agent or not. Thus, the score of each strategy is updated after each round of play according to its performance, and the players use the strategy with the largest accumulated score at each round. Each player's objective is to maximize her utility over time as she plays the game repeatedly. In MG, the players often compete for a limited resource without communicating with each other. Consequently, since players do not have any knowledge about other players' decisions, the decision making becomes almost autonomous [1].

B. Properties of a Minority Game

The properties of a minority game are described by the following parameters and behaviors:

• Attendance: One of the most important parameters of an MG is the collective sum of the actions of all players at a given time t, known as the attendance, A(t).

• Volatility: The attendance value never settles but fluctuates around the mean attendance (i.e. the cut-off value) [1]. The fluctuation around the mean attendance is known as the volatility, σ. Volatility is an inverse measure of the system's performance and hence the term σ²/N corresponds to an inverse global efficiency. When the fluctuations are smaller, the size of the minority, and thus the number of winners, is larger. Hence, smaller volatility corresponds to higher user satisfaction levels along with better resource utilization. It is known that the volatility depends on the ratio 2^m/N, which is commonly referred to as the training parameter or control parameter (let α = 2^m/N) [1][4][5]. (An example follows in Section IV, in particular in Fig. 2; a simulation sketch is also given at the end of this subsection.)

• Phase transition: From the variation of the global efficiency w.r.t. α (see also Fig. 2), it can be seen that the game is divided into two phases by the minimum value of α (denoted by α*), namely the crowded phase and the uncrowded phase. The MG is said to be in the crowded phase when α < α*. This is because, for smaller m, the number of strategies, 2^(2^m), is much smaller than the number of agents N, so many agents could be using the same strategy, leading them to make the same decision. This creates a herding effect, causing the MG to enter the crowded phase. Once α > α*, the m values are large enough to make the strategy space larger than the number of agents N, so that the probability of any two agents using identical strategies diminishes, thus making the MG enter the uncrowded phase. Note that α* corresponds to the minimum volatility, indicating the system's ability to self-organize into a state where the number of satisfied agents and the resource utilization are maximized. Moreover, it is shown that the performance of the MG surpasses that of the random choice game (where all agents choose each action with a probability of 0.5) for a certain range of α values. This is referred to as the better-than-random regime.

• Predictability: This is an important physical property of MG. It measures the information content in the previous set of attendance values that is available to agents. Predictability is denoted by H, where H = 0 corresponds to the situation in which the game outcome is unpredictable. Moreover, the predictability is the parameter that characterizes the two phases of the MG: H = 0 for α < α* and H > 0 for α > α*. This implies that during the crowded phase the game outcome is unpredictable, and when the MG enters the uncrowded phase, the game outcome becomes more predictable [4].
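The scoring rule and the volatility-versus-α behavior described above can be reproduced with a few lines of simulation. The sketch below implements the basic MG with the ±1 scoring convention; the parameter values (N = 101, S = 2) are chosen for illustration only.

```python
import random
from statistics import pvariance

def play_mg(n_agents=101, m=3, s=2, rounds=2000, seed=1):
    """Basic MG: each agent holds S random strategies, plays the one with
    the highest score, and the attendance A(t) of action 1 is recorded."""
    random.seed(seed)
    n_hist = 2 ** m
    strat = [[[random.randint(0, 1) for _ in range(n_hist)]
              for _ in range(s)] for _ in range(n_agents)]
    score = [[0] * s for _ in range(n_agents)]
    history = random.randrange(n_hist)
    attendance = []
    for _ in range(rounds):
        acts = [strat[i][max(range(s), key=lambda k: score[i][k])][history]
                for i in range(n_agents)]
        a = sum(acts)                          # attendance of action 1
        winner = 1 if a < n_agents / 2 else 0  # minority wins (50% cut-off)
        for i in range(n_agents):              # score *all* strategies
            for k in range(s):
                score[i][k] += 1 if strat[i][k][history] == winner else -1
        history = ((history << 1) | winner) % n_hist   # keep last m bits
        attendance.append(a)
    return attendance

# Volatility sigma^2/N as a rough probe of the phase structure:
for m in (2, 4, 6, 8):
    att = play_mg(m=m)
    alpha = 2 ** m / 101
    print(f"m={m}  alpha={alpha:.2f}  sigma^2/N={pvariance(att) / 101:.2f}")
```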
C. Equilibrium Notions for an MG

In this section, we provide a brief introduction to the notion of equilibria of the MG. The reader is encouraged to look further (e.g. [6], [7]) for an in-depth tutorial. Assuming the number of agents is an odd number equal to N, an MG is in an equilibrium if the two alternatives are selected by (N − 1)/2 and (N + 1)/2 agents, respectively. Then, no agent would gain by unilaterally deviating from its state since, if any of the agents in the majority group did so, the groups would switch, and thus the state of the deviating agent would not improve. In an MG stage game, three types of Nash equilibria (NE) are applicable. Note that the NE correspond to the local minima of the volatility values [7].

• Pure strategy Nash equilibria: If there are N agents playing the MG and (N − 1)/2 of them choose one alternative with probability = 1 while the other (N + 1)/2 agents select the other alternative with probability = 1, the system is said to be in a pure strategy NE. There are N such NEs. These NEs are considered the globally optimal states [6].

• Symmetric mixed strategy Nash equilibria: There exists only a single symmetric mixed strategy NE of the MG. It corresponds to the so-called random choice game where all agents choose between each of the two alternatives with a probability of 0.5 [6].

• Asymmetric mixed strategy Nash equilibria: If (N − 1)/2 agents select one alternative with probability = 1, another (N − 1)/2 agents select the other alternative with probability = 1, and the remaining agent selects an alternative with an arbitrary mixed probability, the MG stage game is said to be in an asymmetric mixed strategy NE. There can be an infinite number of such NEs [6].
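The deviation argument for the pure-strategy equilibria can be checked mechanically; the sketch below verifies that the (N − 1)/2 versus (N + 1)/2 split is a NE of the stage game, while the all-majority profile is not.

```python
def payoff(action, attendance_of_ones, n):
    """+1 if the agent's action is in the minority, else 0."""
    minority = 1 if attendance_of_ones < n / 2 else 0
    return 1 if action == minority else 0

def is_pure_ne(actions):
    """No agent should gain by unilaterally flipping its action."""
    n, ones = len(actions), sum(actions)
    for a in actions:
        here = payoff(a, ones, n)
        flipped = 1 - a
        there = payoff(flipped, ones - a + flipped, n)
        if there > here:
            return False
    return True

n = 7
split = [1] * ((n - 1) // 2) + [0] * ((n + 1) // 2)
print(is_pure_ne(split))    # True: a pure-strategy NE of the stage game
print(is_pure_ne([1] * n))  # False: everyone loses and any deviator wins
```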
Reference [6] includes a rigorous analysis of the solution of the MG, in which the aforementioned techniques are employed and complete statistical characterizations of the stationary state are obtained. First, the authors use the multi-population replicator dynamics technique to obtain evolutionarily stable NEs of the stage game. Then a generalized version of the repeated MG model is analyzed, in which the agents use exponential learning to adapt their strategies. For this scenario, the analysis considers two different types of agents, namely naive and sophisticated. The authors show that for the repeated MG with naive agents the stationary state is not a NE, whereas for systems with sophisticated agents and exponential learning the system converges to a NE.

E. Variants of the Minority Game

In this section, we briefly introduce a few variations of the MG beyond the basic form described above. Comprehensive discussions can be found in [4], [5] and [8], among many others.

1) MG with arbitrary cut-off: A generalized version of the basic MG, referred to as the MG with arbitrary cut-offs, is introduced in [5]. In such games, the minority rule is defined at an arbitrary cut-off value (φ) rather than the 50% cut-off used in the seminal MG. In [5], the authors show how the behavior of the MG changes when the cut-off value is varied. In brief, the attendance values fluctuate around the new cut-off, exhibiting the adaptation of the population. Furthermore, the analysis shows that when the cut-off φ is decreased below N/2, the brain size yielding the minimum volatility also decreases. This variant is particularly useful for modeling resource allocation problems in which the comfort value (the cut-off) of the capacity of a particular resource is something other than 50%. The MG model for the computation offloading problem presented in Section IV of this article is one such example.

2) Multiple-choice MG (simplex game): This variant is introduced in [8] as a direct generalization of the basic MG in which every agent may select among K different choices (K > 2). Thus, a simplex game is defined by the set of N players, the set of K choices, and the m-bit history of winning actions. As in the seminal MG, a strategy is a mapping of the history data to one of the K choices, and each player is given a set of S strategies. Moreover, the strategy space of the simplex game is associated with probability values p_is, which indicate the satisfaction of the i-th agent with her s-th strategy. Each player's choice is referred to as a bid, and the sum of the choices of all agents is called the aggregate bid. Similar to the attendance in the basic MG, the aggregate bid carries the information about the number of agents selecting a given choice and determines the pay-off each user receives. As the game is played iteratively, after the strategies are scored, the probability values p_is are updated using the exponential learning method. Thus, although the players start out naive, they become sophisticated as the game evolves; unlike the basic MG, the simplex game therefore exhibits evolutionary behavior. In [8], it is shown that, compared to an MG with few options, a game with a large action set improves the overall system performance, resulting in higher resource utilization.
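The exponential learning rule referred to above is, in its common form, a softmax over the accumulated strategy scores. The sketch below shows the update for an N × S score matrix; the learning-rate parameter gamma and its value are our own assumptions, not taken from [6] or [8].

```python
import numpy as np

def exponential_learning(scores, gamma=0.1):
    """Map accumulated scores U_is to strategy probabilities p_is via a
    numerically stabilized softmax: p_is proportional to exp(gamma * U_is)."""
    shifted = gamma * (scores - scores.max(axis=1, keepdims=True))
    w = np.exp(shifted)
    return w / w.sum(axis=1, keepdims=True)

# Example: 3 agents with 2 strategies each.
p = exponential_learning(np.array([[4.0, 1.0], [0.0, 0.0], [-2.0, 5.0]]))
print(p)  # each row sums to 1; higher-scoring strategies get more weight
```

With larger gamma the agents commit more strongly to their high-scoring strategies; in the limit of very large gamma this recovers the deterministic best-strategy rule of the basic MG.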
3) Evolutionary MG (genetic model): In this version of the MG, unlike the basic case, each agent holds a single strategy. Agent i chooses the action predicted by that strategy with some probability p_i, referred to as the agent's gene value, and selects the opposite action with probability 1 − p_i. At each play, a point of +1 (or −1) is assigned to each agent in the minority (or majority). As the game evolves, if an agent's accumulated score falls below a certain threshold, a new gene value is drawn (known as mutation of the gene value) [4].

4) Grand canonical MG (GCMG): In this type of MG, the number of players participating in the game can vary, since players are free to be active or inactive in any round. More precisely, in a GCMG agents score their strategies as usual, and if the highest strategy score is below a certain threshold, the agent abstains from playing that round. An inactive agent re-enters the game when participation becomes profitable. Consequently, the attendance is calculated over active players only [4].

III. MG MODELS IN COMMUNICATION NETWORKS: STATE-OF-THE-ART AND FUTURE POTENTIAL APPLICATIONS

In this section, we first provide a brief summary of the state-of-the-art in applying the MG to problems that arise in communication networks. Future research directions and open problems are discussed as well.

A. MG Models in Communication Networks: State-of-the-Art

1) Interference management: In [9], the authors investigate distributed interference management in Cognitive Radio (CR) networks using a novel MG-driven approach. They propose a decentralized transmission control policy for secondary users, who share the spectrum with primary users and thereby cause interference. In this work, secondary users play an MG, selecting between two options: transmit or do not transmit. The winning group is determined by the interference experienced by the primary user. For instance, if the majority transmits, the interference power measured at the primary receiver exceeds the threshold, so the minority who do not transmit are the winners, and vice versa; the minority always wins. At each round of play, the primary receiver announces the winning group by sending a control bit to the secondary transmitters.

2) Wireless resource allocation and opportunistic spectrum access: An example of wireless channel allocation using the MG can be found in [10], where an MG-based mechanism for energy-efficient spectrum sensing in cognitive radio networks (CRNs) is presented. The authors emphasize how the MG, thanks to its self-organizing nature, is well suited to model such problems, achieving cooperation and coordination gains without incurring a large signaling cost in a CRN. In the applied MG model, the agents are the secondary users, who choose between sensing and not sensing in the process of detecting an idle channel. Two distributed learning algorithms are then developed, which the agents apply to converge to equilibrium states characterized by pure and mixed strategy NEs. In [11], a multiple-choice MG (simplex game) is used to model the resource allocation problem in heterogeneous networks, where a large number of non-cooperative users compete for limited radio resources. The existence of a correlated equilibrium is proved. Moreover, the authors compare the equilibria with the optimal states using the concept of the price of anarchy.
3) Coordination in delay tolerant networks: In [12], an MG-based model is applied to coordinate relay activation in delay tolerant networks in order to guarantee efficient resource consumption. In the MG model, the relays act as the players, deciding whether to transmit (participate in relaying) or not to transmit (not participate in relaying). The authors develop a stochastic learning algorithm that converges to a desired equilibrium solution.

B. Potential Future Applications and Open Problems

As discussed in the previous section, most of the existing work focuses on applying MG models in cognitive radio networks. Applications of the MG to 5G SCNs, however, remain unexplored. In what follows, we discuss some possible applications as well as open theoretical issues.

1) Computation offloading in small cell networks: With the emergence of new mobile applications, it has become common for small user devices to have computationally intensive tasks to perform, such as image processing. However, due to limited computational capability and limited battery capacity, user devices are not always capable of performing the desired task, or doing so may be inefficient. This gives rise to the idea of offloading such tasks to a remote server (e.g., the cloud), which typically has much higher computational capability than local devices; this idea is referred to as computation offloading. Computation offloading is expected to save the energy spent on local execution, thereby extending the battery life of end user devices. Despite its great benefits, computation offloading also poses certain challenges. These include large communication costs, in terms of energy and latency, caused by the long distance between users and cloud servers, which are typically located outside the local network. Moreover, especially in dense SCNs, excessive back-haul traffic may arise from the large number of offloading requests sent to the cloud. In such cases, offloading to nearby SBSs (known as mobile-edge offloading) can be a better alternative for users. At the same time, utilizing SBS resources for computation offloading is an efficient way to exploit the idle resources of the widely available SBSs, especially in dense networks [13]. The applicability of the MG to the computation offloading problem is well justified: it is essentially a dynamic resource allocation problem in which users, non-cooperatively and selfishly, try to utilize the limited computational resources of the SBS. Moreover, users can neither communicate with each other nor observe each other's actions. While a centralized approach to such a problem may be very inefficient, an approach based on the MG model is distributed and low-cost. Besides, in an MG model, the players eventually exhibit cooperative behavior as a collective, despite being individually selfish. In Section IV, we present a novel MG-based model to address the computation offloading problem in dense SCNs.

2) Transmission mode selection: Device-to-Device (D2D) communication is considered a building block of 5G networks. In a D2D network underlaying an SCN, users have two modes of transmission: (i) direct communication without using the core network infrastructure, and (ii) communication with the aid of the base station, as in regular cellular networks.
Clearly, the transmission mode selection problem can be modeled as an MG, where users are the agents and the two options correspond to transmission via the D2D mode or the cellular mode. Given limited resources, the reward of each mode (for instance, the throughput) then depends on the number of users selecting that mode. Thus, the cut-off value that defines the minority depends on factors such as the number of interferers, the number of available direct channels, etc.

3) Multiple-choice games: The current state-of-the-art mostly uses the basic MG, which limits the agents to selecting between only two alternatives. Nonetheless, in many practical resource allocation problems the decision is made among multiple choices. Examples include channel selection or computation offloading problems, where a channel or a computational server is selected from many potential options.

4) Evolutionary variations of the MG: In the basic MG, an agent's set of strategies does not evolve: the strategies stay fixed throughout the iterations and are only scored in each play so that the agents can learn the strategy that works best for them. Hence, the basic iterated MG cannot be classified as an evolutionary game, which makes it inapplicable to problems where users should be able to alter their given strategies. Complex dynamic systems, on the other hand, require the agents not only to learn the best strategy but also to adjust their strategies as the game advances. To accomplish this, modified versions of the seminal MG, such as the evolutionary MG (EMG), can be used; other options include MG models that use learning methods such as exponential learning to adjust the strategies. The current state-of-the-art contains very few applications of such evolutionary variations of the MG, so this can be noted as a potential research direction.

5) MGs for players with heterogeneity: From a practical point of view, SCN users are very likely to be heterogeneous and to have diverse QoS requirements (e.g., in terms of delay, rate and energy efficiency). However, in the basic MG every player is assumed to have similar capabilities and to use identical information and learning methods. Thus, generalizing the basic MG model to include such heterogeneities among users is another potential line of research.

IV. AN MG-BASED MODEL FOR COMPUTATION OFFLOADING IN DENSE SCNS

A. System Model and Assumptions

We use a minority game with an arbitrary cut-off. Consider an SBS serving N users that are homogeneous with respect to both their computational capability and the task potentially to be offloaded. Each computation offloading period t is considered one round of play of the MG. All users participate as players of the game and individually decide whether to offload the task to the local SBS or to execute it locally using their own resources. Users select one of these two options simultaneously within each offloading period and have no information about the other users' actions. Note that users naturally prefer to offload to the local SBS rather than execute the tasks locally, provided that the local SBS does not become crowded with offloading requests. The reason is as follows: analogous to the original bar problem [2], where customers prefer going to the bar over staying home if the bar is uncrowded, we assume that by offloading to an uncrowded SBS, users experience lower latency. The SBS supports all of the computation offloading requests it receives, completing all tasks in a TDMA manner.
(This can be done using virtual parallel processing, where the time slot given to each task is small enough to assume that all tasks are performed simultaneously.) Therefore, if the number of offloading requests exceeds a certain threshold, the latency experienced by users may increase, making local computation the preferred option. Note that, for the sake of simplicity, we assume the amount of energy required for local computation is approximately equal to the transmission energy required for offloading; thus we omit the energy parameter in this simplified model.

The problem described above is a distributed resource allocation problem in which we analyze how to optimally utilize the computational resources of the local SBS while the latency remains below a specific threshold. As is conventional, we assume that the local SBS has a fixed computational capability. In each round, users have the two options of offloading or computing locally. The number of offloading requests that the SBS can handle is an arbitrary cut-off value denoted by φ; this offloading threshold defines the minority rule of the game. Note that φ remains unknown to the users throughout the game. For every user, L_th is the maximum tolerable latency, which is the latency experienced if the computation is performed locally. The cut-off value φ is then defined such that, when the number of offloading users approaches φ, the latency for offloading users reaches the threshold L_th. Thus, for a user to benefit from offloading, the number of offloading users should not exceed φ. As a result, being in the population minority (defined by φ) is always desirable. After each round of play, one of the following outcomes occurs:

• If the minority chooses to offload and the majority chooses to compute locally: the minority receives a reward, since offloading yields lower latency than local computation when the local SBS is uncrowded.

• If the minority chooses to compute locally and the majority chooses to offload: the number of offloading requests exceeds the threshold φ, so the SBS becomes too crowded. Consequently, the latency for the offloading users exceeds the allowable latency threshold. Thus the minority wins and receives a reward.

B. MG Model

1) Attendance: Conventionally in the MG, the winning choice is announced to all users after each round of play, so that users can use this information to score their strategies. Accordingly, in our model, after each round of play the SBS broadcasts the winning group by sending a one-bit control signal b(t), set to 1 if n(t) < φ (the offloading users win) and to 0 otherwise, where n(t) is the number of offloading users at offloading period t. In accordance with MG terminology, we refer to n(t) as the attendance. Given the control information, users evaluate their strategies to improve their decision making in the next round of play.

2) Reward: The reward of each winning user is defined based on the computation latency experienced by the user. Note that transmission and propagation delays are considered negligible compared to the computation latency. Thus the reward depends on the number of other users who select the same option, be it offloading or local computation.
Let:

• L(t) = latency experienced by an offloading user at offloading period t,
• C_b = computation capability of the SBS (in CPU cycles per unit time),
• C_u = computation capability of the local user device (in CPU cycles per unit time),
• M = number of CPU cycles required to complete the task.

The latency experienced by an offloading user is then L(t) = n(t) · M/C_b, and the latency experienced by a locally computing user is L_th = M/C_u. Setting L(t) = L_th gives n(t) = φ, hence φ = C_b/C_u. If L(t) ≥ L_th (equivalently, n(t) ≥ φ), the offloading users (majority) lose and the locally computing users (minority) win. In contrast, when L(t) < L_th (i.e., n(t) < φ), the offloading users (minority) win and the locally computing users (majority) lose. Let U_o(t) and U_l(t) be the utilities a user receives when offloading and when computing locally, respectively; both are defined in terms of the latency the user experiences under the corresponding option.

3) Distributed Learning Algorithm: To solve the designed MG, we use the reinforcement technique [3][4]. For comparison purposes, the MG-based method is compared with a random selection scenario.

Algorithm 1 Distributed learning algorithm to solve the offloading MG [3]
Initialization: Each user i randomly draws S strategies.
for t = 2 : T do
  Each user i selects the action a_i(t) predicted by its best strategy.
  The SBS broadcasts the control information b(t).
  for s = 1 : S do
    Each user i updates the score of strategy s according to whether s predicted the winning action.
  end for
  Each user i selects its best strategy s_i(t) as the argmax of the accumulated strategy scores.
end for

It is shown in [6] that using this basic MG model results in naive behavior of the agents, since they do not account for their market impact, which makes the system unable to attain a NE. In this work, we simply use the self-organizing capability of the basic MG around the cut-off, even with its naive agents. Future work can explore more general MG models that attain a NE with sophisticated agents who account for their market impact.

C. Simulation Results and Discussion

For the numerical analysis, we consider an SBS serving N = 31 users. The task that the users have to perform is assumed to require M = 10 megacycles of CPU. The CPU capacity of each user device is C_u = 0.5 GHz, and the SBS allocates C_b = 10 GHz of CPU capacity to serve the users' offloading requests. Hence the system's cut-off value is φ = C_b/C_u = 20: if the attendance is below 20, the offloading users win, and vice versa. We simulate the system for different brain sizes m to observe the system behavior. For each m value, 32 runs are carried out, each time with users randomly drawing a new set of strategies (S = 2). In each of the 32 runs, T = 10000 offloading periods (rounds of the MG) are executed. For comparison, we also implement the random choice game, in which users select one of the two actions (offload or not) with equal probability in each round. The variation of attendance over time for different brain sizes m is shown in Fig. 1. From the figure it is clear that the number of offloading users always fluctuates near the cut-off value when users play an MG. This implies that the system self-organizes into a state where the number of users who experience a latency below the threshold is near its maximum, thereby maintaining optimal SBS utilization. This is interesting because the users are given no prior information about the exact cut-off value of the system. As mentioned earlier, the fluctuation of the attendance (i.e., its standard deviation) is known in the MG literature as the volatility.
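Before turning to the volatility results, here is a compact, runnable sketch of the model and of Algorithm 1 as just described. Since the exact score-update expression is not spelled out here, the code assumes a ±1 score increment and the b(t) = 1 encoding for an offloading-side win; the function name and seed are likewise illustrative.

```python
import numpy as np

def run_offloading_mg(N=31, m=3, S=2, T=10_000,
                      M=10e6, C_u=0.5e9, C_b=10e9, seed=0):
    """Offloading MG with cut-off phi = C_b / C_u. Actions: 1 = offload,
    0 = compute locally. Returns the attendance series n(t)."""
    rng = np.random.default_rng(seed)
    phi = C_b / C_u                    # 20 for the default parameters
    P = 2 ** m                         # number of possible m-bit histories
    strategies = rng.integers(0, 2, size=(N, S, P))
    scores = np.zeros((N, S))
    mu = 0                             # current m-bit history as an integer
    attendance = np.empty(T, dtype=int)
    for t in range(T):
        best = scores.argmax(axis=1)
        actions = strategies[np.arange(N), best, mu]   # a_i(t)
        n = int(actions.sum())                         # attendance n(t)
        attendance[t] = n
        b = 1 if n < phi else 0        # control bit: 1 => offloaders won
        # Score every strategy (used or not) against the winning action b.
        scores += np.where(strategies[:, :, mu] == b, 1, -1)
        mu = ((mu << 1) | b) % P       # slide the winning bit into history
    return attendance

att = run_offloading_mg()
M, C_u, C_b = 10e6, 0.5e9, 10e9
L, L_th = att * M / C_b, M / C_u       # L(t) = n(t)*M/C_b, L_th = M/C_u
print(f"mean attendance = {att.mean():.1f} (cut-off {C_b / C_u:.0f}); "
      f"rounds with L(t) < L_th: {(L < L_th).mean():.1%}")
```

With these defaults the attendance should fluctuate near the cut-off of 20, mirroring the behavior reported in Fig. 1.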
It is clear that the amount of fluctuation differs for different m values, as explained with Fig. 2 below. Note that the fluctuations correspond to the amount of cooperation in the MG: they measure how many users could additionally have offloaded when the attendance is below the cut-off, or how many agents should not have selected the offloading option when the attendance exceeds the cut-off. From the attendance figures it can be seen that, even though the cut-off value is not advertised to the agents, the population adapts to the cut-off value of the system. The reason for this behavior is the agents' adaptation to the environment they collectively create. Fig. 2 shows the variation of the volatility with the brain size, consistent with the behavior reported in [5], described in the following. As discussed, the volatility corresponds to the fluctuations of the attendance and serves as an established measure of system performance. Lower volatility values mean that the fluctuations around the cut-off decrease; this corresponds to a larger minority, hence a larger number of winners, and thus better performance. Accordingly, lower volatility corresponds to better resource utilization and higher user satisfaction. From Fig. 2, for almost all values of m, the volatility is lower than that of the random choice game. Hence one can conclude that resource utilization improves when the MG-based offloading method is used. This shows the self-organizing nature of the MG, where agents coordinate to reduce the fluctuations in the absence of any communication or information other than the history data. It can also be seen that the minimum of the average volatility occurs at m = 3, where the phase transition from the crowded phase to the uncrowded phase takes place.

To investigate the improvement in the latency experienced by the users, the average utility is shown in Fig. 3. For the MG, the utility achieved by individual users is clearly better than in the random choice game. However, the average utility received by a user applying MG-based offloading is still lower than in the optimal situation, where the number of offloading users is always equal to φ − 1 (here 19). This is the price of the lack of coordination between agents and of the use of minimal external information. Fig. 4 depicts the influence of the brain size on the average utility achieved per user. As expected, for larger volatility values the achieved utility is substantially smaller; roughly speaking, Fig. 4 is approximately an inverse of the volatility figure (Fig. 2). Thus, once more, we conclude that the volatility is indeed an inverse performance measure for the MG-based system.

In Fig. 5 we illustrate the benefit of MG-based offloading. When the MG-based offloading mechanism is used, the number of users who experience latency below the threshold fluctuates near its maximum value (φ − 1 = 19 in this example), compared to the two baseline cases in which all users simply offload or all compute the task locally. In the latter two cases, no user achieves a latency below the threshold: with our model, if all users offload, the latency experienced by each user is 31 milliseconds, and if all users compute locally, the latency is 20 milliseconds; since the threshold latency is 20 milliseconds, neither baseline allows users to experience a latency below the threshold. Consequently, the MG-based approach keeps the number of users with latency below the threshold near its maximum.
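Reusing run_offloading_mg from the sketch above, the volatility-versus-brain-size trend behind Fig. 2 can be probed as follows; the number of runs and the seeds are illustrative, not the paper's exact experimental setup.

```python
import numpy as np

N, T, RUNS = 31, 10_000, 8

# Average volatility per brain size, versus the random-choice baseline.
for m in range(1, 9):
    var = np.mean([run_offloading_mg(N=N, m=m, T=T, seed=s).var()
                   for s in range(RUNS)])
    print(f"m = {m}: sigma^2/N = {var / N:.3f}")

# Random-choice baseline: each user offloads with probability 0.5.
rng = np.random.default_rng(0)
baseline = rng.binomial(N, 0.5, size=T).var() / N
print(f"random choice: sigma^2/N = {baseline:.3f}")
```

A dip in σ²/N at an intermediate m, below the binomial baseline of about 0.25, corresponds to the better-than-random regime and the phase transition discussed in Section II-B.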
V. CONCLUSION

We have presented the basics of minority game models and their applications in communication networks, and outlined potential future applications and open research issues. As a potential application of minority games in 5G small cell networks, we have investigated the distributed mobile computation offloading problem and presented some preliminary results. In mobile-edge computation offloading, users typically have several resources to offload to, including the local device, an SBS, an MBS, D2D peers, or the cloud [13]; to model such cases, the multiple-choice MG (simplex game) can be used. Also, in practice, users can have a variety of devices with varying computational and battery capacities (e.g., smartphones, tablets, laptops), so MG models that incorporate different user types can be employed to model such scenarios. Apart from the conventional method, different learning techniques, such as biased [14] and adaptive [15] strategies, can be adopted to achieve better performance.
2016-10-07T03:34:07.000Z
2016-10-07T00:00:00.000
{ "year": 2016, "sha1": "e2da76dd6e4a74f18b03dfcd7d0ecec14be7d472", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1610.02131", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c37a8d25a5b77ba191a20a3225bd55caee824642", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
203399252
pes2o/s2orc
v3-fos-license
Search for histopathological characteristics of inflammatory juvenile conjunctival nevus in conjunctival nevi related to age: Analysis of 33 cases

Purpose: Conjunctival nevi in young individuals can correspond to the entity named Inflammatory Juvenile Conjunctival Nevus (IJCN), presenting clinically as a rapidly growing lesion and showing, on histopathological study, an inflammatory infiltrate surrounding the lesion. All these findings can suggest a diagnosis of malignancy. Prompted by a case of IJCN diagnosed in our Pathology department, we realized that this entity is rarely reported in the literature and that its histopathological diagnostic criteria are not well defined. The aim of our study is to compare the histopathological characteristics of conjunctival nevi in patients aged thirty years or less with those in patients above 30 years, looking for the findings described in IJCN.

Methods: All the excisional specimens of resected conjunctival nevi in a tertiary hospital from 2000 to 2018 were retrieved from the Pathology department archives. Demographic data were recorded, and histopathological variables (histological type of nevus, lymphocytic infiltration, eosinophilic infiltration, presence of lymphoid follicles, stromal nevomelanocytic component, intraepithelial nevomelanocytic component, epithelial inclusions, quantity of goblet cells in epithelial inclusions, cellular atypia, mitoses and maturation of the lesion) were evaluated by three independent observers. Statistical analysis was performed comparing the two age groups.

Results: The study found a significant predominance of lymphocytic and eosinophilic infiltration in the group of patients aged thirty years or less with respect to the older group. The percentage of the stromal component of the lesion was larger in patients over thirty years than in the younger group. There was no correlation between epithelial inclusions, maturation or cytological atypia and the age groups.

Conclusion: We found some histopathological differences in conjunctival nevi related to young age, some of them coincident with those described in IJCN, which could lead to a misleading histopathological diagnosis. However, we did not find significant age-related differences in many of the histopathological findings described in IJCN. Larger series with a greater number of cases would be of interest to characterize this lesion more precisely.

Inflammatory juvenile conjunctival nevus (IJCN) is a benign lesion, located most often on the juxtalimbal conjunctiva, that occurs in children and adolescents (average age at surgery 11-12 years). In children it is usually a lightly pigmented or amelanotic lesion that becomes pigmented at puberty or during pregnancy [4]. Clinically, it may grow rapidly or increase its pigmentation, becoming suspicious for malignancy; in these cases, excision of the lesion is recommended. It has been suggested that the growth of the lesions may be due to inflammatory infiltration and cystic degeneration. IJCN is associated with systemic allergy, allergic conjunctivitis and vernal conjunctivitis [5,6]. Levi-Schaffer et al. studied this issue and suggested an association between IJCN and allergic inflammation. They described an increased presence of nerve growth factor (NGF), eosinophils and mast cells in IJCN and demonstrated higher production of NGF by lesion-associated fibroblasts than by normal fibroblasts; through NGF, these fibroblasts modulate eosinophil properties [6,7].
Histologically, most IJCN are described as compound nevi (97%) with intraepithelial and subepithelial melanocytic nests and solid or cystic epithelial inclusions, the latter more frequently cystic, with PAS-positive goblet cells. They show a remarkable stromal inflammatory infiltrate with lymphocytes, plasma cells and eosinophils [4]. Some authors describe a ''reverse'' subepithelial maturation in this entity, with melanocytes displaying greater nuclear and cytoplasmic size in depth than in the junctional component, a pattern of confluent growth in the junctional component, and a certain degree of atypia [8]. Sometimes the prominent inflammatory component produces a distortion of the architecture and an apparent cytological atypia. IJCN is described as a benign entity with some ''atypical'' features that, in cutaneous melanocytic lesions, can be associated with melanoma [4]. Classically, it corresponds to an enlarging lesion of the bulbar conjunctiva; young age and a cystic component are indicators of a benign lesion.

The differential diagnosis of IJCN includes malignant conjunctival melanoma, lymphoma and primary acquired melanosis. The most important clinical and histological differential diagnosis is conjunctival malignant melanoma, whose clinical presentation can be similar to that of IJCN. However, cysts are rarely seen in melanomas and, unlike malignant melanoma, IJCN shows no marked cytological atypia or mitotic activity in the stromal component [3]. The lymphocytic inflammatory infiltrate may suggest the diagnosis of a conjunctival lymphoma, but lymphoma is rare in the young population, and the lymphocytic population in IJCN corresponds to an admixture of B and T cells [4].

We received an excisional biopsy of a conjunctival melanocytic lesion in a 28-year-old man, with a diagnosis compatible with IJCN. It is an uncommon lesion and there are few case series reported in the literature. The aim of this study is to describe the features of conjunctival nevi received in our Pathology department (Corporació Sanitaria Parc Taulí Hospital, Sabadell, Spain) in the last eighteen years, and to compare the histopathological characteristics of young patients (aged thirty or less) with those of patients aged over thirty.

Materials and Methods

This is a retrospective observational study that analyzes the histopathological characteristics of conjunctival nevi. Two age groups of patients were compared: children/young adults up to 30 years old, and adults over thirty years old. All patients with a diagnosis of conjunctival nevus in the database of Corporació Sanitaria Parc Taulí Hospital, Sabadell (Spain), excised between 2000 and 2018, were included in the study. The formaldehyde-fixed, paraffin-embedded specimens were sectioned at 3 μm thickness. Hematoxylin-eosin-stained slides were reviewed by three independent observers (two pathologists and a pathology trainee). The definitive value for each item was the one determined by the majority of the observers. Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS) software. Descriptive statistics were presented as percentages and frequencies. The chi-square test (χ²) was applied to compare the categorical values of each histopathological variable between the two age groups.

Results

Thirty-three cases of conjunctival nevus were identified, 32 of them evaluable histologically, while the remaining case was considered insufficient for evaluation.
The series included 17 cases of patients aged thirty years or less (range: 5-30 years old) and 15 cases of patients over thirty years old (range: 31-71 years old). The histopathological features of each age group are summarized in Table 2.

There are 17 cases in the group of patients aged thirty years or less with histopathological characteristics of conjunctival nevus. Of these, 15 cases corresponded to compound nevi, 1 to a junctional nevus and 1 to a subepithelial nevus. Of the 15 cases over thirty years old, 11 were compound nevi and 4 were subepithelial nevi. There were no significant differences in the distribution of the type of nevus according to age group (p = 0.191).

Lymphocytic infiltration in the group of younger patients was grade 3 in 9 cases (52.9%), grade 2 in 3 cases (17.6%), grade 1 in 4 cases (23.5%) and grade 0 in 1 case (5.9%). In the group of older patients, the lymphocytic infiltration was grade 2 in 3 cases (20%), grade 1 in 9 cases (60%) and grade 0 in 3 cases (20%), with no case of grade 3; the difference in lymphocytic infiltration between the two age groups was statistically significant (p = 0.008). Eosinophilic infiltration was found only in the group of younger patients, in 8 of 17 cases (47.1%), a statistically significant difference (p = 0.024); it was grade 3 in 2 cases, grade 2 in 1 case, and grade 1 in 5 cases. In our series, only two cases showed lymphoid follicles, and both were patients under 30 years of age.

Five cases showed no epithelial inclusions, solid or cystic; four of these belonged to the group aged thirty years or less and one to the older group. Solid and cystic epithelial inclusions were present in 10 and 12 cases, respectively, in the group aged thirty years or less, and in 7 and 14 cases, respectively, in the group over thirty years. In the younger group, grade 3 solid and cystic epithelial inclusions were present in 2 (11.8%) and 3 cases (17.6%), respectively, while in the older group there was no case of grade 3 solid epithelial inclusions, although there were 5 cases (33.3%) of grade 3 cystic inclusions. In our series, older patients had a greater stromal nevomelanocytic component, with a statistically significant difference (p = 0.006); the stromal component constituted more than 50% of the lesion in 93.3% of cases in the older group. Cellular atypia was evaluated against the atypia usual in nevi of young patients: only 3 cases presented a greater grade of atypia than expected, two of them in the group aged thirty years or less and the other in the group over thirty. Deep maturation of the lesion was absent in 4 cases (23.5%) in the younger group and in no case (0%) in the older group. The statistical analysis showed no differences between age and the presence of cystic inclusions, solid inclusions, goblet cells, maturation or cytological atypia (Figs. 1 and 2).

Discussion

Inflammatory juvenile conjunctival nevi are described as compound nevi with an intraepithelial and a stromal nevomelanocytic component [4,8]. In our series, in the group of younger patients (≤30 years old), compound nevi represented 88.2% of cases, junctional nevi 5.9%, and subepithelial nevi 5.9%, while in the group of patients over thirty years old, compound nevi represented 73.3%, junctional nevi 0% and subepithelial nevi 26.7% of cases.
The group of patients under thirty years had mainly compound nevi, while the group over thirty years had a higher percentage of nevi of the subepithelial type. However, on statistical analysis we did not find significant differences in the distribution of nevus types between the two age groups. It has been described that conjunctival nevi appear in the first and second decades of life with the nevomelanocytic cell nests located at the epithelial-stromal junction; during the second and third decades the nests migrate to the stroma, forming the compound nevus, and in the third and fourth decades the lesion usually becomes located entirely in the subepithelial stroma [1,3]. This could explain why, in our series, the older group presented a higher percentage of stromal component in the lesion.

Comparing the two age groups, there are significant differences in the inflammatory infiltrate. Nearly fifty-three percent (52.9%) of patients aged thirty years or less had a severe lymphocytic infiltrate with prominent cell aggregates (grade 3), and two cases (11.8%) showed lymphoid follicles, whereas no patient in the older group had a severe lymphocytic infiltrate. Furthermore, patients aged over thirty years more frequently had only isolated lymphocytes (60%) than the group aged thirty years or less (23.5%). Thus the cases in the youngest group showed more prominent lymphocytic infiltrates than those in the group over thirty years. Patients under 30 years of age presented eosinophilic infiltration in eight cases (47.1%), while no eosinophilic infiltrate was found in patients older than 30; the level of eosinophilic infiltration was significantly higher in the younger age group. Zamir et al. found that 75% of conjunctival nevi in a series of 63 patients younger than 20 years were compound nevi with a prominent inflammatory infiltrate of lymphocytes, eosinophils and plasma cells, with features of inflamed conjunctival nevus; of these cases, 30% presented germinal centres and 77% an eosinophilic infiltrate [5].

Cystic inclusions are considered a sign of chronicity and are caused by the migration of subepithelial melanocytes [3]. Of the five cases without intraepithelial inclusions (solid or cystic), four corresponded to the group of patients aged thirty or less. However, we did not find a statistically significant association between age and the grade of epithelial inclusions. Cellular atypia higher than that expected in a conventional nevus was found in 2 cases in the youngest group and in one case in the oldest group; in addition, the two lesions belonging to the group aged thirty years or less showed absent deep maturation of the nevomelanocytic cells. Maturation of nevomelanocytes was the evaluated trait with the most discordance among the three observers. Colarossi et al. described a case of atypical juvenile conjunctival compound nevus with marked atypia and inflammation; it corresponded to a compound nevus with atypical cells in deep nests, with focal pagetoid spread but without mitotic activity, and the final diagnosis was juvenile conjunctival atypical nevus [2]. Costea et al. also made a final diagnosis of inflammatory juvenile atypical compound nevus in a lesion with chronic inflammation, lack of maturation in the depth of the lesion, pagetoid spread and atypical nevomelanocytes without mitotic activity [1].
We consider that our three cases with a certain degree of atypia did not have enough cytological atypia for a diagnosis of atypical compound nevus of the conjunctiva. None of the three observers detected mitotic activity, as in the cases of inflammatory juvenile atypical nevus reported in the literature [1,2]. Finally, no significant correlation was obtained between the age group and the atypia or maturation of the lesion.

To summarize, in an attempt to contribute to the characterization of IJCN, we described the histopathological characteristics of all conjunctival nevi received in our Pathology department from 2000 to 2018, looking for differences between two age groups. The younger group, aged thirty or less, most frequently presented lymphocytic and eosinophilic infiltrates. We also noticed a greater stromal component of nevus cells in the older age group. After reviewing the scant studies and reports about IJCN, we realized that there are no clearly defined diagnostic criteria for this entity. Most authors define it as a lesion with ''reverse'' maturation, a confluent growth pattern in the junctional component, chronic inflammation and a variable but low grade of cytologic atypia [3,8,9]. After the review of our cases, and considering the lack of unified diagnostic criteria in the literature, it is unclear whether any of our cases could ultimately be diagnosed as IJCN. Our study is limited by its small sample size; larger series with a greater number of cases would contribute to a better characterization of this tricky and scarcely described lesion.
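As a sanity check on the chi-square analysis used in this study, the lymphocytic infiltration grades reported in the Results can be fed to a standard test. The sketch below uses scipy rather than SPSS (an assumed tooling choice), and its output is consistent with the reported p = 0.008:

```python
from scipy.stats import chi2_contingency

# Lymphocytic infiltration grades 0-3 by age group (counts from the text).
table = [[1, 4, 3, 9],   # aged 30 or less (n = 17)
         [3, 9, 3, 0]]   # aged over 30   (n = 15)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p is about 0.008
```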
2019-09-17T03:03:00.809Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "772b008ec1f4f2262f9dffbab5d71c792da779c3", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sjopt.2019.07.011", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c2cb23b3338f5ed8d466734cd98b7e78342ebbc7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
19225905
pes2o/s2orc
v3-fos-license
Type 2 diabetes mellitus affects eradication rate of Helicobacter pylori

AIM: To study the eradication rate of Helicobacter pylori (Hp) in a group of type 2 diabetes patients and to compare it with an age- and sex-matched non-diabetic group.

METHODS: 40 diabetic patients (21 females, 19 males; 56 ± 7 years) and 40 non-diabetic dyspeptic patients (20 females, 20 males; 54 ± 9 years) were evaluated. Diabetic patients with dyspeptic complaints were referred for upper gastrointestinal endoscopy; 2 corpus and 2 antral gastric biopsy specimens were taken from each patient. Patients with positive Hp results on histopathological examination comprised the study group. Non-diabetic dyspeptic patients seen at the Gastroenterology Outpatient Clinic, with the same biopsy and treatment protocol, formed the control group. Triple therapy with amoxycillin (1 g b.i.d.), clarithromycin (500 mg b.i.d.) and omeprazole (20 mg b.i.d.) was given to both groups for 10 days. Cure was defined as the absence of Hp infection, assessed by corpus and antrum biopsies at control upper gastrointestinal endoscopy performed 6 weeks after completing the antimicrobial therapy.

RESULTS: The eradication rate was 50% in the diabetic group versus 85% in the non-diabetic control group (P < 0.001).

CONCLUSION: Type 2 diabetic patients showed a significantly lower eradication rate than controls, which may be due to changes in the microvasculature of the stomach and to frequent antibiotic usage for recurrent bacterial infections, with the development of resistant strains.

INTRODUCTION

Helicobacter pylori (Hp) is the most prevalent infection all over the world and has been considered the causative agent of many gastrointestinal diseases [1,2]. Type 2 diabetes mellitus can present with many protean gastrointestinal symptoms, and Hp can play a role in this context [3,4]. Although a number of studies have been performed on the association of Hp and diabetes mellitus, the results have been controversial. In a large study performed by Xia et al., the seroprevalence of Hp infection was not statistically different between patients with diabetes mellitus and non-diabetic controls [5]. In earlier studies, the prevalence of Hp was reported to be 62% versus 21%, but according to Xia et al., the prevalence of Hp should be corrected for age and gender, and there are no differences once an adjustment has been made for these variables [6]. The literature is even scarcer on treatment regimens for Hp infection in diabetes mellitus. We also know that the eradication rate of Hp shows great differences between ethnic groups and in patients with some chronic conditions [1,7]. We therefore proposed that the eradication rate of Hp may also differ in type 2 diabetics compared with non-diabetic controls, and we planned a prospective study to elucidate the eradication rate of Hp infection in type 2 diabetic subjects.

MATERIALS AND METHODS

Patients

Diabetic patients with dyspeptic complaints from the Diabetes Outpatient Clinic were referred for upper gastrointestinal endoscopy in the Gastroenterology Department. Upper gastrointestinal endoscopies were performed in a standard fashion with a videoendoscope (Pentax G-2940, Japan) by the same endoscopist. Endoscopic findings were noted, and Hp infection was assessed using 2 gastric antrum and 2 gastric corpus biopsy specimens, which were evaluated with the rapid urease test and histopathological examination (haematoxylin-eosin staining, and Giemsa if the first stain was negative).
Only patients with positive results for Hp in the pathological specimens were included in the study. The study population consisted of 40 patients with type 2 diabetes (21 females and 19 males; mean age 56 ± 7 years) and, as a control group, 40 non-diabetic dyspeptic patients from the Gastroenterology Outpatient Clinic (20 females and 20 males; mean age 54 ± 9 years), matched for sex and age (Table 1). All patients received detailed information about the study and gave written informed consent.

Methods

At enrolment and at the end of the treatment, each patient completed a dyspepsia questionnaire proposed by Buckley et al., slightly modified [8]. Triple therapy with amoxycillin (1 g b.i.d.), clarithromycin (500 mg b.i.d.) and omeprazole (20 mg b.i.d.) was given for 10 days. After the 10 days, patients received 20 mg omeprazole for 5 weeks if a gastric or duodenal ulcer had been identified at the initial endoscopy, or 40 mg of famotidine if there was gastritis. Cure was defined as the absence of Hp infection, assessed by corpus and antrum biopsies at control upper gastrointestinal endoscopy performed 6 weeks after completing the antimicrobial therapy. Endoscopic findings were evaluated again at the control endoscopy and compared with the initial findings. Any side effects of the treatment were recorded. During the same study period, dyspeptic patients seen at the Gastroenterology Outpatient Clinic were taken as the control group if there was no history of type 2 diabetes, their fasting plasma glucose levels were within normal limits (80-110 mg/dl), and pathological Hp positivity was found in gastric antrum and corpus specimens. The same triple therapy and a control upper gastrointestinal endoscopy after 6 weeks were also applied to the control group.

Statistical analysis

Results were expressed as means ± SEM. Statistically significant differences between groups were assessed using Student's t test, Fisher's exact test or ANOVA, as appropriate. P < 0.05 was considered significant.

RESULTS

All enrolled patients completed the study. Hp was eradicated in 50% (20/40) of the type 2 diabetic patients and in 85% (34/40) of the non-diabetic dyspeptic patients. The eradication rate was significantly lower in diabetics in comparison to the controls (P < 0.05). There were no side effects leading to discontinuation of the treatment in either group. At baseline, type 2 diabetic patients infected with Hp showed a high prevalence of gastrointestinal symptoms. There was a statistically significant decrease in epigastric pain, nausea and belching after Hp eradication treatment (Table 2; NS = not significant). Age, duration of diabetes and haemoglobin A1c levels were not significantly different between the diabetics in whom Hp was eradicated and those in whom it was not (Table 3).

DISCUSSION

Hp infection is responsible for up to 90% of upper gastrointestinal diseases, is linked to the development of gastric carcinoma and MALT-associated lymphoma, and should be eradicated whenever possible [9,10]. Standard triple therapy (omeprazole, clarithromycin and amoxycillin) has been shown to be highly effective in the eradication of Hp in non-diabetic subjects in many previous studies (91%) [11,12]. In our control group, we found an eradication rate of 85%, compatible with the results in the literature. Many authors have extensively explored the relationship between Hp and diabetes mellitus.
Results in previous studies have been controversial, but in the large, well-designed study of Xia et al., there was no difference in the seroprevalence of Hp infection between patients with diabetes mellitus and non-diabetic controls [5]. There have, however, been no studies exploring the efficacy of anti-Hp protocols in type 2 diabetics, whereas in a study by Gasbarrini et al. in type 1 diabetics, the Hp eradication rate was 65% compared with 92% in controls [13]. In another study performed in type 1 diabetics, the eradication rate was 62% with different triple antibiotic regimens, and this could be increased to 88% with a quadruple regimen [14]. In the present study, performed in type 2 diabetics, a much lower eradication rate of Hp (50%) was found. Histopathological examination was used in this study for the detection of pre- and post-treatment Hp; as the gold standard, it is more reliable and reproducible than the 13C urea breath test used in the studies by Gasbarrini et al. [13,14].

Immunosuppression in diabetes might predispose to the low eradication rate of Hp infection, but other mechanisms may also explain it. Type 2 diabetics are more susceptible to many bacterial and mycotic infections, which may lead to frequent use of antibiotics and to the development of resistance [15][16][17][18]. Due to absorption problems in the gastric mucosa, the extent of antibiotic absorption may be reduced [19]. This study showed a high rate of pathological endoscopic findings in type 2 diabetics, which may reflect disorders of gastrointestinal motility and lead to insufficient absorption of the drugs. Autonomic neuropathy has also been implicated, but studies in the literature suggest that there is no correlation between Hp positivity and delayed gastric emptying.

A standard 10-day triple therapy with conventional antibiotics thus appears insufficient in diabetics. Given the problems of absorption and motility, alternative regimens of longer duration seem necessary to achieve a higher eradication rate. In particular, considering that gastrointestinal symptoms, which are quite frequent in diabetics, improve significantly when Hp is successfully eradicated, we should attempt to eradicate Hp in diabetic subjects. This is, however, a new area of research, and larger prospective studies with different anti-Hp regimens for type 2 diabetics are needed.
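As a quick check of the headline comparison, the reported eradication counts can be tested directly. The sketch below applies Fisher's exact test (one of the tests the authors list) via scipy, which is an assumed tooling choice:

```python
from scipy.stats import fisher_exact

# Eradication: 20/40 in the type 2 diabetic group vs 34/40 in controls.
table = [[20, 20],   # diabetics: eradicated, not eradicated
         [34, 6]]    # controls:  eradicated, not eradicated
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```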
2018-04-03T05:24:25.199Z
2003-05-15T00:00:00.000
{ "year": 2003, "sha1": "59a7ba17df2d0c28d27a6db19314f8c32723ea46", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v9.i5.1126", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "b4d9a416f21c67d3ba985612ef826c9ff53a89c2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
91178763
pes2o/s2orc
v3-fos-license
Phytochemical Content, Radical Scavenging Ability & Enzyme Inhibiting Activities of Selected Spices (Cinnamon, Cardamom and Cloves)

Cinnamon, cardamom and cloves have been widely used for medicinal purposes as well as being essential cooking ingredients for flavor. The objective of the research was to investigate the antioxidant content, antioxidant capacity, and inhibition of lipid- and carbohydrate-metabolizing enzyme activities of selected spices (cinnamon, cardamom & cloves) in methanol (ME) and water (WE) extracts. The phytochemical content was determined as total phenolic and total flavonoid content. The antioxidant potential was determined by measuring 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity and Ferric Reducing Antioxidant Power (FRAP) in the spices' ME and WE extracts. Total phenolic (mg GAE/100g dry weight) and flavonoid (mg CE/100g dry weight) contents were highest in Cloves (ME), at 174.4 and 101.06, respectively. The lowest values for phenolic content were seen in the ME and WE of Cardamom, at 31.24 and 7.55. The DPPH IC50 values ranged from 0.22 mg/mL (Cloves ME) to 0.60 mg/mL (Cardamom ME). FRAP (μmol Fe/100g dry weight) for Cinnamon and Cardamom (ME) was 2438.5 and 325, respectively. Clove (ME) had a significantly higher reducing potential of 6888.5, which may be attributed to the high amounts of phenolics and flavonoids in the spice. FRAP in the spice WE extracts was lower, with values of 2296.5, 218.5 and 2310.5, respectively. The highest inhibition of α-glucosidase was observed for Clove (ME) at 86.5%, which also had the highest amylase enzyme inhibition at 71%. However, inhibition of the lipase enzyme was highest for the Cinnamon (WE) extracts at 44.3%. The potential of phytochemicals in spices was studied; consumed in sufficient amounts, they could offer antioxidative properties and regulate key digestive enzymes, which may lead to the prevention or slowed progression of diseases such as cancer, diabetes and cardiovascular disease.
Introduction

The risk of developing chronic diseases such as cancer, cardiovascular disease, diabetes mellitus and hypertension is highly prevalent in the U.S. [1]. The factors that contribute to metabolic syndrome are abnormally high glucose levels, abnormally high blood pressure, large abdominal circumference, and high triglyceride levels [2]. A survey conducted by Vella and others [3] showed that awareness of functional foods in relation to health is increasing and that individuals are demanding more functional foods offering opportunities to reduce the risk of developing chronic diseases. Toda, Kawabata & Kasai [4] reported α-glucosidase inhibitors from clove, while research by Baker and others suggests anticarcinogenic properties of the spice [5]. Subash and others [6] reported the effect of cinnamon by isolating its active compounds to determine the main phytochemicals responsible for the anti-diabetic mechanism. The compound cinnamaldehyde was effective in lowering plasma glucose levels in the STZ-induced diabetic rats used in the study. Not only did cinnamaldehyde reduce glucose levels, but it also lowered serum total cholesterol, triglyceride levels, and LDL (low-density lipoprotein), and it increased the supply of insulin and HDL (high-density lipoprotein). The production of cardamom has increased over the years, especially in Vietnam since 1990, because of its biological benefits as well as a high demand for the spice in food product development [7]. Wojdyło, Oszmiański, & Czemerys [8] described the various phytochemicals present in cloves, such as eugenol, acetyleugenol, chavicol, acetyl salicylate, and humulenes. In a research study conducted by Krishnaiah, Sarbatly & Bono [9], various spices such as Indian gooseberry, omum, cumin, cardamom, betel leaf and brandy were examined for their phenolic and flavonoid contents. A medium content level (50 - 100 mg) was found in the products and linked to inhibitory potential on key enzymes in the human biological system. Some parts of the world where spices are regularly consumed have shown lower numbers of chronic disease cases, attributed to the variety of phytochemicals that affect various metabolic pathways [10]. Research has focused on the utilization of polyphenols as well as on increasing their consumption in the diet [11]. Cocoa beans have been reported to contain flavonoids and phenolics that may reduce reactive oxygen species (ROS). The development of chronic heart disease has been attributed to a lack of antioxidants, which reduce the oxidation of LDL cholesterol [12]. Data collected by the NIH (2011) also showed that lowering LDL cholesterol reduces the risk of cardiovascular disease. The method of extraction used to obtain the highest amounts of phytochemicals is crucial. A study conducted by Exteberia and others [13] indicated that extraction by supercritical carbon dioxide yielded 85% - 95%, compared to extraction by petroleum ether. A study by Thorpe & Howard [2] reported a similar pattern, showing an increase in the extractability of the polyphenols in cloves using water extraction. This suggests that the characteristics of the solvent used can affect the type and amount of polyphenolic compounds extracted from the matrix of the spice.
Phytochemicals present in spices such as cinnamon and cloves may prevent the prevalence of chronic diseases by scavenging free radicals as well as by regulating pathways in the inflammation process [14]. Enzymes (α-amylase and α-glucosidase) are involved in the breakdown of starch in the digestive system. The inhibition of these key digestive enzymes by phytochemicals in spices could be very effective in reducing hyperglycemia and abnormal levels of other biological constituents [15]. Therefore, the objective of the research was to investigate the antioxidant content, antioxidant capacity, and inhibition of lipid and carbohydrate metabolizing enzyme (lipase, α-glucosidase and α-amylase) activities of methanol (ME) and water (WE) extracts of selected spices (Cinnamon, Cardamom & Cloves).

Preparation of Cinnamon, Cardamom and Cloves Extracts

The spices were purchased from a local store (Huntsville, Alabama, USA). The three spices were ground to powdered form using a laboratory blender (Waring Commercial, Torrington, CT, USA). All extracts for chemical and enzymatic assays were prepared in 80% methanol and boiling water. Five grams of each spice was mixed with either 80% methanol at room temperature or with water for 1 hour in a hot water bath at a temperature range of 90˚C - 100˚C. The supernatant was dried in a rotary evaporator (Safe Aire, Fischer Hamilton, Gaithersburg, MD, USA). The final volume was made up to 10 ml with the respective solvent and stored at −20˚C until analysis [16].

Determination of Total Phenolics, Flavonoids and Antioxidant Potential

Total phenolic content was determined using the Folin-Ciocalteu colorimetric method described by [17]. The results are expressed as mg GAE/100g dry weight. The total flavonoid content was determined using the methods of [18]. Catechin was used as the standard and results are expressed as mg CE/100g dry weight. To determine the radical scavenging ability of the selected spices, 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging ability was assessed using the method described by [19]. 0.1 mM DPPH was used in the sample mixture and results were expressed as the IC50 of DPPH. The Ferric Reducing Antioxidant Power (FRAP) assay was carried out following the method described by [20]. The samples were analyzed in triplicate and the results are expressed as mmol Fe2+/g dry weight.

Determination of Enzyme Inhibition Potential

The inhibition of pancreatic lipase (in vitro) was measured using p-nitrophenyl butyrate (p-NPB) as a substrate, as described by [21]. The reaction was started by the addition of the substrate, 25 mM p-NPB in dimethylformamide (DMF). Inhibition of α-amylase activity was assayed as described by [22]. Different concentrations of spice extracts and α-amylase solution (4 units/ml) were incubated at 25˚C for 10 min. After pre-incubation, 50 µl of a 1% starch substrate was added to the solution to initiate the reaction. To determine the inhibition of the α-glucosidase enzyme, the protocol described by [23] was used. The sample was pre-incubated with phosphate buffer (pH 6.9) containing α-glucosidase solution (1.0 U/ml). After pre-incubation, 50 µl of substrate (p-nitrophenyl-α-D-glucopyranoside solution in 0.1 M phosphate buffer, pH 6.9) was added to each well.

Statistical Analysis

All experiments were conducted in triplicate. Data were analyzed using SAS 9.1 (2011). Means were separated using Tukey's studentized range test. The level of significance was set at p ≤ 0.05.
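The inhibition assays above all reduce to the same arithmetic: a percent inhibition computed against an uninhibited control and, for the DPPH assay, an IC50 interpolated from the concentration series. The minimal Python sketch below illustrates only that calculation; the absorbance readings, concentrations and variable names are hypothetical placeholders, not data from this study.

import numpy as np

def percent_inhibition(abs_control, abs_sample):
    # Percent inhibition relative to the uninhibited control reaction
    return (abs_control - abs_sample) / abs_control * 100.0

def ic50(concentrations, inhibitions):
    # Linearly interpolate the concentration giving 50% inhibition,
    # assuming inhibition rises monotonically with concentration
    c = np.asarray(concentrations, dtype=float)
    y = np.asarray(inhibitions, dtype=float)
    if y.max() < 50.0:
        raise ValueError("50% inhibition not reached in the tested range")
    return float(np.interp(50.0, y, c))

# Hypothetical dose-response series (mg/mL; control absorbance 0.80)
conc = [0.1, 0.2, 0.3, 0.4]
inhib = [percent_inhibition(0.80, a) for a in (0.62, 0.41, 0.28, 0.19)]
print("IC50 ~ %.2f mg/mL" % ic50(conc, inhib))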
Phytochemical Content in Cinnamon, Cardamom and Cloves

The phytochemical content of the selected spices cinnamon, cardamom and cloves (methanol, ME, and water, WE, extracts) is shown in Table 1. At the lowest concentration tested (0.22 mg/ml), inhibition of DPPH was 62.8%. An increase in percent DPPH inhibition was observed with increasing concentration for all selected spices.

Inhibitory Activity of Spices on Amylase Enzyme

Inhibition of the amylase enzyme is shown in Figure 1.

Inhibitory Activity of Spices on α-Glucosidase Enzyme

The inhibition of α-glucosidase activity is shown in Figure 2; the highest inhibition was observed for Cloves (ME) (86.5%). Cinnamon (ME) had the second highest inhibition of the enzyme, at 53.6%, whereas at the same concentration (0.4 mg/ml) Cinnamon (WE) resulted in an inhibition of 38.4%. Cardamom (ME) resulted in higher inhibition of α-glucosidase (26.9%) compared to its WE (21.5%) at the same concentration (0.4 mg/ml).

Inhibitory Activity of Spices on Lipase Activity

The inhibition of lipase activity by the spice extracts (ME and WE) is shown in Figure 3. Lipase inhibition was highest at the 0.4 mg/ml concentration for all three spices. Cinnamon (WE) and (ME) recorded the highest inhibition, at 53.1% and 44.3%, respectively. Cardamom (ME) resulted in higher inhibition of lipase (42.7%) than Clove (ME and WE). Cloves (WE) resulted in higher lipase inhibition (40.2%) than (ME) (34.2%) at the highest concentration.

Discussion

The health benefits attributed to phytochemicals are being further researched for their effects against chronic diseases [10]. A number of phytochemicals, such as catechins, have been identified in cinnamon, cardamom and cloves. Research suggests these spices have cardioprotective, chemopreventive as well as anti-inflammatory effects [14]. In this study, the results showed varied extractability of phytochemicals by the use of either water or methanolic solvents. The phenolic and flavonoid contents observed in Cloves ME (174.40 ± 2.0 mg GAE/100g and 94.50 ± 13.53 mg CE/100g) may be attributed to the polarity of the methanol solvent. Similar evidence of high extractability in Cloves (ME) has also been shown in research done by Hollingsworth [24]. Fruits and vegetables have been identified as containing high amounts of phytochemicals [25]. Polyphenols are known to be powerful antioxidants that have also shown the ability to scavenge free radicals in the body. This ability can be assessed by the capacity to scavenge the DPPH radical [26]. The higher inhibition of DPPH by cloves could again be attributed to the amount of phytochemicals which were successfully extracted. Another method used to determine the antioxidant power of phytochemicals is to observe the ability to reduce ferric ions to ferrous compounds. Cloves (ME) exhibited significantly higher (p < 0.05) FRAP activity among all spices. Despite the light color of cardamom, the spice exhibited reducing and scavenging ability, which may be attributed to key compounds such as linoleic acid and quercetin which act as reducers. Eugenol, one of the main chemical compounds found in cloves, has also been shown to exhibit high DPPH and FRAP values [27]. The inhibition of key digestive enzymes such as α-glucosidase and α-amylase has been identified as a potential means to lower the incidence of diabetes [28].
Results gathered in the experiment showed increased inhibition in a dose-dependent manner (0.1 - 0.4 mg/ml). Cloves (ME) had a higher percent inhibition of α-glucosidase at the highest concentration than Cloves WE [27]. The inhibition may have been caused by the spices decreasing the formation of the substrate-enzyme complex. The inhibition of α-amylase by spices has previously been studied by Cazzola and others [29]. Results from the present study agree with previously published work, as inhibition of α-amylase occurred in a dose-dependent manner. Bioactive compounds in plant materials have been credited with the inhibition of the amylase enzyme; research by Chirumbolo [30] likewise suggested that bioactive compounds in fruits and vegetables contribute to its inhibition. The inhibition of the pancreatic lipase (PL) enzyme by natural bioactive compounds has attracted attention for utilizing phytochemicals as anti-obesity agents, by limiting dietary fat absorption and the accumulation of lipid in adipose tissue [31].

Conclusion

Cinnamon, cardamom and cloves have been shown in various reports to contain beneficial phytochemical compounds that are very important in altering the activity of digestive enzymes as well as in scavenging free radicals produced in the body. They have also been used in traditional medicine to treat ailments such as colds and dental caries. The ability of the spice extracts (ME and WE) to reduce ferric to ferrous ions and to scavenge free radicals indicates their antioxidant power. Cloves, which were the darkest in color, had the highest levels of phenolics and flavonoids, followed by cinnamon and finally cardamom, which has a light green/yellow pigmentation. The inhibition of pancreatic lipase, α-glucosidase, and α-amylase activities by cinnamon, cardamom and clove extracts (ME and WE) may provide a potential means of developing safe therapeutic approaches for preventing and/or treating obesity, hyperglycemia and the metabolic syndrome.

Table 1. The total phenolic content (TPC) (mg GAE/100g) was lowest in the (WE) and (ME) extracts of Cardamom, at 7.55 and 31.24, and highest in Cloves (ME) and (WE), at 174.40 mg/100g and 101.06 mg/100g. Methanol extraction resulted in significantly (p < 0.05) higher phenolic content (1.7 - 4 times) compared to water extraction in all spices. The total flavonoid content (TFC) was highest in Cloves (ME) at 94.50 mg CE/100g and lowest in the aqueous extract of Cardamom (0.31 mg CE/100g).

4.2. Reducing and Radical Scavenging Ability of Spices (Cinnamon, Cardamom and Cloves)

…Cloves WE (2310.5 ± 171.5 µmol Fe2+/100g). FRAP values for the WE of Cinnamon, Cardamom and Cloves were lower (6%, 33% and 66%) compared to their ME. Cloves ME was able to scavenge DPPH more effectively than the other extracts.

Table 1. Phenolics and flavonoid contents of Cinnamon, Cardamom and Cloves in methanol and water extracts.
2019-03-29T10:10:33.078Z
2019-03-13T00:00:00.000
{ "year": 2019, "sha1": "f8ec2cfaa64461bb932452c989c4e049a30731f2", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=91111", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "f8ec2cfaa64461bb932452c989c4e049a30731f2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
232352721
pes2o/s2orc
v3-fos-license
p+ip-wave pairing symmetry at type-II van Hove singularities

Based on random phase approximation calculations in a two-orbital honeycomb lattice model, we investigate the pairing symmetry of Ni-based transition-metal trichalcogenides under electron doping toward type-II van Hove singularities (vHs). We find that the chiral even-parity d+id-wave (Eg) state is suppressed by an odd-parity p+ip-wave (Eu) state when the electron doping approaches the type-II vHs. The type-II vHs peak in the density of states (DOS) strengthens the ferromagnetic fluctuation, which is responsible for triplet pairing. The competition between antiferromagnetic and ferromagnetic fluctuations results in a pairing phase transition from singlet to triplet pairing. The Ni-based transition-metal trichalcogenides thus provide a promising platform for unconventional superconductivity emerging from the electronic DOS.

I. INTRODUCTION

In the past decades, novel topological states of quantum matter have been among the most active and attractive topics in condensed matter physics. The discovery of the quantum spin Hall effect in HgTe boosted the search for two-dimensional (2D) topological insulators [1,2]. After that, three-dimensional (3D) topological insulators and topological semimetals were verified in both theoretical calculations and experiments [3-9]. Meanwhile, topological superconductors [10-17] have also attracted tremendous attention owing to the development of topological insulators. A topological superconductor with particle-hole symmetry can host Majorana zero modes, which may potentially be employed to realize topological quantum computation [18,19]. It is especially interesting that a 2D chiral p + ip wave topological superconductor can host Majorana zero modes in magnetic vortex cores [20]. Recent outstanding studies provide experimental evidence for Majorana bound states in the iron-based superconductor FeTe0.55Se0.45 obtained by scanning tunneling spectroscopy [13,21,22]. Thus, searching for intrinsic topological superconductors is an active and prominent field in condensed matter physics. 2D materials include not only quantum spin Hall insulators and Chern insulators, but also unconventional superconductors such as doped twisted bilayer graphene [23-26]. Among them, ternary transition-metal phosphorus trichalcogenide (TMPT) compounds APX3 (A = 3d transition metal; X = chalcogen) have attracted enormous attention due to antiferromagnetic (AF) ordering, a hint of significant electronic correlations [27-31]. By suppressing the AF order with external pressure, superconductivity emerges in iron-based TMPT compounds such as FePSe3, with the highest Tc found at about 5.5 K [32]. The crystal structure of the TMPT family APX3 consists of edge-shared AX6 octahedral complexes and P2 dimers. The transition metal atoms are arranged in a hexagonal lattice. In the octahedral crystal field, the five 3d orbitals of the transition-metal atoms split into high-energy eg orbitals and low-energy t2g orbitals. FePX3, with Fe2+ ions (d6), is an ideal system in which to study the pressure-induced high-to-low spin-state transition. In the case of NiPX3, with the Ni2+ d8 filling configuration, the t2g bands are fully occupied while the eg bands are half filled and dominate the spectral weight near the Fermi level. Theoretical calculations suggest that charge doping can suppress the magnetic order, and superconductivity can eventually be achieved in NiPS3 [33].
With increasing electron doping, the system moves away from half filling and approaches type-II van Hove singularities (vHs) [34] along the Γ-M and Γ-K high-symmetry lines. For a 2D superconductor, a vHs in the density of states (DOS) has been proposed to drive a substantial enhancement of interaction effects and to promote unconventional superconductivity [34-45]. In general, superconductors with type-I vHs (saddle points located at time-reversal invariant momenta (TRIMs)) favor singlet pairing. For type-II vHs superconductors (saddle points at general k points), triplet pairing can compete with singlet pairing [34]. By random phase approximation (RPA) calculations on the honeycomb lattice, we find that the p + ip-wave (Eu) pairing is enhanced and eventually overcomes an I-wave (A2g) state and a chiral d-wave (Eg) state as the electron doping is increased from half filling to the type-II vHs. In this paper, the pairing symmetry of the Ni-based transition-metal trichalcogenide superconductor is studied near the type-II vHs, 0.14 eV away from half filling. Within the RPA scheme, we use a two-sublattice two-orbital Hubbard model to calculate the pairing symmetry of the 2D van der Waals (vdW) material NiPS3. We find that the chiral even-parity d + id-wave (Eg) state is suppressed by the odd-parity p + ip-wave (Eu) state when the electron doping approaches the type-II vHs. At lower doping levels, the chiral even-parity d + id-wave (Eg) state is the dominant pairing in the system [46]. There thus exists a pairing phase transition in this 2D superconducting material, from singlet to triplet pairing. The increase in DOS at the type-II vHs strengthens the triplet pairing. The pairing result from the RPA is consistent with the analysis of the spin susceptibility calculation. The peak of the spin susceptibility at Γ implies that ferromagnetic fluctuations are responsible for the triplet pairing. Fermi surface nesting within the β pockets promotes the instability toward ferromagnetic fluctuations. The paper is organized as follows. In Sec. II, we present the band structure, DOS and Fermi surface of the two-sublattice two-orbital tight-binding model based on the eg orbitals (dxz and dyz). We find that the type-II vHs lie only 0.14 eV above the Fermi level. In Sec. III, we show the formalism of the RPA approach for superconducting pairing based on multi-orbital Coulomb interactions. In Sec. IV, we analyse the spin susceptibility and the pairing symmetry as the electron doping approaches the vHs. The triplet pairing p + ip (Eu) is the leading superconducting state, caused by ferromagnetic fluctuations. Finally, we summarize and discuss these results in Sec. V.

II. ELECTRONIC STRUCTURE

The nickel phosphorous trichalcogenide compounds NiPX3 (X = S, Se) are 2D vdW materials consisting of layered hexagonal structures [33]. Each layer is constructed from MX6 edge-shared octahedral complexes. In this octahedral environment, the five d orbitals of the Ni atom split into t2g and eg groups. The eg orbitals are close to half filling while the t2g orbitals are fully filled. The physics near the Fermi surface derives mainly from the dxz and dyz orbitals. In order to capture the low-energy physics, we use a two-sublattice two-orbital tight-binding model on the honeycomb lattice. The corresponding tight-binding Hamiltonian [33] is

H_0 = Σ_{k,σ} Σ_{α,β} Σ_{µ,ν} h^{αβ}_{µν}(k) c†_{αµσ}(k) c_{βνσ}(k).

Here α, β are the sublattice indices (A, B) and µ, ν are the orbital indices (dxz, dyz); c†_{αµσ}(k) creates a spin-σ electron with momentum k in orbital µ on sublattice α.
The matrix elements of h^{αβ}_{µν}(k) are provided in the Appendix of Reference [46]. Interestingly, the leading hopping parameter is the third-nearest-neighbor (TNN) hopping term. The superexchange antiferromagnetic state formed between TNN Ni cations is favored in the NiPX3 (X = S, Se) parent compounds. In Fig. 1(a), we show the orbital-resolved band dispersion (dxz and dyz orbitals) from the tight-binding model. At pristine filling, there are eight Dirac points protected by D3d symmetry near the Fermi level. Due to charge conservation, a hole pocket and an electron pocket appear at K/2 and K, respectively. As dictated by the two-fold rotational symmetry along Γ-M, mixing of the dxz and dyz orbitals is found along Γ-K and K-M, but not along Γ-M. The strongest orbital mixing occurs near the Fermi level and also around the Dirac points (K and K/2). In order to analyse the saddle points above the Fermi surface, we calculate the corresponding DOS in Fig. 1(b). We find a DOS peak about 0.14 eV above the Fermi level, which verifies the existence of the vHs. We plot the related orbital-resolved Fermi surfaces at δ = 0.3 and δ = 0.35 electrons per Ni atom with respect to half filling in Fig. 1(c) and (d). Owing to the hexagonal symmetry of the two-dimensional honeycomb lattice, there are six type-II vHs (saddle points not at TRIM points) along the Γ-M or Γ-K direction. When the doping level is changed across the vHs, Lifshitz transitions of the Fermi surfaces occur. The two pockets around K/2 merge into one Fermi surface in Fig. 1(c). The outer pocket around Γ fuses with the K pocket, as shown in Fig. 1(d). The DOS peak is closely related to the topology of the Fermi surface and also has a great influence on the superconducting pairing. Generally, triplet pairing can compete with singlet pairing in a system with type-II vHs [34], which is also verified by our RPA calculation in the following sections.

III. RANDOM PHASE APPROXIMATION

Based on the two-sublattice two-orbital tight-binding model, we consider the onsite multi-orbital Hubbard interaction for superconducting pairing,

H_int = U Σ_{i,α,µ} n_{i,α,µ,↑} n_{i,α,µ,↓} + U′ Σ_{i,α,µ<ν} n_{i,α,µ} n_{i,α,ν} + J Σ_{i,α,µ<ν,σ,σ′} c†_{i,α,µ,σ} c†_{i,α,ν,σ′} c_{i,α,µ,σ′} c_{i,α,ν,σ} + J′ Σ_{i,α,µ≠ν} c†_{i,α,µ,↑} c†_{i,α,µ,↓} c_{i,α,ν,↓} c_{i,α,ν,↑},

where n_{i,α,µ} = n_{i,α,µ,↑} + n_{i,α,µ,↓}. U, U′, J and J′ represent the intra- and inter-orbital repulsion, the Hund's rule coupling and the pair-hopping term, respectively. We adopt the Kanamori relations U = U′ + 2J and J = J′ in the following calculation, as required by the lattice symmetry. Considering the RPA approximation [47,48], the multi-orbital susceptibility is defined as

χ_{l1l2l3l4}(q, τ) = (1/N) Σ_{k,k′} ⟨T_τ c†_{l1}(k, τ) c_{l2}(k+q, τ) c†_{l3}(k′+q, 0) c_{l4}(k′, 0)⟩.

In momentum-frequency space, the multi-orbital bare susceptibility is given by

χ^0_{l1l2l3l4}(q, iω_n) = −(1/N) Σ_{k,µ,ν} a^{l4}_µ(k) a^{l2∗}_µ(k) a^{l1}_ν(k+q) a^{l3∗}_ν(k+q) [n_F(E_ν(k+q)) − n_F(E_µ(k))] / [iω_n + E_ν(k+q) − E_µ(k)],

where µ and ν are the band indices, n_F is the usual Fermi distribution, and l_i (i = 1, 2, 3, 4) are the orbital indices. By diagonalizing the above multi-orbital tight-binding Hamiltonian, we obtain the l_i orbital component of the eigenvector for band µ, a^{li}_µ(k), and the eigenvalue E_µ(k). After that, we take the multi-orbital Hubbard interactions into account to calculate the RPA susceptibilities. The corresponding RPA spin and charge susceptibilities [41,49,50] are given by

χ^s_RPA(q) = χ^0(q) [1 − Ū^s χ^0(q)]^{−1},   χ^c_RPA(q) = χ^0(q) [1 + Ū^c χ^0(q)]^{−1},

where Ū^s (Ū^c) is the spin (charge) interaction matrix. In this process, we only consider electron scattering between the Fermi surfaces near the Fermi level. The effective Cooper scattering interaction is written as

Γ_{ij}(k, k′) = Σ_{l1l2l3l4} a^{l1∗}_{ν_i}(k) a^{l4∗}_{ν_i}(−k) Re[Γ_{l1l2l3l4}(k, k′, ω = 0)] a^{l2}_{ν_j}(k′) a^{l3}_{ν_j}(−k′),

where the momenta k and k′ are confined to different Fermi surfaces, with k ∈ C_i and k′ ∈ C_j. The orbital vertex functions Γ_{l1l2l3l4} in the spin-singlet and spin-triplet channels [47-49] are

Γ^{singlet} = (3/2) Ū^s χ^s_RPA Ū^s + (1/2) Ū^s − (1/2) Ū^c χ^c_RPA Ū^c + (1/2) Ū^c,
Γ^{triplet} = −(1/2) Ū^s χ^s_RPA Ū^s + (1/2) Ū^s − (1/2) Ū^c χ^c_RPA Ū^c + (1/2) Ū^c.

The pairing strength λ_α and gap function g_α(k) in each channel then follow from the eigenvalue problem

−(1/V_G) Σ_j ∮_{C_j} (dk′_∥ / v_F(k′)) Γ_{ij}(k, k′) g_α(k′) = λ_α g_α(k),

where v_F(k) = |∇_k E_i(k)| is the Fermi velocity on a given Fermi surface sheet C_i.
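As a rough numerical illustration of the above formalism (not part of the original calculation), the Python sketch below evaluates the static bare susceptibility on a coarse k-grid and applies the RPA dressing in the spin channel. Everything here is an assumption for illustration: h_of_k is a toy stand-in for the actual Bloch matrix h^{αβ}_{µν}(k) of Reference [46], BETA and ETA are an assumed temperature and broadening, the spin interaction matrix Ū^s is left as an input, and the orbital index ordering is one common convention that must be kept consistent with Ū^s.

import numpy as np

N_ORB = 4              # 2 sublattices x 2 orbitals (A/B x dxz/dyz)
BETA = 100.0           # inverse temperature in eV^-1 (assumed)
ETA = 1e-3             # small broadening standing in for the static limit

def h_of_k(k):
    # Toy placeholder Bloch matrix; the real h^{ab}_{mn}(k) is in Ref. [46]
    f = -(1.0 + np.exp(1j * k[0]) + np.exp(1j * k[1]))
    h = np.zeros((N_ORB, N_ORB), dtype=complex)
    h[0, 2] = h[1, 3] = f            # A -> B hopping, orbital-diagonal
    h[2, 0] = h[3, 1] = np.conj(f)
    return h

def fermi(e):
    # Numerically stable Fermi-Dirac distribution
    return 0.5 * (1.0 - np.tanh(0.5 * BETA * e))

def chi0(q, kgrid, mu=0.0):
    # Static bare susceptibility chi^0_{l1 l2 l3 l4}(q), flattened to an
    # (N_ORB^2 x N_ORB^2) matrix with row index (l1, l2), column (l3, l4)
    chi = np.zeros((N_ORB**2, N_ORB**2), dtype=complex)
    for k in kgrid:
        ek, ak = np.linalg.eigh(h_of_k(k))        # E_mu(k), a^l_mu(k)
        ekq, akq = np.linalg.eigh(h_of_k(k + q))
        for m in range(N_ORB):                    # band mu at k
            for n in range(N_ORB):                # band nu at k+q
                w = (fermi(ekq[n] - mu) - fermi(ek[m] - mu)) \
                    / (1j * ETA + ekq[n] - ek[m])
                for l1 in range(N_ORB):
                    for l2 in range(N_ORB):
                        for l3 in range(N_ORB):
                            for l4 in range(N_ORB):
                                chi[l1 * N_ORB + l2, l3 * N_ORB + l4] += w * (
                                    ak[l4, m] * np.conj(ak[l2, m])
                                    * akq[l1, n] * np.conj(akq[l3, n]))
    return -chi / len(kgrid)

def chi_rpa_spin(chi0_mat, Us):
    # chi^s_RPA = chi^0 [1 - U^s chi^0]^(-1); Us is the 16x16 spin
    # interaction matrix built from U, U', J, J' (not constructed here)
    eye = np.eye(N_ORB**2)
    return chi0_mat @ np.linalg.inv(eye - Us @ chi0_mat)

# Example usage on a coarse grid (q away from Gamma, since this crude
# broadening mishandles the degenerate intraband limit at q = 0):
ks = np.array([[kx, ky] for kx in np.linspace(-np.pi, np.pi, 6, endpoint=False)
                        for ky in np.linspace(-np.pi, np.pi, 6, endpoint=False)])
c0 = chi0(np.array([np.pi, 0.0]), ks)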
IV. SUSCEPTIBILITY AND PAIRING SYMMETRY

Based on the multi-orbital RPA method (a weak-coupling approach) [47,48], we investigate the pairing symmetry of the electron-doped nickel phosphorous trichalcogenide compound NiPS3. Due to the existence of the type-II vHs peak in the DOS near the Fermi level, we mainly discuss the electron doping levels δ = 0.3 and δ = 0.35 per Ni atom separately. In order to analyse the pairing symmetry results, we first calculate the bare susceptibility χ^0 and the spin susceptibility χ^s_RPA along high-symmetry lines at the two doping levels in Fig. 2(a) and 2(c). For the bare susceptibility at the δ = 0.3 per Ni atom doping level (blue dash-dot line in Fig. 2(a)), there is a prominent peak at Γ, a smooth plateau around M and a broadened peak at K/2. The first is mainly contributed by the intrapocket nesting between rather flat bands in the NNN pockets β (Q1 in Fig. 1(c)). The intrapocket nesting Q2 between the NNN pockets α is responsible for the second peak around M. The third peak at K/2 is ascribed to the intrapocket nesting Q3 between the NN pockets β. Then, we consider the RPA spin susceptibility for the superconducting instability with U = 0.3 eV and J/U = 0.2 in Fig. 2(a). All of the above-mentioned peaks are enhanced significantly. In particular, the sharp, near-divergent peak at Γ indicates that the ferromagnetic fluctuation between unit cells is dominant. By checking the eigenvectors of the susceptibility matrix corresponding to the largest eigenvalue, we find that all the signs of the eigenvector components are positive, which implies ferromagnetic fluctuation within a unit cell. Compared with the lower doping level δ = 0.1 per Ni atom, the emergence of the spin susceptibility peak at Γ means that the ferromagnetic fluctuation is competing with the antiferromagnetic fluctuation. Undoubtedly, the type-II vHs strengthens the ferromagnetic fluctuation. To display the spin susceptibility more clearly, we plot the corresponding susceptibility over the 2D Brillouin zone. From the 2D pattern in Fig. 2(b), it is clear that the C6 symmetry is maintained, with a sharp peak at Γ and further peaks around M. With further doping to δ = 0.35 per Ni atom, the peak at Γ becomes sharper and the peaks around M move toward Γ and K in Fig. 2(c) and 2(d). Because the RPA method underestimates the interaction parameters, we adopt an intraorbital repulsive interaction parameter U = 0.3, below the critical value Uc = 0.35 (to avoid the magnetic instability), and Hund's coupling J/U ≤ 0.2. In this section, we mainly focus on the doping levels near the type-II vHs. Based on the irreducible representations of the D3d point group of this material, we classify the pairing states according to this point group. Fig. 3(a) shows the leading pairing strengths in the singlet and triplet channels for different electron dopings at fixed U = 0.3 and J/U = 0.2. From lower doping (δ = 0.1) to the type-II vHs (δ = 0.35), the system exhibits a pairing phase transition from singlet to triplet pairing. At low doping levels, the nearly degenerate singlet pairing states Eg (x²−y², xy) and A2g overcome the triplet pairing states Eu (x, y) and A2u. When the doping level is near the type-II vHs, these singlet pairings are suppressed by the triplet pairing state Eu (x, y). Undoubtedly, the type-II vHs peak in the DOS enhances the strength of the triplet pairing. This result is also anticipated by the above RPA susceptibility analysis, in which the dominant ferromagnetic fluctuation favors triplet pairing. In Fig. 3(b), we plot the pairing eigenvalues as a function of U at fixed J/U = 0.2 and δ = 0.35 per Ni atom.
The leading pairing state is still Eu (x, y), and the pairing strength increases with increasing interaction U. In Fig. 4, we plot the gap functions of the leading two-fold degenerate pairing state Eu (x, y) with U = 0.3 and J/U = 0.2 at δ = 0.3 ((a) and (b)) and δ = 0.35 ((c) and (d)) per Ni atom, respectively. For px (py) at δ = 0.3, the pairing nodes lie along the y (x) axis, with mirror symmetry Mx (My). The gap function on the β pocket is comparable to that on the α pocket. The superconducting order parameters connected by the nesting vector (Q1) between the flat bands in the NNN pockets β have the same sign. For the gap functions at δ = 0.35 in Fig. 4(c) and (d), a similar phenomenon occurs. From the above RPA calculation, the type-II vHs peak in the DOS induces an odd-parity px + ipy-wave (Eu) pairing state in the electron-doped nickel phosphorous trichalcogenide compound NiPS3.

V. CONCLUSION

In this paper, we have investigated the pairing symmetry of the Ni-based transition metal trichalcogenide NiPS3 based on a two-sublattice two-orbital Hubbard model. By applying the multi-orbital RPA method, we find that the odd-parity p+ip (Eu) pairing state overcomes the chiral even-parity d+id (Eg) state. The enhancement of the ferromagnetic fluctuation induced by the type-II vHs peak in the DOS is responsible for the triplet pairing Eu. The nesting vector Q1 between the NNN pockets β results in the instability peak of the RPA spin susceptibility at Γ. This implies that triplet pairing is the leading state, consistent with our RPA pairing calculation. The competition between ferromagnetic and antiferromagnetic fluctuations drives the transition from singlet to triplet pairing as the doping approaches the type-II vHs. The effect of the electronic DOS on unconventional superconducting pairing may be realized in the layered Ni-based transition-metal trichalcogenides.
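As a complementary sketch (again not from the original work), the final step described in Sec. III — extracting the leading pairing strength and gap function from the Fermi-surface-projected kernel — reduces to a symmetric eigenvalue problem. In the fragment below, the kernel K, which in the real calculation contains Γ_ij(k, k′) weighted by the inverse Fermi velocity on discretized Fermi-surface points, is replaced by a random symmetric placeholder.

import numpy as np

def leading_pairing(K):
    # Solve -K g = lambda g; the largest eigenvalue is the leading
    # pairing strength, its eigenvector the gap function g(k) on the
    # discretized Fermi-surface points
    evals, evecs = np.linalg.eigh(-K)
    i = np.argmax(evals)
    return evals[i], evecs[:, i]

rng = np.random.default_rng(0)
K = rng.normal(size=(60, 60))
K = 0.5 * (K + K.T)              # placeholder symmetric kernel
lam, gap = leading_pairing(K)
# The symmetry label (E_u, E_g, A_2g, ...) would then be assigned by
# checking how gap(k) transforms under the D_3d point-group operations.
print("leading pairing strength:", lam)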
2021-03-26T01:16:23.230Z
2021-03-25T00:00:00.000
{ "year": 2021, "sha1": "429c051792089173d471902d15e539e6b3201d98", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2103.13753", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "429c051792089173d471902d15e539e6b3201d98", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231573453
pes2o/s2orc
v3-fos-license
A Generalization of Renault's Theorem for Cartan Subalgebras

We prove a generalized version of Renault's theorem for Cartan subalgebras. We show that the original assumptions of second countability and separability are not needed. In particular, the assumption of topological principality of the underlying groupoid is weakened to effectiveness.

Introduction and Statement of Results

Jean Renault, in [9], characterises Cartan subalgebras of separable C*-algebras via certain étale twisted groupoids. Specifically, we have:

Theorem 1.1 (Renault's Theorem, 5.2 and 5.9 in [9]). Let (G, Σ) be a twisted étale Hausdorff locally compact second countable topologically principal groupoid. Then C_0(G^0) is a Cartan subalgebra of C*_r(G, Σ). Conversely, if B is a Cartan subalgebra of a separable C*-algebra A, then there exists a twisted étale Hausdorff locally compact second countable topologically principal groupoid (G, Σ) and an isomorphism which carries A onto C*_r(G, Σ) and B onto C_0(G^0).

In this paper we generalize this theorem. Specifically, for the first statement of Theorem 1.1, we remove the second countability assumption and also weaken topological principality to mere effectiveness of the groupoid. For the converse statement we remove separability, but pay the price of obtaining a groupoid that is not necessarily second countable and thus not necessarily topologically principal. We obtain:

Theorem 1.2. Let (G, Σ) be a twisted étale Hausdorff locally compact effective groupoid. Then C_0(G^0) is a Cartan subalgebra of C*_r(G, Σ). Conversely, if B is a Cartan subalgebra of a C*-algebra A, then there exists a twisted étale Hausdorff locally compact effective groupoid (G, Σ) and an isomorphism which carries A onto C*_r(G, Σ) and B onto C_0(G^0).

The following definition of a Cartan subalgebra is due to Renault in [9]:

Definition 1.3. A C*-subalgebra B of a C*-algebra A is a Cartan subalgebra if
• B contains an approximate unit for A,
• B is a masa (maximal abelian subalgebra) in A,
• B is regular in A (in other words the normalizer set of B, N(B) = {a ∈ A : aBa* ⊂ B and a*Ba ⊂ B}, generates A as a C*-algebra), and finally,
• there exists a faithful conditional expectation P : A ↠ B.

Remark 1.4. A recent result by Pitts [6] shows that the approximate unit condition in Definition 1.3 is redundant, as it follows from the other conditions.

Section 2 of this paper is devoted to summarising Renault's proofs from [9]. Here we highlight the main ideas in the construction, especially those that we will eventually generalize. Section 3 provides the argument for Theorem 1.2. Here we develop the techniques required to generalize the relevant sections of Renault's proof, and highlight where they are used. Our approach includes using an Urysohn-type lemma for locally compact Hausdorff spaces that are not necessarily second countable, in order to obtain certain separation results which are used throughout Renault's proofs in [9]. Of course, if one assumes second countability, then paracompactness follows and hence so does normality, which yields the more standard version of Urysohn's lemma, and obtaining certain separation functions becomes trivial in this setting. We modify slightly some of Renault's proofs in [9] in order not to assume second countability. By removing the assumptions of second countability of the groupoid and separability of the C*-algebra one obtains groupoids that are not necessarily topologically principal, but effective.
One of the advantages of Theorem 1.2 is that it may be applied to Cartan subalgebras of C*-algebras that are not necessarily separable. An important class of non-separable C*-algebras are the uniform Roe algebras, which are of interest as they build a link to coarse geometry (see Section 1 in [5]). These have Cartan subalgebras (see Section 6 in [4]) which fall outside what Renault's theorem can capture. In addition, the authors of [4] obtain a distinguished Cartan subalgebra by using a slight modification of Renault's theorem, where second countability of the groupoid is weakened to σ-compactness. Of course, with the more general Theorem 1.2 this is not necessary. We would like to point out Corollary 7.6 in [3], where the same conclusion as Theorem 1.2 is obtained, but via a different approach. Our approach aims at following and directly generalizing the steps in [9].

Summary of Renault's Proof

This section serves as a summary of the constructions in [9] which yield Theorem 1.1. We assume throughout that the reader is familiar with the basic notions in the theory of étale groupoids. Information on this may be found, amongst other sources, in Chapter 3 of [7], Chapters 1.1 and 1.2 of [8], and/or Chapters 2 and 3 of [12]. The first statement in Theorem 1.1 is that C_0(G^0) is a Cartan subalgebra of C*_r(G, Σ) for a twisted étale Hausdorff locally compact second countable topologically principal groupoid. In order to explain how this arises, we start by defining the notions of topological principality, the twist Σ, and how one obtains a C*-algebra C*_r(G, Σ) from such groupoids. Thereafter we show that C_0(G^0) is a Cartan subalgebra of C*_r(G, Σ), which involves showing that it satisfies the requirements of Definition 1.3.

Definition 2.1. An étale groupoid G is topologically principal if the set of points in G^0 with trivial isotropy is dense in G^0.

We now summarize pages 39-41 of [9] and pages 975-976 of [2], which define the twist and show how to obtain a C*-algebra. Let Σ be a locally compact groupoid that is also a principal T-space. Let G := Σ/T, which is made into a topological groupoid in the natural way. This gives rise to a T-bundle

T × G^0 → Σ → G.

We say Σ is a twist over G. In the language of exact sequences, this is equivalent to a central extension T × G^0 → Σ → G. It is convenient to consider another T-bundle, built from C × Σ. The T-action is given by t(z, γ) = (tz, tγ), and of course the projection onto the base space is the canonical projection onto orbit classes [z, γ]. Set L := (C × Σ)/T and form the complex line bundle

C → L → G.

The projection map onto the base space is given by [z, γ] ↦ γ̇, where ˙ denotes the canonical projection Σ → G. Continuous sections of this line bundle can be represented by T-equivariant continuous maps Σ → C (maps satisfying f(tγ) = tf(γ) for all t ∈ T). Let G be a locally compact Hausdorff groupoid with Haar system {λ^x : x ∈ G^0}, and let Σ be a twist over G. We denote this pairing of a groupoid and its twist by (G, Σ). Consider the space of compactly supported continuous sections G → L, which we denote by C_C(G, Σ). Define a multiplication and involution on C_C(G, Σ) which turn it into a *-algebra, as follows: for f, g ∈ C_C(G, Σ), let

f ∗ g(σ) = ∫ f(τ) g(τ⁻¹σ) dλ^{r(σ̇)}(τ̇),   f*(σ) = \overline{f(σ⁻¹)}.   (1)

Finally, define the reduced norm ‖f‖ := sup_{x ∈ G^0} ‖π_x(f)‖, where π_x denotes the regular representation at x, and complete C_C(G, Σ) with respect to this norm, obtaining the reduced twisted groupoid C*-algebra C*_r(G, Σ). It can be shown that for every f ∈ C_C(G, Σ),

‖f‖ ≤ ‖f‖_I := max( sup_{y ∈ G^0} ∫ |f| dλ^y , sup_{y ∈ G^0} ∫ |f*| dλ^y ),

and ‖·‖_I can in turn be shown to be a norm on C_C(G, Σ), known as the I-norm.
This whole construction does not assume second countability or topological principality of G. For thorough details, consult Chapter 2, Section 1 of [8]. Let supp′(f) = {γ ∈ G : f(γ) ≠ 0}. Renault shows, in Proposition 4.1 of [9] and its consequences, that we obtain the following properties:

Proposition 2.2. Let (G, Σ) be a twisted étale locally compact Hausdorff groupoid. Then:
• The elements of C*_r(G, Σ) can be represented as continuous sections of the line bundle L.
• The multiplication and involution equations in (1) are valid for elements of C*_r(G, Σ).

Renault proves that starting with a Hausdorff étale locally compact second countable topologically principal twisted groupoid (G, Σ), one obtains that C_0(G^0) is a Cartan subalgebra of C*_r(G, Σ) (we will say that (C*_r(G, Σ), C_0(G^0)) is a Cartan pair). The first thing to check (as per Definition 1.3) is that the subalgebra contains an approximate unit for the C*-algebra. In the second countable case, one can construct a countable approximate unit using the σ-compactness of the unit space. In view of Remark 1.4 this is of course unnecessary. Theorem 4.2 in [9] shows that an element of C*_r(G, Σ) commutes with C_0(G^0) if and only if its open support is contained in the isotropy bundle G′, which then yields that C_0(G^0) is a masa in C*_r(G, Σ) (since G is topologically principal). Proposition 4.3 in [9] asserts the existence of a unique faithful conditional expectation P : C*_r(G, Σ) → C_0(G^0) defined by restriction. That this is a faithful conditional expectation can be checked directly from the definitions, but uniqueness is obtained through topological principality and second countability of the groupoid. Indeed, Renault shows that any other conditional expectation Q would have to agree with P on C_C(G, Σ), by dividing the argument into two cases: first, by considering elements of C_C(G, Σ) whose compact support is contained in an open bisection that does not meet G^0, and second, by considering an arbitrary element f of C_C(G, Σ) but reducing to the first case by covering the support of f by bisections that do not meet G^0, together with one that does, and then using a partition of unity subordinate to such a finite cover. In this argument, Renault makes crucial use of Urysohn's lemma, which allows him to find elements of C_C(G^0) that separate closed subsets from disjoint points. Of course, with the space assumed second countable and locally compact, it is regular, hence paracompact, hence normal, and so Urysohn's lemma applies. Finally, one needs the regularity of C_0(G^0) in C*_r(G, Σ). Renault shows this in Proposition 4.8 and Corollary 4.9 of [9]. He proves that the elements of the normalizer set N(C_0(G^0)) are exactly those elements of C*_r(G, Σ) whose open support is a bisection. Since an element of C_C(G, Σ) can be written as a finite sum of elements each of whose open support is an open bisection (this is because G is étale and so has a basis of open bisections, so one can use a partition of unity with respect to a finite cover), and each summand is a normalizer, the result follows. This proves how one goes from a Hausdorff étale locally compact second countable topologically principal twisted groupoid (G, Σ) to a Cartan pair (C*_r(G, Σ), C_0(G^0)), which is the first statement in Theorem 1.1.
For the reverse statement, Renault starts with an arbitrary Cartan pair (A, B) and constructs a Hausdorff étale locally compact second countable topologically principal twisted groupoid (G(B), Σ(B)). The construction is given in [9]. We may identify B with C_0(X), where X is the spectrum of B, so that each b ∈ B is a function x ↦ b(x) on X. Proposition 4.14 in [9] gives an algebraic extension

T × X → Σ(B) → G(B).

It makes use of the fact that given a point and an open neighbourhood U around it in X, we may find a continuous function on X with compact support inside U. The topology is recovered in Lemma 4.16 in [9], so that Σ(B) becomes a locally trivial topological twist over G(B). Let L(B) be the complex line bundle arising, as described earlier in this section, as C → L(B) → G(B). The aim is to construct continuous sections of this line bundle, which is equivalent to having T-equivariant continuous maps Σ(B) → C. This is done in Lemma 5.3 in [9], where one defines, for a ∈ A and (x, n, y) ∈ D,

â(x, n, y) = P(n*a)(y) / √(n*n(y))

(P is the conditional expectation associated to the Cartan pair), showing that this is independent of the choice of representative for a class in Σ(B) (so the map can be defined on the quotient), and is continuous and T-equivariant. The lemma also shows that the map ˆ which sends a to â is injective and linear. Injectivity makes use of the fact that elements of B can separate closed sets from points, which is automatic when A is separable, as separability implies that X is second countable and so one may apply Urysohn's lemma. Proposition 5.7 in [9] argues that because the elements {â : a ∈ A} separate the points of G(B), this groupoid is Hausdorff. Being a groupoid of germs, it is also étale. Again this makes use of Urysohn's lemma. Lemma 5.8 in [9] proves that the map ˆ is a *-algebra isomorphism from A_C to C_C(G(B), Σ(B)), where A_C is the linear span of elements of N(B) whose image under ˆ has compact support. Theorem 5.9 in [9] then extends this to showing that the map is an isometry with respect to the C*-algebra norms, sending A onto C*_r(G(B), Σ(B)) and B onto C_0(G(B)^0). Separability of A implies second countability of the groupoid. Topological principality is concluded from Proposition 3.6 in [9]. The two procedures, going from twisted groupoids to Cartan pairs and from Cartan pairs to twisted groupoids, are inverse to each other. Indeed, Proposition 4.15 in [9] tells us that if (G, Σ) is a locally compact Hausdorff étale topologically principal second countable twisted groupoid and we let A = C*_r(G, Σ) and B = C_0(G^0), then we obtain an isomorphism of extensions between T × G^0 → Σ → G and T × X → Σ(B) → G(B) (Diagram A). We have already noted the isomorphism of the left vertical arrow. The isomorphism of the right vertical arrow is given by Proposition 4.13 in [9], and the middle vertical arrow is discussed in Proposition 4.15 in [9]. In Section 3 we will give alternative proofs of these statements that do not rely on any second countability or topological principality assumptions. Diagram A tells us that the process (G, Σ) ↦ (A, B) ↦ (G(B), Σ(B)) is the identity (up to isomorphism). Theorem 5.9 in [9] tells us that for a Cartan pair (A, B), the process (A, B) ↦ (G(B), Σ(B)) ↦ (C*_r(G(B), Σ(B)), C_0(G(B)^0)) likewise returns the original pair up to isomorphism. Note that the automorphism group of the twisted groupoid can thus be identified with the automorphism group of the associated Cartan pair.
Generalizing Renault's Theorem

This section proves Theorem 1.2. We will first prove the first statement in the theorem by focusing on the areas of Renault's construction that make use of second countability of the groupoid or of topological principality. We will need the following definition:

Definition 3.1. An étale groupoid G is effective if the interior of its isotropy bundle G′ is G^0.

We will require certain separation properties for locally compact Hausdorff spaces, which are used implicitly throughout [9], and which are standard when the topological space is second countable. However, we verify that we have the required results even for non-second countable spaces.

Lemma 3.2. Let X be a locally compact Hausdorff space. Then:
(1) Given a compact subset K of X and an open set U of X such that K ⊂ U ⊂ X, there exists b ∈ C_0(X) with b ≡ 1 on K, and 0 outside U.
(2) Given a closed subset C ⊂ X and a point x ∈ X disjoint from C, there is b ∈ C_0(X) with b(x) = 1 and b|_C ≡ 0.
(3) Given a point x ∈ X and an open neighbourhood U of x, there is b ∈ C_0(X) with b(x) = 1 and compact support contained in U.

The following proposition is certainly well-known, but we do not have a reference for it.

Proposition 3.3. Let (G, Σ) be a twisted étale locally compact Hausdorff groupoid. Then any approximate unit of C_0(G^0) is an approximate unit of C*_r(G, Σ).

Proof. Let (e_i)_{i∈I} be an approximate unit for C_0(G^0). Then it converges uniformly to 1 on any compact subset of G^0. Indeed, for any compact L ⊂ G^0 there exists, by Lemma 3.2, b ∈ C_0(G^0) such that b ≡ 1 on L. Since ‖e_i b − b‖_∞ → 0, the approximate unit converges to 1 uniformly on L. Now let f ∈ C_C(G, Σ) with compact support K. It is clear that the net (e_i ∗ f)_{i∈I} = ((e_i ∘ r) f)_{i∈I} converges uniformly to f on K, as the net (e_i)_{i∈I} converges uniformly to 1 on r(K). Hence (e_i ∗ f)_{i∈I} converges to f in the inductive limit topology, which by Proposition II.1.4 (i) in [8] implies that it converges in the I-norm and hence in C*_r(G, Σ). The same can be said for (f ∗ e_i)_{i∈I}. As the net (e_i) is by definition bounded, the above holds when f is replaced by any a ∈ C*_r(G, Σ).

In order to get that C_0(G^0) is a masa in C*_r(G, Σ), it suffices to assume that G is effective in Theorem 4.2 in [9], rather than topologically principal. As was stated in Section 2, in order to get a unique faithful conditional expectation in Proposition 4.3 in [9], Renault makes use of Urysohn's lemma. We may now use Lemma 3.2 (2) instead and obtain the same result. Additionally, assuming effectiveness rather than topological principality suffices. To show regularity, note that for an étale and locally compact groupoid G, every f ∈ C_C(G, Σ) is a finite linear combination of continuous sections compactly supported in open bisections, and such elements are normalizer elements (by Proposition 4.8 (i) in [9]). Together, this gives the first statement of Theorem 1.2. For the converse statement, recall from Section 2 that Renault proves that there is an extension T × X → Σ(B) → G(B) by making use of the fact that one can find a compactly supported continuous function with support inside a given open set. We may now instead use Lemma 3.2 (3) for this. The rest of the arguments all use separation properties contained in Lemma 3.2, without any appeal to second countability. Of course, in Theorem 5.9 in [9] one does not get a second countable groupoid G(B) if separability of A is removed, and hence G(B) might not be topologically principal. However, being an étale groupoid of germs, it is automatically effective. Thus the second statement of Theorem 1.2 is obtained. In order to get Diagram A without second countability or topological principality assumptions, Proposition 4.13 of [9] must be slightly modified. Renault proves that given an open bisection S, there exists n ∈ N(B) for which S = supp′(n). In order to achieve this, Renault claims the existence of an element of C_0(G^0) whose open support is exactly s(S). Of course, when the space is second countable and locally compact, it is σ-compact and it is easy to obtain such an element.
Given that we are not assuming second countability, we can only work with a slightly weaker version of this argument. To obtain Proposition 4.13 in [9] without second countability, it suffices to localise to an open set contained in S, as the groupoid of germs induced by α(𝒮), where 𝒮 is the set of all open bisections, is the same as that induced by α(𝒮′), where 𝒮′ is a refinement of 𝒮 (for the definition of α see Section 3 in [9]). Theorem 12 in the Appendix of [1] ensures that we have a non-vanishing continuous section of the associated line bundle L on a neighbourhood T of g ∈ S, contained in S. Hence L|_T is trivializable. We may use Lemma 3.2 to find c ∈ C_0(G^0) with compact support inside s(T). Hence U = supp′(c) is an open set inside s(T), and since s : T → s(T) is a homeomorphism, we can pull U back to an open bisection V ⊂ T. Restricting attention to V, we have that L|_V is trivializable. Let u : V → L be a non-vanishing section, and without loss of generality assume u(g) = 1 for all g ∈ V. Define n : G → L by n(g) = u(g)c(s(g)) if g ∈ V, and 0 otherwise. There exists a net {h_j}_{j∈J} in C_C(U) converging uniformly to c. Hence u h_j ∈ C_C(G, Σ) converges uniformly to n, and hence converges in the I-norm, as this coincides with the supremum norm on C_0(G^0); hence it converges in the C*-algebra norm. Hence n ∈ A with supp′(n) = V, and so n ∈ N(B). The remaining parts of the proof work by just assuming effectiveness of the groupoid. To get the isomorphism of the middle arrow in Diagram A we offer an alternative proof of Proposition 4.15 of [9], thanks to Xin Li:

Proposition 3.4. Let A = C*_r(G, Σ) and B = C_0(G^0), for a twisted étale Hausdorff locally compact effective groupoid (G, Σ). Then we have a canonical isomorphism of extensions between T × G^0 → Σ → G and T × X → Σ(B) → G(B).

We have already shown the left and right vertical arrows to be isomorphisms. We just need an isomorphism for the middle vertical arrow making the diagram commute. This is achieved as follows. We define a map Σ(B) → Σ by

[x, n, y] ↦ (n(σ)/|n(σ)|) σ,

where σ ∈ Σ is chosen so that σ̇ ∈ supp′(n) with s(σ) = y and r(σ) = x. The inverse Σ → Σ(B) is defined by sending

σ ↦ [x, (|n(σ)|/n(σ)) n, y],

where n ∈ N(B) is chosen so that n(σ) ≠ 0, and y = s(σ), x = r(σ). It is a tedious but straightforward task to check that these maps are well-defined groupoid homomorphisms and are inverse to each other. Finally, we remark that since the proof of Proposition 5.11 in [9] uses neither the separability of the C*-algebra nor the second countability of the groupoid, it follows that for any Cartan pair (A, B), B satisfies the unique extension property if and only if the groupoid G(B) is principal.
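For the reader's convenience, the first separation property in Lemma 3.2 can be deduced from the version of Urysohn's lemma valid for locally compact Hausdorff spaces; the following LaTeX sketch records the standard argument and is offered only as a gloss, not as part of the proofs in [9] or [1].

\textit{Sketch of a proof of Lemma 3.2 (1).} Since $X$ is locally compact and
Hausdorff, each $x \in K$ has an open neighbourhood $V_x$ with compact closure
$\overline{V_x} \subset U$. By compactness of $K$, finitely many such
neighbourhoods cover $K$, and their union $V$ satisfies
$K \subset V \subset \overline{V} \subset U$ with $\overline{V}$ compact.
Urysohn's lemma for locally compact Hausdorff spaces now yields a continuous
$b \colon X \to [0,1]$ with $b \equiv 1$ on $K$ and
$\operatorname{supp}(b) \subset \overline{V}$, so that
$b \in C_c(X) \subset C_0(X)$ and $b$ vanishes outside $U$. \qed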
2021-01-12T02:15:32.472Z
2021-01-09T00:00:00.000
{ "year": 2021, "sha1": "1943629ce5d34519391a1b394aec0bd3af79ab4f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2101.03265", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1943629ce5d34519391a1b394aec0bd3af79ab4f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
90459259
pes2o/s2orc
v3-fos-license
Functional limb muscle innervation prior to cholinergic transmitter specification during early metamorphosis in Xenopus

In vertebrates, functional motoneurons are defined as differentiated neurons that are connected to a central premotor network and activate peripheral muscle using acetylcholine. Generally, motoneurons and muscles develop simultaneously during embryogenesis. However, during Xenopus metamorphosis, developing limb motoneurons must reach their target muscles through the already established larval cholinergic axial neuromuscular system. Here, we demonstrate that at metamorphosis onset, spinal neurons retrogradely labeled from the emerging hindlimbs initially express neither choline acetyltransferase nor the vesicular acetylcholine transporter. Nevertheless, they are positive for the motoneuronal transcription factor Islet1/2 and exhibit intrinsic and axial locomotor-driven electrophysiological activity. Moreover, the early appendicular motoneurons activate developing limb muscles via nicotinic antagonist-resistant, glutamate antagonist-sensitive, neuromuscular synapses. Coincidently, the hindlimb muscles transiently express glutamate, but not nicotinic, receptors. Subsequently, both pre- and postsynaptic neuromuscular partners switch definitively to typical cholinergic transmitter signaling. Thus, our results demonstrate a novel context-dependent re-specification of neurotransmitter phenotype during neuromuscular system development.

Introduction

The features that define a specific neuronal phenotype are generally conserved between species and are specified early during development, but they can also undergo adaptive, activity-related plasticity after maturation (Demarque and Spitzer, 2012; Borodinsky et al., 2014). Amongst the large variety of neuronal phenotypes, the developmental properties of the motoneuronal class are particularly well established and are shared by all vertebrates. In mammals, motoneuron (MN) specification begins in the ventral neural tube with the induction of progenitors by sonic hedgehog proteins (Roelink et al., 1995), whose graded concentration triggers the subsequent expression of specific post-mitotic transcription factors (Goulding, 1998; Jessell, 2000). The homeodomain-containing protein Islet1 is the first molecular marker of MN differentiation (Ericson et al., 1992) and induces the later expression of the homeobox protein Hb9, which consolidates the motoneuronal phenotype and participates in MN migration and central motor column formation (Arber et al., 1999). Soon after their specification, MNs express the two typical proteins associated with cholinergic neurotransmission, choline acetyltransferase (ChAT) and the vesicular acetylcholine transporter (VAChT), enabling them thereafter to activate their muscle targets (Phelps et al., 1991; Chen and Chiu, 1992). The muscles, which develop concomitantly and contribute to motoneuron axon path-finding, also play a fundamental role in MN phenotyping and survival (Yin and Oppenheim, 1992; Kablar and Belliveau, 2005). Finally, MNs are considered fully functional once they have become affiliated to a corresponding central motor network and provide impulse-elicited excitation to muscle fibers using acetylcholine (ACh) as the neurotransmitter (Davis-Dusenbery et al., 2014). In vertebrates, axial MNs innervating trunk muscles are distributed rostro-caudally along the spinal cord in the medial motor column (MMC), whereas fore- and hindlimb MNs are located in the brachial and lumbar lateral motor columns (LMC), respectively.
In the amphibian Xenopus laevis, the axial and appendicular MNs controlling tail and limb muscles, respectively, are generated during two separate developmental periods. The former develop during pre-hatchling embryonic stages and control larval undulatory tail-based swimming by projecting to and exciting axial myotomes via nicotinic ACh receptor activation (van Mier et al., 1985; Sharpe and Goldstone, 2000). Thus, axial neuromuscular ontogeny takes place under conditions equivalent to those in mammals, where the MNs and target muscles of a primary motor system develop simultaneously. In contrast, the MNs of the neuromuscular system responsible for later limb-based locomotion differentiate, and the limb buds start developing, during early metamorphosis (Marsh-Armstrong et al., 2004), when the axial MNs and tail muscles are already fully developed and operational. Given that the primary axial and secondary appendicular MNs of Xenopus initially share the same anatomical and neurochemical environment, and because such features influence neuromuscular development (Yang and Kunes, 2004; Menelaou et al., 2015), it is conceivable that the emergence of the secondary limb MNs and their target muscles is susceptible to influences exerted by the already fully established and functional axial neuromuscular system. We thus hypothesized that the axial myotomes and their innervation constitute a potentially interfering environment for the newly developing appendicular neuromuscular system, and that consequently the latter may have to follow particular developmental rules adapted to this unusual context. In the present study, therefore, we investigated the developmental strategy employed by the early developing limb MNs and associated neuromuscular junctions, with a specific interest in exploring their neurochemical phenotype and functional capability, both in terms of spinal locomotor circuit interactions and limb bud muscle control. Our results show that during a brief pre-metamorphic period, the emerging limb MNs transiently express a non-cholinergic transmitter phenotype that involves glutamate, while exhibiting all other characteristics of typical vertebrate MNs.

Results

Spatial organization of the appendicular spinal motor column during early metamorphosis

Appendicular MNs already project into the hindlimb buds as soon as the latter begin to emerge at stage 48 (van Mier et al., 1985). At stage 50, the nascent limb consists of a tissue protrusion that is visible on the ventral side of the rostral tail myotomes, next to the larva's abdomen (Nieuwkoop and Faber, 1956). The limb bud then continues to grow and differentiate throughout the pre-metamorphic period, during which time the animal triples in size (Figure 1A1). At stage 56/57, the adult-like hindlimb is formed, with differentiated thigh and leg segments along with the appearance of five webbed toes (Figure 1A2). The limb extensor and flexor muscle groups are also differentiated at this time. To investigate the central spatial organization of the motoneurons innervating the hindlimb muscles during this developmental period, we injected two different retrograde tracers into a hindlimb bud and the ipsilateral axial myotomes of stage 50 to 57 animals, to label the somata and dendrites of appendicular and axial MNs, respectively. Each population is located centrally in a separate motor column. Specifically, the axial motor column is distributed ventro-medially along the entire length of the spinal cord, whereas the hindlimb motor column is located more medio-laterally and is restricted to spinal segments 7 to 9, identified by counting the number of ventral roots caudally from the obex (Figure 1B-D), as reported previously (Hulshof et al., 1987). At early stages 50-52 (Figure 1B1,B2), retrograde labeling from the limb bud revealed a high density of MNs with relatively small cell bodies (<10 µm) and reduced dendritic arbors, located more dorso-laterally than the axial MNs. From stage 55 onward (Figure 1C), the appendicular motor column is positioned more laterally, with a lower neuronal density, due to the enlargement of these spinal segments. The appendicular MN somata then become bigger (20-50 µm) and acquire a characteristic elongated cell body shape and extended dendritic arbor, extending from the ventro-medial to the dorso-lateral region of the hemicord (Figure 1C).
Specifically, the axial motor column is distributed ventro-medially along the entire length of the spinal cord, whereas the hindlimb motor column is located more medio-laterally and restricted to spinal segments 7 to 9, identified by counting caudally the number of ventral roots from the obex ( Figure 1B-D), as reported previously (Hulshof et al., 1987). In early stages 50-52 ( Figure 1B 1 , B 2 ), retrograde labeling from the limb bud revealed a high density of MNs with relatively small cell bodies (<10 mm) and a reduced dendritic arbor, located more dorso-laterally than the axial MNs. From stage 55 onward ( Figure 1C) the appendicular motor column is positioned more laterally due to the enlargement of these spinal segments with a lower neuronal density. The appendicular MN somata then become bigger (20-50 mm) and acquire a characteristic elongated cell body shape and extended dendritic arbor, extending from a ventro-medial to dorso-lateral region of the hemicord ( Figure 1C). Delayed cholinergic transmitter phenotype expression in the developing appendicular motor column The time course of ACh neurochemical ontogeny in appendicular MNs was first investigated using fluorescence immunochemistry for ChAT and VAChT expression in spinal cord cross-sections at different developmental stages (Figure 2A,B). At stage 53 (Figure 2A), appendicular MNs were not labeled at all with the ChAT and VAChT antibodies, whereas in the same spinal slices, axial MNs were strongly positive for the two proteins. In contrast, at later stages (e.g., stage 57 in Figure 2B) both ChAT and VAChT were clearly expressed in appendicular MNs (see lower left insets) as well as in axial MNs. Overall, ChAT and VAChT were not immuno-detected in appendicular MNs until stage 55, in contrast to axial MNs which were immunoreactive throughout the early pre-metamorphic stages. Semi-quantitative analysis of ChAT/VAChT fluorescence intensity in axial and appendicular MNs (see Materials and methods) was next performed on confocal image stacks in which appendicular MNs were identified by retrograde tracer labeling. The fluorescence variation (DF/F) measured at stages 49-54 showed that the immuno-signal detected in the appendicular motor column was not significantly higher than the background signal level (ChAT: 13.5 ± 5.5 at stage 49-51; 1.2 ± 1.0 at stage 52-54; VAChT: 1.3 ± 0.8 at stage 49-51; 5.2 ± 2.0 at stage 52-54; Figure 2C, right). Immunolabeling in the LMC became more robust with further larval development, with the appendicular/ axial fluorescence variation increasing to~20% (at stage 55-57; ChAT: 120.4 ± 31.8; VAChT: 72.4 ± 3.9; Figure 2C, right), which was consistent with the clear detection of both ChAT and VAChT in appendicular MNs at the later pre-metamorphic stages. In contrast, the fluorescence variation expressed in axial MNs ( Figure 2C, left) for both ChAT and VAChT immuno-signal was significantly higher than the background signal level throughout the entire developmental period examined (ChAT: 644.7 ± 205.4 at stage 49-51; 849.1 ± 135.1 at stage 52-54; 1358.1 ± 270.9 at stage 55-57; VAChT: 273.9 ± 57.5 at stage 49-51; 267.5 ± 36.8 at stage 52-54; 872.5 ± 134.5 at stage 55-57; Figure 2C left). Figure 1. Developmental stages of X. laevis and anatomical organization of the spinal motor columns. (A) Relative size differences between stage 50 and 57 larvae (A 1 ) and associated morphological changes of the hindlimb bud during metamorphosis onset (A 2 ), from Nieuwkoop and Faber (1956). 
(B, C) Segmental (B1, C) and rostrocaudal (B2) organization of the spinal motor column region containing limb MNs at stages 50-52 (B1, B2) and 55 (C). Inset in B2 shows the location of labeled MNs in the mid-region of the spinal cord. (D) Schematic representation of the segmental organization of the appendicular and axial motor columns in the larval spinal cord.

In situ hybridization (ISH; n = 16) was performed for ChAT mRNA detection in a larval group ranging from stages 49 to 57. No signal was detected in the appendicular motor column area at stages 49-52 (Figure 2D, left; n = 7). A ChAT ISH signal then became weakly detectable from stage 53-54 (n = 2), and strongly evident from stage 55 onwards (Figure 2D, right; n = 7). In contrast, a strong ChAT ISH signal was observed in axial MNs throughout the entire developmental period examined. Because ISH detects mRNAs that precede protein synthesis, our immunochemistry and ISH results are consistent and together confirm that appendicular MNs do not exhibit the cholinergic molecular phenotype prior to stage 55, although they innervate the hindlimb bud muscles from stage 48. One possibility was that the non-cholinergic and cholinergic appendicular MNs observed at different developmental stages were in fact distinct populations. To address this possibility we first applied a fluorescent retrograde dye to the hindlimb bud at stage 51/52, prior to the appearance of the ACh phenotype in limb MNs (Figure 2E). Thereafter, the tracer-treated larvae were raised until they reached metamorphosis climax, by which time the appendicular system is presumably fully developed. ChAT immunolabeling then performed on spinal cross-sections taken from these stage 62 animals strongly labeled the appendicular motor population (Figure 2E, middle panel). Significantly, some of these ChAT-positive MNs were also co-labeled with the retrograde fluorescent dye previously applied at the earlier stage 51/52 (Figure 2E, right panel). Thus, these double-labeled MNs that innervated the hindlimb at stage 62 were already present at stage 52, before they expressed the cholinergic molecular phenotype. This finding therefore demonstrated that the early non-ACh appendicular MNs constitute at least a sub-population of the cholinergic MNs that form the future mature appendicular motor innervation after metamorphosis.

Molecular and functional identification of early non-cholinergic appendicular neurons

The molecular identity of the non-ACh neurons innervating the newborn limbs at early pre-metamorphic stages was verified by immunochemistry against the transcription factor Islet1/2, a protein marker of motor neurons (Ericson et al., 1992). Double immunostaining in larvae younger than stage 55 revealed that the ChAT-negative neurons labeled with retrograde dye applied in the hindlimb bud were also strongly Islet1/2-positive, similar to the mature axial MNs present in the same slices (Figure 3A). Comparable results were obtained with ISH for Islet1 (Figure 3-figure supplement 1). The electrophysiological properties of these non-ACh appendicular MNs were then tested at stage 52 by performing patch-clamp intracellular recordings of neurons identified by RDA retrograde labeling from the hindlimb bud (Figure 3B).
These recorded MNs had average resting membrane potential values of −40.2 ± 8.2 mV (n = 12) and −54.8 ± 2.3 mV (n = 5) with low and high [Cl−] patch solutions, respectively, and were able to produce action potentials either in response to increasing steps of depolarizing current injection (Figure 3C, left), on rebound after release from experimental hyperpolarization (Figure 3C, right), or even spontaneously (Figure 3D). Significantly, moreover, during spontaneous episodes of locomotor-like activity monitored from a more caudal spinal ventral root, correlated synaptic fluctuations were observed in all identified appendicular MNs, thereby indicating a functional synaptic coupling with other components of the spinal locomotor network (Figure 3E). In addition, the use of an elevated [Cl−] solution in the intracellular electrode (see Materials and methods) revealed a capability for locomotor-related action potential firing in all recorded MNs (Figure 3F), consistent with the idea that chloride-mediated signaling was still not fully mature in limb MNs and remained excitatory at early developmental stages (Hanson and Landmesser, 2003; Akerman and Cline, 2006). Altogether, these findings demonstrate that the retrogradely-labeled, non-ACh spinal neurons that prematurely innervate hindlimb buds express the motoneuronal marker Islet1/2 and present the basic biophysical characteristics of functional MNs, being able to produce impulses as a function of membrane potential that in turn can be influenced via synaptic inputs from central premotor circuitry.

Locomotor-related activation of hindlimb muscles by non-cholinergic motoneurons

Whether the early appendicular MNs are actually capable of driving muscle activation in the hindlimb buds before the emergence of the cholinergic phenotype was next investigated by making EMG recordings from both limb bud muscles and axial myotomes in semi-intact larval preparations (Figure 4A, left panel). During episodes of spontaneous axial fictive swimming in stage 53 preparations (n = 7; Figure 4A, right panel), locomotor commands monitored from a caudal spinal ventral root elicited rhythmic EMG activity in both the segmental myotomes and limb bud muscles. The ventral root bursts occurred in phase with EMG bursts in the recorded ipsilateral hindlimb muscle and in alternation with bursts in the contralateral myotome (Figure 4B, control). Under subsequent bath application of d-tubocurarine to block nicotinic receptors and thereby prevent ACh-dependent synaptic transmission (Sillar and Roberts, 1992), the expression of swimming-related ventral root burst activity persisted, but the associated tail myotome EMG activity was completely abolished after 15 min of antagonist perfusion (Figure 4B, d-tubocurarine). In contrast, even after >30 min of d-tubocurarine perfusion the hindlimb bud muscles continued to exhibit rhythmic EMG discharges that, although reduced in amplitude (see below), remained strictly coordinated with the ongoing axial pattern (Figure 4B, d-tubocurarine; see black arrow in lower middle plot). The ability of both the tail myotomes and limb bud muscles to express locomotor-related activity recovered fully after a 2 hr washout with normal saline (Figure 4B, wash). Semi-intact preparations older than stage 54 similarly exhibited EMG burst activity in their hindlimb buds and tail myotomes during fictive axial locomotion in control conditions (n = 5; e.g., stage 56 in Figure 4C, left).
In these cases, however, a 15 min bath application of d-tubocurarine decreased the frequency of the centrally-generated axial swim pattern, and completely abolished, albeit reversibly, the associated EMG activity in both the hindlimb bud muscles and tail myotomes (Figure 4C, d-tubocurarine; wash). The substantial reduction in limb EMG activity by d-tubocurarine at developmental stages earlier than 55 (see Figure 4B) could have been due to a peripheral influence on transmission at the neuromuscular junction and/or result from an upstream action on the central activation of the limb motoneurons themselves. To distinguish between these two possibilities, we used immunolabeling and calcium imaging on completely isolated CNS preparations to assess whether functional cholinergic synapses are present on limb MNs within the spinal cord and are directly affected by nicotinic antagonist perfusion. Co-localized sites of VAChT and synapsin labeling were found on the somata of identified limb MNs (Figure 4-figure supplement 1). These findings thus indicated that the reduction in EMG activity observed in semi-isolated preparations (as seen in Figure 4B) might not have resulted from a peripheral action of the antagonist at the level of the neuromuscular junction (NMJ) itself, but rather, was largely a consequence of the antagonist's effects on limb MN activation within the central nervous system. In this case, the unidentified premotor source of rhythmic drive to limb motoneurons can be presumed to use cholinergic signaling (Figure 4-figure supplement 1), although as suggested by the unmasking of depolarizing synaptic events by high electrode chloride concentrations (Figure 3F) and by earlier findings that d-tubocurarine can block GABA-A receptors (Bixby and Spitzer, 1984), it is possible that GABA neurotransmission is also involved. The differential effects of d-tubocurarine at early and later larval stages also suggested that a developmental shift in the mechanism by which the appendicular MNs activate their target muscles during fictive locomotion occurs around the intervening stages 54-55. Whereas neuromuscular transmission appears to be initially independent of ACh signaling in the pre-metamorphic tadpole, it evidently changes to a completely ACh-dependent process in older tadpoles. This in turn suggests that developmental modifications must also occur in the receptor phenotype of the hindlimb muscles themselves, in parallel with the switch in MN transmitter signaling.

Developmental switch in hindlimb neuromuscular transmission

In the Xenopus embryo, immature axial MNs have been shown to co-express ACh and glutamate, although glutamate is reported to have no postsynaptic bioelectrical effects (Fu et al., 1998). Since our present data show that the appendicular MNs are able to activate hindlimb muscles before using ACh as a neurotransmitter, we asked whether glutamate may be involved in the early NMJ transmission process. First, we explored in whole-mount hindlimb buds at various developmental stages (from 51 to 57) the putative expression and relative distribution of nicotinic and glutamate receptors with fluorescent α-bungarotoxin and anti-NR2b antibodies, respectively. At stage 52, there was a total lack of α-bungarotoxin labeling in hindlimb buds (Figure 5A, upper panels), indicating an absence of muscle ACh receptors at this stage. In contrast, at the same developmental stage, diffuse NR2b fluorescence, indicative of glutamate receptor labeling, was observed in the vicinity of MN axons visualized with neurofilament immuno-detection.
A similar expression of NR1, another glutamate receptor subunit, was found, which also paralleled synaptophysin labeling (Figure 5B). In addition, at stage 52, both the axons (Figure 5C) and somata of retrogradely identified limb MNs expressed VGluT1, indicating that the machinery for glutamatergic transmission is present in hindlimb MNs; concomitantly, glutamate receptors are expressed on hindlimb muscle fibers, close to synaptophysin-marked presynaptic terminals, consistent with the existence of functional glutamatergic NMJs.

Figure 5. Switch from glutamate to acetylcholine receptors in hindlimb muscles. (A) Examples of hindlimb innervation patterns and distribution of ACh nicotinic receptors in a whole-mount hindlimb bud, revealed by fluorescence immunolabeling of neurofilament-associated protein (Neurofil., red), glutamate receptor (NR2b, blue) and α-bungarotoxin labeling (α-bungaroTx, yellow), respectively, at stages 52 (upper panels) and 57 (lower panels). Inset drawings at bottom left of each panel show bud morphology at the two representative larval stages. Scale bars = 200 µm for stage 52, 100 µm for stage 57. (B) Examples of fluorescence immunolabeling against neurofilament-associated protein (red), synaptophysin (Synaptoph., green) and NMDA glutamate receptor subunit 1 (NR1, blue) in whole-mount limb bud at stage 52. White arrowheads indicate sites of apposition of all three markers. Scale bar [...]. Figure 5 continued on next page.

The first α-bungarotoxin-positive signal was detected at stage 55 as characteristic dispersed dots (data not shown), as previously described (Marsh-Armstrong et al., 2004). By stage 57, typical clusters of nicotinic receptors were observed close to appendicular nerve terminal branches and, in further direct contrast to younger stages, NR2b (Figure 5A, lower panels) and NR1 receptor subunits (Figure 5-figure supplement 1B2) in hindlimb bud muscle, as well as VGluT1 in innervating motor axons (Figure 5-figure supplement 1B2), were now absent. These findings therefore indicate that glutamate receptor subunits are no longer expressed in hindlimb muscle (and at least one vesicular transporter is no longer expressed in motor axons) from stage 55 onwards, but rather are superseded by nicotinic receptors, in likely correspondence with the switch in NMJ transmission to a cholinergic phenotype. In a final step, the electrophysiological response properties of neuromuscular transmission were tested on isolated hindlimb preparations over a similar range of developmental stages (from 52 to 57; Figure 6). Single-pulse electrical stimulation applied to a limb bud motor nerve elicited EMG responses in the hindlimb bud with a mean latency of 7.8 ± 1.9 ms at stages 52-54 (n = 6) and 5.1 ± 2.1 ms at stages 55-57 (n = 6), and a mean amplitude of 0.30 ± 0.01 mV at stages 52-54 and 2.50 ± 0.99 mV at stages 55-57, in normal Ringer's saline. Either d-tubocurarine or a CNQX/AP5 cocktail was then bath-applied to test whether these neuromuscular responses were ACh- or glutamate-mediated, respectively. At stages 52-54, the appendicular nerve stimulus-evoked EMG response was not diminished by subsequent d-tubocurarine application, but rather was increased by 35 ± 6% (Figure 6B, left; Figure 6C, left black bar, p<0.01, U = 642.0). [Note that such a d-tubocurarine-induced enhancement of post-junctional responses has already been reported in other animal models and is unrelated to its antagonistic effects on nicotinic receptors themselves (Egan et al., 1993; Baron et al., 1996).]
Conversely, the hindlimb muscle response was decreased by 44 ± 3% following bath application of CNQX/AP5 (Figure 6B, left; Figure 6C, left grey bar, p<0.001, U = 39.5). At stages 55-57, on the other hand, the nerve stimulus-evoked EMG response was decreased by 80 ± 2% under d-tubocurarine perfusion (p<0.001, U = 74.0) but was not significantly affected (p=0.5, U = 522.0) in the presence of CNQX/AP5 (Figure 6B, right; Figure 6C, right black and grey bars, respectively). Altogether, these immunochemical and pharmacological results showed that glutamate, but not ACh, is initially involved in hindlimb muscle activation, whereas NMJ transmission becomes ACh-mediated and increasingly efficient (increase in response amplitude, reduction in transmission delay) from stage 55 onwards. This therefore confirms the occurrence of a functional switch in Xenopus hindlimb muscle properties that is commensurate with the switch in appendicular MN neurotransmitter phenotype during the pre-climax phase of metamorphosis.

Discussion

In this study, we report that as early as pre-metamorphosis (i.e., prior to stage 55), de novo appendicular MNs express the Islet transcription factor, exhibit characteristic MN electrical properties and synaptic influences, are involved in locomotor bouts of activity, and project to limb bud muscles to evoke typical post-junctional responses. Unexpectedly, however, these MNs do not at this premature stage use standard cholinergic neurotransmission to activate their target muscles; rather, it is not until stage 55 that the limb MNs and their NMJs undergo a simultaneous switch from non-cholinergic to acetylcholine-dependent signaling. Thus our data show for the first time that in metamorphosing Xenopus, MNs that innervate the newly emerging hindlimbs first develop through the employment of a transient, but functional, alternative transmitter mechanism before a conventional and definitive cholinergic phenotype appears. Classically in vertebrates, developing MNs are cholinergic throughout their development and extend their axons out of the spinal cord towards target muscles to form functional NMJs (Phelps et al., 1991; Goulding, 1998). However, in Xenopus the ontogeny of appendicular MNs does not obey this well-established developmental pattern, since the present data indicate that before the acquisition of a cholinergic phenotype, these MNs are able to transmit centrally-generated motor commands to the developing limb bud muscles using one or more different transmitter effectors. Several lines of evidence indicate that glutamate plays a major role in this precursor signaling at early metamorphic stages. Immunolabeling of stage 52 preparations revealed the presence of the vesicular glutamate transporter VGluT1 in identified (retrogradely-labeled) hindlimb-bud MNs, both in their cell bodies, as has been reported in mammalian CNS neurons (Nakamura et al., 2005; Yang et al., 2014), and in their peripheral axons (Melo et al., 2013). Significantly, we found the axonal VGluT1 to be co-localized with the synaptic protein synaptophysin, indicative of a presence at the actual NMJ presynaptic terminals. Also essential to synaptic signaling is the expression of appropriate receptors at the post-junctional level in correspondence with the type of pre-synaptic neurotransmitter released. Accordingly, we observed that the use of glutamate prior to ACh as a peripheral neurotransmitter is matched by the presence of glutamate receptors prior to the appearance of cholinergic receptors at the limb NMJs.
Such a parallel developmental sequence is therefore consistent with neurotransmitter phenotype switching, whereby intrinsic molecular processes lead to a change from one (or several) transmitter(s) to another within the same presynaptic neuron, along with a concomitant modification in the associated postsynaptic receptor subtype in order to ensure synapse functionality. This loss/gain in pre-synaptic neurotransmitter/post-synaptic receptor phenotype has been reported to occur in developmental, post-lesional or activity-dependent contexts and is considered to be a major underlying feature of neuronal plasticity (Spitzer, 2017). It can either maintain or invert synaptic sign (Borodinsky et al., 2004; Borodinsky and Spitzer, 2007) and is generally associated with observable behavioral changes (Sillar et al., 1998; Demarque and Spitzer, 2010). Our immunohistochemical evidence for an early role of glutamate at the developing limb NMJ was also supported by electrophysiological data on the effects of CNQX/AP5 on stimulus-evoked EMG responses in isolated motor nerve/limb muscle preparations. In direct contrast to older, post-stage 55 preparations, where these glutamate antagonists were without any effect on evoked muscle potentials, at younger stages 52-54, EMG amplitudes in response to nerve electrical stimulation were strongly reduced. However, the fact that this blockade remained only partial (ca. 50%, in the presence of CNQX/AP5 alone or in combination with the cholinergic receptor antagonist d-tubocurarine) might have been due to the low concentrations of ionotropic glutamate receptor antagonists used in our experiments (cf. Dale and Roberts, 1985), or to the antagonists not being fully effective on the still immature limb neuromuscular system. Alternatively, metabotropic glutamate receptors may be present (Pinard et al., 2003) and contribute to neurotransmission at the early developing limb NMJ, or (an)other, as yet unidentified, neurotransmitter(s) may be involved. In Xenopus, limb MNs are born during pre-metamorphosis, and their number then diminishes after the establishment of NMJs (Hughes, 1961; Prestige, 1967). This in turn raises the possibility that the initial non-cholinergic motoneuron population we identify here constitutes a short-lived subclass of LMC neurons that project transiently into the limb buds at pre-metamorphic stages, but is then totally replaced by a different, cholinergic population as metamorphosis proceeds. However, our finding that limb bud MNs retrogradely labeled at stage 51 are still present at stage 62, when the population of appendicular MNs (including our early-labeled neurons) is fully established and uniformly cholinergic (Baldwin et al., 1988), argues against this hypothesis. The possibility that the initial non-cholinergic phenotype may be restricted to a specialized motoneuronal sub-population that innervates limb muscle spindles is also unlikely, since frog spindles are mainly innervated by collateral branches of skeletal MNs (Katz, 1949; Gray, 1957). Although a specific fusimotor innervation has been reported in the bullfrog (Fujitsuka et al., 1987), these MNs appear to be restricted to the semitendinosus muscle only and would thus constitute a very discrete motor subset, whose small proportion does not correspond to the large number of early non-cholinergic LMC neurons that we find in pre-metamorphic Xenopus.
Thus, our data strongly indicate that the same motoneuronal population in the developing appendicular motor system of Xenopus does indeed consecutively utilize two different neurotransmitter signaling mechanisms in association with the emergence of hindlimb motility (Hughes and Prestige, 1967) and forthcoming limb-based locomotor and postural control (Combes et al., 2004; Beyeler et al., 2008). Given that limb MNs acquire their cholinergic phenotype around stage 55 and remain cholinergic throughout adulthood (Baldwin et al., 1988), the existence of an early, brief but functional non-cholinergic phenotype is at first sight puzzling. In a broader context, a number of studies have reported the co-existence of various neurotransmitters in vertebrate MNs. For instance, axial MNs of Xenopus embryos co-express glutamate and ACh (Fu et al., 1998), while developing myotome fibers initially express a variety of receptors, including both cholinergic and glutamate subtypes, until NMJ formation, when solely cholinergic receptors are preserved (Borodinsky and Spitzer, 2007). Co-released glutamate regulates the development and function of cholinergic neuromuscular synapses in larval zebrafish (Todd et al., 2004) and Xenopus (Fu et al., 1998) by potentiating ACh release through an activation of presynaptic receptors on the MN terminals themselves. Moreover, mammalian spinal MNs co-release glutamate and ACh centrally to activate Renshaw cells (Mentis et al., 2005; Nishimaru et al., 2005; Lamotte d'Incamps and Ascher, 2008), as well as at their peripheral terminals (Waerhaug and Ottersen, 1993; Rinholm et al., 2007), where post-synaptic muscle fibers possess both ACh and glutamate receptors (Mays et al., 2009). Here again, however, in no such case has glutamate been reported to produce muscle activation per se; rather, this transmitter acts indirectly by regulating ACh's own impact at the NMJ (Vyas and Bradford, 1987; Malomouzh et al., 2003; Pinard et al., 2003) via the activation of post-synaptic NMDA receptors (Pinard and Robitaille, 2008; Petrov et al., 2013). On the other hand, glutamate is the predominant excitatory neurotransmitter in the vertebrate CNS, and supraspinal glutamatergic neurons can re-specify functional glutamatergic NMJs from otherwise purely cholinergic synapses on mammalian skeletal muscles following grafting with a transected peripheral motor nerve (Brunelli et al., 2005). Given such a short-term reorganizing capability and the fact that glutamate is a major excitatory neurotransmitter at the NMJ of phylogenetically distant invertebrates (Gerschenfeld, 1973), it is possible that the transient employment of glutamatergic neuromuscular transmission in pre-metamorphic Xenopus is representative of a latent ancestral step in the evolutionary transition of intrinsic molecular programming of the NMJ to cholinergic-dependent signaling. A further and more appealing possibility is that the switching process is related to early appendicular MN axon path-finding and initial NMJ formation, because of the unusual context of frog metamorphic development in which secondary limb MN axons must grow to their muscle targets with the primary axial neuromuscular apparatus already in place, fully functional and using ACh as the NMJ transmitter (Figure 7). Transplantation experiments in Xenopus have demonstrated the ability of MNs to innervate novel tissue targets (Elliott et al., 2013).
Moreover, amongst the many guidance cues that orient axon elongation during development, both target-derived signals and axon growth cone-released ACh participate in target reaching and initial synapse formation (Yin and Oppenheim, 1992; Erskine and McCaig, 1995; Yang and Kunes, 2004). On this basis, therefore, it is conceivable that the pre-existing axial neuromuscular system in Xenopus provides a potentially disturbing environment that could attract appendicular motor axons, if they were cholinergic, to make improper neuromuscular connections during pre-metamorphic development. By contrast, a matched expression of glutamatergic neurotransmitter and receptors in appendicular MNs and limb muscles during pre-metamorphosis could ensure the correct orientation of the developing axon growth cones towards the limb buds and the establishment of synaptic contacts appropriate for the control of limb movements (Figure 7A). Supporting this hypothesis are previous findings that morphologically normal neuromuscular connections can still develop despite the experimental suppression of either pre- or postsynaptic cholinergic partners (Westerfield et al., 1990; Misgeld et al., 2002) and that glutamatergic CNS neurons can replace cholinergic MNs and reinnervate skeletal muscle fibers after a peripheral nerve lesion (Brunelli et al., 2005). It is also relevant that, in addition to neural signaling, glutamate has been implicated in different neurotrophic functions, including cell growth and migration, during nervous system development (Nguyen et al., 2001). In Xenopus, once the initial appendicular NMJs are established, both pre- and postsynaptic partners could thereafter be instructed to switch to their definitive cholinergic phenotype (Figure 7B). The signal for such neurotransmitter re-specification remains to be determined, but thyroid hormones, which control the developmental expression of ChAT (Patel et al., 1987; Gould and Butcher, 1989) and probably also VAChT, since both proteins share the same gene locus (Eiden, 1998), are most likely to be involved. Consistent with this possibility, the increase in thyroid hormone levels at metamorphosis onset, which starts at stage 54-55 in Xenopus (Shi, 2000), triggers a variety of gene-switching molecular programs required for the development of limb muscles and associated motor circuitry through the regulation of spinal cord neurogenesis and functions (Das et al., 2002; Marsh-Armstrong et al., 2004; Brown et al., 2005). Sensory feedback from new proprioceptors in the developing limbs may also participate in the respecification process, as found in the adult rat brain, where the occurrence of novel sensory information can trigger neurotransmitter phenotype switching in postsynaptic central neurons (Dulcis et al., 2013). In conclusion, the development of the limb neuromuscular apparatus and limb-based locomotion during Xenopus metamorphosis occurs in successive stages that involve a close functional relationship with the preexisting axial motor system until full autonomy is achieved (Combes et al., 2004) and, as shown here, is associated with changing underlying molecular patterning. Amongst the latter, neurotransmitter phenotype switching at the NMJ may enable limb motor axons to reach their appropriate muscle targets, which constitutes a novel and fundamental role for this process during motor innervation development (Spitzer, 2017) and adds to our understanding of NMJ development in general.
Materials and methods

Animals

Experiments were conducted on the South African clawed toad X. laevis obtained from the Xenopus Biology Resources Centre in France (University of Rennes 1; http://xenopus.univ-rennes1.fr/). Animals were maintained at 20-22˚C in filtered water aquaria with a 12:12 hr light/dark cycle. Developmental stages were sorted according to external body criteria (Nieuwkoop and Faber, 1956), and experiments were performed on larvae from stage 49 to 57. All procedures were carried out in accordance with, and approved by, the local ethics committee (protocols #68-019 to HT and #2016011518042273 APAFIS #3612 to DLR).

Motoneuron retrograde tracing

Procedures used for neuronal retrograde tracing were as described previously (Bougerol et al., 2015). Briefly, animals were anesthetized in a 0.05% MS-222 water solution and transferred into a Sylgard-lined Petri dish. In order to backfill MNs from their muscle targets, the skin covering the muscles of interest was dried before a tiny incision was made and crystals of fluorescent dextran amine dyes were applied intramuscularly (Forehand and Farel, 1982; van Mier et al., 1985; Roberts et al., 1999). In most cases, only hindlimb MNs were labeled, with either 3 kD rhodamine dextran amine (RDA) or 10 kD Alexa Fluor 647 (Thermo Fisher, Illkirch, France), except in the experiments illustrated in Figure 1B,C, where axial MNs were also labeled using 10 kD Alexa Fluor 488. Excess dye was washed out with cold Ringer solution (75 mM NaCl, 25 mM NaHCO3, 2 mM CaCl2, 2 mM KCl, 0.5 mM MgCl2, and 11 mM glucose, pH 7.4). After recovering from anesthesia, larvae were kept in a water tank for 24-48 hr to allow tracer migration into MN cell bodies and dendrites. In a series of experiments (n = 4), the hindlimb buds of stage 51-52 larvae were injected and the animals were kept for several days in a separate aquarium until reaching metamorphosis climax (Figure 2E), in order to verify that early-stage MNs were preserved through later development. Generally, such a labeling approach stains a large proportion of the neurons that project processes within the bud muscles, and thus potentially labels both MNs and sensory neurons (Forehand and Farel, 1982; van Mier et al., 1985; Roberts et al., 1999). However, since MNs have centrally located cell bodies, our retrograde tracings allowed the confident identification of MN somata only within the spinal cord.

Immunofluorescence labeling

After MN retrograde labeling, spinal cords were dissected out and fixed in 4% paraformaldehyde (PFA) for 12 hr at 4˚C. Preparations were incubated in a 20% sucrose solution [in 0.1 M phosphate-buffered saline (PBS)] for 24 hr at 4˚C, then embedded in a Tissue-Tek solution (VWR-Chemicals, Fontenay-sous-Bois, France) and frozen at −45˚C in isopentane. 40 µm cross-sections were cut using a cryostat (CM 3050, Leica, Nanterre, France). Fluorescence immunohistochemistry was carried out on these spinal cross-sections using the same protocol as described previously (Bougerol et al., 2015). Briefly, after several rinsing steps and the blocking of non-specific sites (using a solution of PBS with 0.3% Triton X-100 and 1% bovine serum albumin; Sigma, St. Quentin Fallavier, France), samples were incubated with the primary antibody for 48 hr at room temperature.
After rinsing, cross-sections were incubated for 90 min at room temperature with a fluorescently labeled secondary antibody, and washed again before mounting in a homemade medium containing 74.9% glycerol, 25% Coon's solution (0.1 M NaCl and 0.01 M diethyl-barbiturate sodium in PBS), and 0.1% paraphenylenediamine. The primary antibodies used were goat anti-ChAT (1:100; Millipore), rabbit anti-VAChT (1:1000; Santa-Cruz) and mouse anti-Islet1/2 (1:250; Developmental Studies Hybridoma Bank (DSHB), University of Iowa, Iowa City, US). Secondary antibodies were donkey anti-goat and anti-rabbit, or anti-goat and anti-mouse, IgGs coupled to Alexa Fluor 488 and 568 (1:500; Life Technologies). Fluorescent immunohistochemistry was also carried out on entire or 20 µm-sectioned hindlimb buds from developmental stages 51 to 57 after overnight fixation in 4% PFA. The same labeling protocol was used as described above for spinal cross-sections. Alexa Fluor 488-conjugated α-bungarotoxin (10 µg/ml; Life Technologies) was used to label neuromuscular junctions. The primary antibody mouse anti-neurofilament associated protein (3A10; 1:100; DSHB) was used to label nerve branches innervating the limb bud. The primary antibodies rabbit anti-synaptophysin (1:500; abcam, Paris, France), goat anti-NMDA receptor subunit 1 (NR1; 1:100; abcam) or rabbit anti-NR2b (1:200; abcam), and guinea pig or mouse anti-VGluT1 (1:100; from Millipore, France and Synaptic Systems, Germany, respectively), were used to label synapses, glutamate receptors, and vesicular glutamate transporters, respectively. Note that comparable results were obtained with both anti-VGluT1 antibodies. The bud preparations were then incubated with the secondary antibodies donkey anti-mouse, anti-rabbit and anti-goat IgGs coupled to Alexa Fluor 488, 568 and 647 (1:500, Thermo Fisher). For microscope imaging, whole-mount limb buds were mounted on cavity slides in the homemade medium (see above).

Image acquisition and fluorescence quantification

Whole-mount preparations and cross-sections labeled with fluorescent material were imaged using an Olympus FV1000 confocal microscope equipped with 488, 543 and 633 nm laser lines. Images were processed using Fiji and Photoshop software. Multi-image confocal stacks were generated with 1 µm z-step intervals using a 20x/0.75 oil objective and with 0.3 µm z-step intervals using a 60x/1.4 oil objective. Figure images were obtained by orthogonal projection from multi-image stacks with artificial fluorescent colors using the freeware Fiji. ChAT, VAChT and Islet1/2 fluorescence quantifications were performed on original images from 0.3 µm z-step stacks. Fluorescence intensity was measured automatically from three equally sized ROIs in single planes, defining the slice background and the axial and appendicular MN fluorescence signals, respectively. The variation of fluorescence (ΔF/F = (F − F0)/F0) was calculated for both the axial and appendicular MN ROIs relative to background, the latter being acquired in a ventral spinal region devoid of axial and limb MN cell bodies, yet where cholinergic terminals were present. This calculation was performed on five consecutive confocal planes per slice where the appendicular motor column was identified by retrograde labeling (usually two to five 40 µm slices) and where background noise was maximal. Preparations with concomitant ChAT and VAChT fluorescent labeling were combined for statistical quantification (Figure 2C).
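For illustration, a minimal sketch of this ΔF/F measurement in the ImageJ macro language (the scripting language used elsewhere in this document) is given below. It assumes that the three equally sized ROIs have already been added to the ROI Manager in the order background, axial, appendicular; the ROI ordering and variable names are illustrative assumptions, not part of the original analysis pipeline.

// Minimal ImageJ macro sketch of the dF/F calculation described above.
// Assumes the ROI Manager contains, in order: 0 = background,
// 1 = axial MNs, 2 = appendicular MNs, all on a single confocal plane.
roiManager("select", 0);
getStatistics(area, f0);      // F0: mean background fluorescence
roiManager("select", 1);
getStatistics(area, fAxial);  // mean fluorescence over the axial MN ROI
roiManager("select", 2);
getStatistics(area, fApp);    // mean fluorescence over the appendicular MN ROI
print("Axial dF/F = " + (fAxial - f0) / f0);
print("Appendicular dF/F = " + (fApp - f0) / f0);

In the full analysis, these values would be obtained on five consecutive planes per slice and averaged, as described above.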
In situ hybridization

The trunk region of tadpoles from stages 49 to 55 was dissected, fixed with 4% PFA in 0.1 M PBS overnight at 4˚C and rinsed in 0.1 M PBS. Fixed samples were cryoprotected in 15% then 30% sucrose/PBS and embedded in Tissue-Tek (Sakura, Netherlands). Frontal sections (20 µm) of the trunks were cut at −20˚C using a cryostat, collected on Superfrost Plus slides (O. Kindler, Freiburg, Germany), dried at room temperature for 24 hr and stored at −80˚C until use. The in situ hybridization protocol used in the present study was adapted from earlier studies (Buresi et al., 2012; Bougerol et al., 2015) and consisted of the following steps. Briefly, sections were rinsed 2 × 5 min with PBS at room temperature, then 15 min in five-times-concentrated sodium chloride and sodium citrate solution (5X SSC), and were then incubated for 2 hr in prehybridization buffer (50% formamide, 5X SSC, 50 µg/ml heparin, 5 mg/ml yeast RNA, 0.1% Tween) at 65˚C. When prehybridization was complete, the prehybridization solution was removed and replaced with the same buffer containing a mix of the two heat-denatured digoxigenin-labeled ChAT1 and ChAT2 riboprobes. Hybridization was carried out overnight at 65˚C. Sections were rinsed 3 × 30 min in 2X SSC at 65˚C, then 1 hr in 0.1X SSC at 65˚C. Two final washes were performed for 5 min in MABT (maleic acid 100 mM, pH 7.2, NaCl 150 mM, Tween 0.1%) at room temperature. Sections were transferred to a blocking solution [5% blocking reagent (Roche), 5% normal goat serum in MABT] and incubated at room temperature for 1 hr, before addition of the alkaline phosphatase-coupled anti-digoxigenin antibody (1/4000) for overnight incubation at 4˚C. Sections were again washed 3 × 10 min in MABT and 2 × 5 min in PBS at room temperature, then incubated for 10 min in staining buffer (100 mM Tris-HCl, pH 9.5, 50 mM MgCl2, 100 mM NaCl, and 0.1% Tween) and transferred to BM Purple (Roche) for colorimetric detection. Finally, they were washed twice in PBS to stop the reaction, and then mounted on gelatin-coated slides in Mowiol. Sections were imaged using a Leica DM5500 B microscope connected to LAS V4.1 software. The specificity of the hybridization procedure was verified by incubating sections with the sense riboprobes, with which only background signals typical of this type of chemical reaction could be observed.

Patch-clamp recording of appendicular motoneurons

Retrograde labeling of appendicular MNs was performed using 3 kD RDA on stage 51-52 larvae. The following day, patch-clamp electrophysiological recordings of labeled MNs were made on isolated brainstem-spinal cord in vitro preparations (n = 7). After anesthesia in 0.05% MS-222, the brainstem and spinal cord, including spinal segmental ventral roots, were dissected out in cold oxygenated (95% O2, 5% CO2) Ringer solution. The preparation was then placed in a recording chamber and continuously superfused with oxygenated Ringer solution (chamber volume ~2.5 ml, ~17˚C, flow rate ~2 ml/min). Spontaneous fictive locomotor episodes were recorded from a caudal ventral root (between segments 12 and 15; Figure 3B) using a borosilicate glass suction electrode (tip diameter, ~100 µm; Clark GC 150F; Harvard Apparatus) filled with Ringer solution. The recorded signal was amplified (A-M Systems), rectified and integrated (time constant 100 ms; Neurolog System).
RDA-positive appendicular MNs were identified with a standard epifluorescence illumination system (Cy3 filter) within the whole-mount spinal cord (dorsal side opened) and subsequently visualized using a differential interference contrast microscope with an infrared video camera to facilitate the patch electrode trajectory (Figure 3B). Using an Axoclamp 2A amplifier (Molecular Devices, Berkshire, UK), whole-cell patch-clamp recordings were made with a borosilicate glass electrode (pipette resistance, 5-6 MΩ; Clark GC 150TF; Harvard Apparatus) filled with a solution containing (in mM) 100 K-gluconate, 10 EGTA, 2 MgCl2, 3 Na2ATP, 0.5 NaGTP, 10 HEPES, pH 7.3. In additional experiments (n = 2), to impose an elevated intracellular chloride concentration corresponding to that of immature neurons (Ben-Ari, 2002), including those in Xenopus (Akerman and Cline, 2006), this low [Cl−] recording solution was replaced by a high [Cl−] version that contained (in mM) 70 K-gluconate, 30 KCl, 10 EGTA, 2 MgCl2, 3 Na2ATP, 0.5 NaGTP, 10 HEPES, pH 7.3. Alexa Fluor 488 (Life Technologies), which was added to the patch pipette solution to fill the recorded neurons, allowed subsequent verification that recorded cells were indeed RDA-positive (Figure 3B). All electrophysiological signals were computer-stored using a digitizer interface (Digidata 1440; pClamp10 software; Molecular Devices) and analyzed offline with Clampfit software (Molecular Devices).

Axial and hindlimb EMG recordings in semi-intact preparations

Semi-intact preparations from stage 52 to 57 larvae were used to simultaneously record EMG activity from axial myotomes and hindlimb bud muscles (Figure 4; n = 12). Brainstem-spinal cord preparations were dissected out in the same way as for patch-clamp recording, but tail myotomes (7-10) and hindlimb buds were left attached to the spinal cord. Semi-intact preparations were fixed in a Sylgard-lined recording chamber, continuously superfused with oxygenated Ringer solution (1.3-2.1 ml/min) and maintained at 18 ± 0.1˚C with a Peltier cooling system. In some experiments, d-tubocurarine (30 µM; Sigma) was exogenously applied to block nicotinic receptor-type cholinergic synapses. Fictive locomotion sequences were generated either spontaneously or triggered by electrical stimulation of the caudal region of the brainstem (Grass S88 stimulator), and the spinal swimming pattern was monitored from a caudal ventral root (between segments 12 and 15) with a suction electrode as described above. EMG activities in rostral myotomes (7-10) and hindlimb buds were recorded simultaneously using pairs of 50 µm insulated wire electrodes connected to a differential AC amplifier (A-M Systems). Both nerve and EMG activities were digitized at 10 kHz (CED 1401, Cambridge Electronic Design, UK), and displayed and stored on computer for offline analysis with Spike2 software (CED). Discharge rates in individual nerve and EMG recordings were measured by setting an amplitude threshold to count all impulses in such multi-unit recordings. Firing rates (in spikes/s) were averaged over 10-20 locomotor cycles. The cycle period was taken as the interval between the onsets of two consecutive ventral root bursts. These consecutive burst onsets were used as a trigger for averaging the discharge rates of each EMG channel over the cycle duration.
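As a toy illustration of this cycle-by-cycle rate calculation (the original analysis was performed in Spike2; the sketch below is written in the ImageJ macro language used elsewhere in this document, and the spike and burst times are invented placeholders), suprathreshold impulse times are counted between consecutive ventral root burst onsets and divided by the cycle period:

// Placeholder spike times and ventral root burst onsets, in seconds
spikeTimes = newArray(0.05, 0.12, 0.31, 0.36, 0.55, 0.62);
burstOnsets = newArray(0.0, 0.3, 0.6);
rates = newArray(burstOnsets.length - 1);
for (c = 0; c < burstOnsets.length - 1; c++) {
    period = burstOnsets[c + 1] - burstOnsets[c];   // cycle period
    n = 0;                                          // impulses in this cycle
    for (s = 0; s < spikeTimes.length; s++)
        if (spikeTimes[s] >= burstOnsets[c] && spikeTimes[s] < burstOnsets[c + 1])
            n++;
    rates[c] = n / period;                          // firing rate, in spikes/s
}
Array.getStatistics(rates, min, max, mean, std);
print("Mean firing rate over " + rates.length + " cycles: " + mean + " spikes/s");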
EMG recordings and drug applications in isolated nerve-limb bud preparations

Isolated hindlimb-bud preparations with the sciatic and crural nerves still attached (Figure 6; n = 17) were used to record the EMG activity evoked by electrical stimulation of either appendicular nerve branch at stages 52 to 57. Under MS-222 anesthesia, the appendicular nerves were disconnected from the spinal cord and separated from the tail myotomes, taking care not to detach them from the rest of the bud. The bud and attached nerves were fixed with small pins in a Sylgard-lined recording chamber and superfused with oxygenated Ringer solution. A small incision was made at the distal extremity of the bud to allow insertion of the EMG electrodes. Either limb nerve branch was stimulated with a glass suction electrode connected to a Grass S88 stimulator through a photoelectric stimulus isolation unit (PSIU 6; Grass Instruments). Note that stimulating either branch provided similar results, and no distinction is made in this report. Single pulses (70-300 µA; 10 ms) were delivered every 100 s, and polarity inversion tests were performed in order to distinguish the stimulation artifact from the muscle response. EMG signals were amplified, integrated (time constant 10 ms) with Spike2 software and stored as described above. EMG responses were measured as the area under the integrated recording traces. 30 µM d-tubocurarine or a cocktail of 20 µM CNQX + 10 µM AP5 was added to the perfusion solution to block nicotinic or glutamatergic receptors, respectively (all drugs from Sigma). Because of the typical difficulty in washing out these drugs in such experiments, a second antagonist was generally added while the first one was still present in the bath. Control experiments in which only one drug was applied showed effects similar to those of combined drug application, and thus data from either approach were pooled in this study. In some experiments on early stages, the normal Ringer solution was replaced by a solution containing half (1 mM) of the normal concentration of MgCl2 in order to unmask any NMDA receptor-mediated component of the glutamatergic EMG response. Despite causing a noticeable increase in the control EMG amplitude, such a low Mg2+ solution had no influence on the effects resulting from subsequent antagonist application.

Statistics

After signal processing in Spike2, electrophysiological data were analyzed using Prism5 (GraphPad, USA) and OriginPro8 (OriginLab Corporation, USA). Data are shown as means and standard errors of the mean (± SEM), unless stated otherwise. For ChAT and VAChT immunofluorescence signals (Figure 2C), differences between preparation means were tested using the Kruskal-Wallis ANOVA multi-comparison test (Student-Newman-Keuls method; statistical significance is indicated in Figure 2C). For integrated EMG signals, differences of means were tested using the unpaired two-tailed Mann-Whitney U-test (statistical significance and U values are indicated in the corresponding Results section).
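As a small worked example of the mean ± SEM reporting used here (again sketched in the ImageJ macro language, with purely hypothetical values standing in for a set of integrated EMG areas):

// Hypothetical integrated EMG areas from six preparations
values = newArray(0.28, 0.31, 0.30, 0.33, 0.27, 0.32);
Array.getStatistics(values, min, max, mean, std);
sem = std / sqrt(values.length);   // standard error of the mean
print("Mean = " + mean + " +/- " + sem + " SEM (n = " + values.length + ")");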
Preparation, Imaging, and Quantification of Bacterial Surface Motility Assays

Bacterial surface motility, such as swarming, is commonly examined in the laboratory using plate assays that necessitate specific concentrations of agar and sometimes the inclusion of specific nutrients in the growth medium. The preparation of such explicit media and surface growth conditions serves to provide the favorable conditions that allow not just bacterial growth but coordinated motility of bacteria over these surfaces within thin liquid films. Reproducibility of swarm plate and other surface motility plate assays can be a major challenge. Especially for more "temperate swarmers" that exhibit motility only within agar ranges of 0.4%-0.8% (wt/vol), minor changes in protocol or laboratory environment can greatly influence swarm assay results. "Wettability", or water content at the liquid-solid-air interface of these plate assays, is often a key variable to be controlled. An additional challenge in assessing swarming is how to quantify observed differences between any two (or more) experiments. Here we detail a versatile two-phase protocol to prepare and image swarm assays. We include guidelines to circumvent the challenges commonly associated with swarm assay media preparation and with quantification of data from these assays. We specifically demonstrate our method using bacteria that express fluorescent or bioluminescent genetic reporters like green fluorescent protein (GFP) or luciferase (lux operon), or cellular stains, to enable time-lapse optical imaging. We further demonstrate the ability of our method to track competing swarming species in the same experiment.

Introduction

Many bacteria move on surfaces using various means of self-propulsion. Some motility phenotypes can be studied in the laboratory using plate assays, whose outcomes are affected by the liquid environment associated with the semi-solid plate composition. A subset of useful surface motility plate assays further involve a gas phase, typically room air. Accordingly, the outcome of any particular surface motility assay demands careful control of the interface of three phases: the properties of the local solid surface, liquid environment, and gas environment. The most commonly studied motility mode in such a three-phase assay is known as swarming. Swarming motility is the coordinated group movement of bacterial cells that are propelled by their flagella through thin liquid films on surfaces 1 . It is typically studied in laboratories using semi-solid plate assays containing 0.4%-0.8% (wt/vol) agar 1 . An array of human pathogens exploit this motility behavior to explore and colonize the human host. For instance, Proteus mirabilis uses swarming motility to move up the urethra, reaching and colonizing the bladder and kidneys 2 . Swarming motility is generally considered a precursor step to biofilm formation, the primary cause of pathogenesis in many human pathogens 3 . The swarming phenotype is highly varied among bacterial species; experimental success and reproducibility strongly rely on factors such as nutrient composition, agar type and composition, sterilization protocol (e.g., autoclaving), semi-solid media curing, and ambient moisture (e.g., seasonal changes), among others 3-5 . For some surface motility studies, the development of specific phenotypes is of great interest. Most, but not all, published studies examining swarming of P. aeruginosa show the formation of tendrils or fractals radiating from an inoculation center 3-9 .
Differences between P. aeruginosa strains have been documented 5,8 , but much of the presence or absence of tendrils can be attributed to the specific medium and protocol used for these swarm motility plate assays. Here we include details on how to promote tendril-forming swarms for P. aeruginosa. Because P. aeruginosa is just one of many swarming bacteria, we also include details for our method to examine swarming of Bacillus subtilis and gliding of Myxococcus xanthus. As with P. aeruginosa, current research on B. subtilis and M. xanthus spans an array of topics, as researchers are working to discern aspects of sporulation, motility, stress response, and transitional behavior 1,10 . There is a need to quantify the patterns and dynamics of the specific behavior(s) of these cells in swarming groups. Surface motility data acquisition, analysis, and interpretation can be cumbersome and qualitative. We have developed a protocol for the detailed macroscopic analysis of bacterial swarms that provides, in addition to swarm zone morphology and size (e.g., diameter), quantitative dynamic information regarding swarm expansion rate and bacterial or bioproduct density distribution 7 . Furthermore, this method can take advantage of available fluorescent proteins, luminescence, and dyes to obtain a comprehensive view of bacterial interactions 8 , as well as to track the synthesis of bioproducts (e.g., P. aeruginosa rhamnolipid 7,8 ) within a swarm.

Swarm Assay Media Preparation and Inoculation

1. Media preparation
1. Prepare 200 ml of FAB minimal medium (Materials Table) with 0.9 g of Noble agar and 0.2 g of Casamino acids (Table 1), mixing by stirring with a magnetic stir bar. Use small volumes (100-300 ml) to improve consistency between experiments.
2. Autoclave the 200 ml agar/media mixture using an exposure time of 22 min, an exposure temperature of 121.1 °C, and a fast vent option. These autoclave settings will allow proper sterilization and agar melting, but prevent agar caramelization. NOTE: Noble agar is prone to caramelization; bacterial motility is altered on caramelized agar.
3. Immediately after the sterilization cycle has finalized, close the cap of the media bottle to prevent water loss by evaporation. However, note that tight capping can cause a "vacuum sealing"-like effect on the bottle.
4. Cool the media to 50 °C while stirring at room temperature (RT) and add 2 ml of sterile 1.2 M glucose. Alternatively, place the media in a 60 °C incubator or water bath until ready to use (up to 15 hr later), and proceed as indicated. To prevent the formation of bubbles in the media, mix thoroughly using the magnetic stir bar; bubbles on the surface of the agar will prevent even swarming. NOTE: For other assays, add at this step heat-sensitive components that cannot be autoclaved, such as additional nutrients or dyes, as needed (e.g., addition of 8 µl Invitrogen Syto 63 dye per 100 ml melted agar to image M. xanthus, as shown in Representative Results, below). Addition of some dyes may affect baseline swarming behavior, which should be checked against a non-dye control.
5. In a laboratory hood, aliquot 7.5 ml of sterile media per 60 mm diameter polystyrene Petri dish and maintain the plates in a single layer (not stacked). For a larger swarming surface, aliquot 25 ml of media per 100 mm diameter Petri dish. It is important to fill the dishes on an even horizontal surface. Use a bull's eye level to check that the surface is level. NOTE: For P. aeruginosa assays, using a specific media volume per plate will improve consistency and reproducibility. For B. subtilis and M. xanthus assays, hand pouring yields results comparable to specific volume aliquots.
2. Plate curing
1. For small plates (60 mm), allow the melted agar medium to cure (both set to semi-solid and dry off excess liquid) in the hood uncovered (i.e., without lids) for 30 min. Larger plates (100 mm) require a longer curing time (see Discussion). NOTE: Alternatively, some assays may require plates to cure on the bench top overnight (20-24 hr) covered (i.e., lids on) in a single layer (Table 1). Swarming is sensitive to both excess and inadequate moisture. The humidity, airflow, and temperature of any given lab may necessitate variations to plate curing to promote optimal swarming of your bacterium.
2. Inoculate plates immediately after the drying period is over. Do not store the plates for further use.
1. Perform the "ink spread test" by spotting a test plate with a 10 µl mixture of 0.50% (vol/vol) Higgins Waterproof Black India Ink and bacterial inoculum 11 . If the ink/inoculum mixture spreads readily (i.e., does not retain droplet form) on the surface of the media, the media will need additional time to dry. NOTE: For species that are particularly sensitive to humidity (e.g., P. aeruginosa), perform a quick "ink spread test" 11 to determine if the plates are dry enough.

3. Swarm Assay Inoculation
1. Inoculate 6 ml of broth culture media (see Table 1 for details). [...] NOTE: This pre-imaging incubation allows swarms to start their development and become established before being moved to a new environment, which may or may not be optimal for swarming motility.

Macroscopic Imaging of Surface Motility Assays 7,8
1. For time-lapse imaging, after the pre-imaging incubation period place the swarm assay plates on a clear imaging plate inside a commercial in vivo imaging station. Image up to six 60 mm diameter or four 100 mm diameter plates at a time. Since the camera captures images from beneath the imaging plane, invert the plates so that the optical path is not obstructed 8 . Alternatively, incubate at 30 °C or 37 °C (Table 1) for the duration of the experiment, and remove the plates to be imaged from the incubator at set time intervals.
2. Place the lids of the Petri dishes upright on top of the plate counterpart that holds the inoculated media. Fill the lids of the Petri dishes with water to prevent excessive drying during imaging, and enclose the entire setup using another clear tray to maintain humidity throughout the experiment.
3. Using Molecular Imaging (MI) software 12 , run assay(s) at room temperature using the imaging settings described in Table 2. For time-lapse imaging, set up a protocol with the necessary steps and specifications.

Data Processing and Interpretation

1. Image Processing
1. Open a single image: File > Open
2. Import a time-lapse image sequence: File > Import Sequence, and select "Sort names numerically".
1. For larger time-lapse files, select "Use virtual stack" in the "Import Sequence" window to stack the exported images into appropriate categories (i.e., GFP, RFP, etc.).
3. If required, change images from 16-bit files to 8-bit files: Image > Type > 8-bit. NOTE: Some ImageJ tools require 8-bit images.
4. Determine if the intensity signal for an image or time-lapse sequence needs to be inverted. Place the cursor on a bright spot in the image (e.g., fluorescently labeled growth) and note the signal intensity "Value" from the ImageJ toolbar.
Then, place the cursor in a dark spot outside the plate area and note the signal intensity. If the signal intensity for the dark spot is larger than the intensity for the bright spot, the image signal intensity needs to be inverted (follow substeps 1-2 below).
1. Invert the intensity signals: Edit > Invert
2. Invert the lookup table: Image > Lookup Tables > Invert LUT
5. Subtract the background: Process > Subtract Background, and use a "Rolling ball radius" with a pixel radius that is one half of one image dimension (e.g., 1,000 pixels for a 2,000 x 2,000 pixel image).
6. Artificially color an image or time-lapse sequence: Image > Lookup Tables, and select the appropriate color from the list options.
7. For movies with two or more channels, merge and balance the colors prior to saving as a movie (Image Processing, Step 8).
8. [...]

Data Analysis

1. Measuring Swarm Area
1. [...]
2. To calculate the diameter of the plate in pixels, draw a line across the center of an assay plate with the "Straight" tool from the toolbar and measure its length: Analyze > Measure
3. The default measurement unit in ImageJ is the pixel. Obtain a conversion factor by dividing the diameter of the assay plate (e.g., 60 for a 60 mm plate) by the pixel length obtained in the previous step.
4. Change the unit of measurement from pixel to mm: Image > Properties
1. Change the "Unit of Length" to "mm", and the "Pixel Width", "Pixel Height", and "Voxel Depth" to the conversion factor calculated in the previous step. Select the "Global" box to maintain this conversion factor across multiple images. NOTE: If ImageJ is closed and reopened, or the field of view of an image is changed (i.e., one image is zoomed in more than another), the conversion factor must be recalculated. Alternatively, perform all analyses in pixels and then convert to mm.
5. For every frame, trace and measure the swarm area: use the "Freehand Selections" tool in the toolbar to trace the outline of the swarm and measure the area using: Analyze > Measure. This will generate a measurements log that can be saved for further analysis in Microsoft Excel or similar programs: File > Save As

2. Acquiring Bacterial Surface Growth Intensity to Quantify Surface Growth Rate
1. Once the background is subtracted (Image Processing, Step 5), use the last frame of the sequence to determine the maximum area of the swarm (Data Analysis, Step 1).
2. As an alternative to the previous section (Data Analysis, Step 2), use the ImageJ Macros plugin to set up and run a Macros surface growth intensity measurement script.
1. Set up an automated measurement script to analyze multiple frames simultaneously: Plugin > New > Macro, then paste the provided script (below) into the box and save it as an ImageJ Macros text file: File > Save, saving to the ImageJ application folder under "macros".

numberOfFrames = N;          // N = number of frames in the image sequence
for (i = 0; i < numberOfFrames; i++) {
    run("Measure");          // measure the current frame
    run("Next Slice [>]");   // advance to the next frame
}

NOTE: Here the variable "N" stands for an as-yet-undefined number of frames.
2. Edit "numberOfFrames" in the Macros plugin for each experiment to reflect the number of frames in the image sequence prior to running the script. Use: Plugin > Macros > Edit, then type in the correct number of frames in the sequence and save (File > Save).
3. Follow substeps 1-3 in Data Analysis, Step 2, and while on the first frame of the sequence run the Macros plugin: Plugin > Macros > Run. This will generate a measurements log that can be saved for further analysis in Microsoft Excel or similar programs: File > Save As

Representative Results

Variation in plate preparation can greatly influence swarming motility.
Representative Results
Variation in plate preparation can greatly influence swarming motility. The curing or drying time after pouring of the melted agar medium affects the thin liquid film present on surface motility assays and the bacterial motility over time. Changes in nutrient composition also affect swarming for several bacteria. Figure 1A shows a short-term effect of drying time upon the spreading of India Ink and the spreading of an initial inoculum of Bacillus subtilis 11. Figure 1B shows the effect of drying time, and Figure 1C shows the effect of ammonium sulfate [(NH4)2SO4], upon subsequent tendril development by swarming P. aeruginosa 5. Quantifiable data can be obtained from endpoint images of surface motility using multiple imaging strategies. Figure 2 shows representative surface growth results for P. aeruginosa swarming and its associated GFP fluorescence image; B. subtilis swarming and its associated bioluminescence image; and Myxococcus xanthus surface growth and the associated red fluorescence image of SYTO 64-stained cells. Expanding data acquisition beyond inspection and imaging of end-point results allows for the study of dynamic behavior(s) of surface-growing bacteria. Figure 3 7 shows an example of P. aeruginosa swarming (imaged for GFP-expressing cells) and its associated rhamnolipid production.
Discussion
Achieving reproducible swarming in a laboratory can be challenging, as swarm assays are highly sensitive to environmental factors, such as humidity and available nutrients. The most critical aspect of a surface motility plate assay is moisture on the agar surface. Prior to inoculation, swarm media must be dry enough to prevent bacterial cells from swimming across the surface liquid, but not so dry as to inhibit swarming motility 5. Incubation should take place in a sufficiently humid environment: too little moisture can result in the assay drying out during incubation, while too much moisture can lead to artificial or artifactual surface spreading. Unless a humidity-controlled incubator is at hand, incubator and laboratory humidity can vary dramatically. Consequently, an additional water reservoir, a humidifier, or a dehumidifier within the incubator might be required to prevent over-drying or the accumulation of excess moisture while keeping the relative humidity near 80%. Maintaining this ideal humidity may prove challenging if seasonal humidity changes are significant. If this is the case, the swarm assay protocol will require some adjustments to account for seasonal changes in humidity. We have found that modifying the swarm media drying time is the simplest way to adjust for seasonal humidity changes. Constant humidity monitoring, both inside and outside of the incubator, is recommended. Further, it is recommended that researchers calibrate and validate their instruments, incubators, scales, etc., as minor errors in temperature, volume, or amounts of media components can impact the reproducibility of these assays. It should also be noted that the type and size of the plate used in the assay can affect plate moisture, and thus swarming. Airtight plates do not vent off excess moisture, thus encouraging swimming motility. In contrast, open-faced plates allow too much moisture to escape. A Petri dish provides an ideal environment because it vents off enough excess moisture to prevent liquid build-up, but retains enough moisture to prevent the media from drying out. This method details a surface motility assay protocol that allows for high-quality imaging. To keep the agar clear for imaging, 60 mm diameter dishes are filled with 7.5 ml of agar media.
If detailed imaging is not required, volumes up to 20 ml can also provide reproducible results. While swarming motility can be achieved on a wide array of agar concentrations, the optimal range of agar required for swarming depends on the species. Overall, higher agar concentrations inhibit swarming motility, and consequently the time needed to produce an image-ready swarm increases. P. aeruginosa generally swarms at agar concentrations between 0.4-0.7% 1; however, we find that optimal swarming occurs in a much narrower range (0.4-0.5%). Others, such as B. subtilis and S. enterica, swarm at 0.6% agar, and Vibrio parahaemolyticus at 1.5% agar 10. The required agar concentration is also determined by the type and brand of agar. Higher-purity agars, like Noble agar, strongly enhance swarming in P. aeruginosa and are preferred over granulated agar 13,14. However, these purified versions of agar are also more prone to caramelization during the autoclave sterilization cycle; depending on the instrument, a shortened/modified sterilization sequence (possibly altering the exhaust cycle to prevent prolonged heat exposure) may be required to prepare swarm media using Noble agar. Media composition also plays a role in the observed swarm phenotype 3. P. aeruginosa swarming motility studies are usually performed using minimal nutrient media. We prefer FAB medium 4,8 (Materials Table), but other media, such as M9, LB, or slight variations of these common media, have been used successfully 9,15,16. Tendril formation is best achieved on FAB minimal medium supplemented with glucose as the carbon source and casamino acids (CAA), but without an additional nitrogen source (i.e., (NH4)2SO4) 6,13. If tendril formation or morphology is not the main focus of the study, then FAB minimal medium (Materials Table; Table 1) devoid of CAA is recommended so that the effects of specific carbon sources and/or additional nutrients can be studied in detail. Other species, such as B. subtilis (presented here), are versatile swarmers, capable of swarming on LB and granulated agar. These species swarm readily, requiring only ~10 hr to develop a full swarm. This fast swarming rate can make following the progression of the swarm difficult, but our protocol makes such tracking very feasible. The ability to perform swarm time-lapse imaging greatly eases swarm data acquisition, particularly from such avid swarmers. We introduce a robust, comprehensive, two-phase protocol and guidelines aimed at enhancing the execution and reproducibility of bacterial surface motility research, with primary emphasis on aspects important for examining flagellar-mediated swarming. This swarm assay protocol details important aspects of media composition and handling of surface motility plates to provide greater consistency and reproducibility within and among research groups. This will improve the basis of comparison among different research studies. In addition, the presented approach and protocol provide a means to make research on swarming and surface motility less susceptible to environmental variations, both by making researchers aware that such factors affect their work and by providing possible solutions (e.g., how small changes in agar affect swarming 4,5). Furthermore, the protocol for quantifying macroscopic aspects of swarming provides an opportunity to measure many attributes of bacterial surface growth that were previously unquantifiable.
We have not examined all surface-motile bacteria in the development of this protocol. As such, it is expected that protocol modifications will be required for species not presented here. The efficiency of this protocol is restricted by the inherent limits of the equipment and materials employed. For instance, temperature-related studies are not yet possible with the Bruker imaging station, since temperature control is not a feature of the equipment. In addition, the use of dyes (such as Nile Red to stain rhamnolipids) can have kinetic and concentration limitations 8. This technique relies strongly on the processing and analysis of digital images; improved automation of data analysis (e.g., using additional Macros script functions in ImageJ) would reduce the time needed for analysis and expand the usefulness of the data. Finally, given the robustness of the imaging protocol, future applications should aim at expanding this technique to examine less uniform growth surfaces that are more relevant to surfaces colonized by environmental and pathogenic bacteria.
Disclosures
Publication fees for this article were partially sponsored by Bruker Corporation.
Parental Reports of Infant and Child Eating Behaviors are not Affected by Their Beliefs About Their Twins' Zygosity
Parental perception of zygosity might bias heritability estimates derived from parent-rated twin data. This is the first study to examine if similarities in parental reports of their young twins' behavior were biased by beliefs about their zygosity. Data were from Gemini, a British birth cohort of 2402 twins born in 2007. Zygosity was assessed twice, using both DNA and a validated parent report questionnaire at 8 (SD = 2.1) and 29 months (SD = 3.3). 220/731 (8 months) and 119/453 (29 months) monozygotic (MZ) pairs were misclassified as dizygotic (DZ) by parents, whereas only 6/797 (8 months) and 2/445 (29 months) DZ pairs were misclassified as MZ. Intraclass correlations for parent-reported eating behaviors (four measured at 8 months; five at 16 months) were of the same magnitude for correctly classified and misclassified MZ pairs, suggesting that parental zygosity perception does not influence reporting on eating behaviors of their young twins.
Introduction
Over the past century, the Twin Method has been used to investigate genetic and environmental contributions to variation in complex human traits. Researchers have been using this methodology to examine a wide spectrum of aspects of human life, accumulating a total of 17,804 investigated traits spanning disease, behavior, and opinion. Twin research is conducted worldwide, and 14,558,903 twins are currently included in a multitude of studies (Polderman et al. 2015). The classic Twin Method is based on comparing the correlations or concordance rates of traits between monozygotic (MZ) and dizygotic (DZ) twin pairs. MZs are genetic clones of one another, sharing 100 % of their genes, whereas DZs share on average only 50 % of their segregating genes. Importantly, both types of twins share their environments to a similar extent. For example, both types of twins are gestated together in the same uterus, and are raised together in one family. Any difference in resemblance between MZ and DZ pairs is therefore assumed to reflect genetic differences only. The univariate method can also be extended to understand if multiple traits share a common etiology, and to establish genetic and environmental contributions to trait stability and change over time (Rijsdijk and Sham 2002; van Dongen et al. 2012). One of the criticisms of parent-reported measures of young twin behavior is that parents are biased by their beliefs about their twins' zygosity. For example, it is possible that parents score their twins more similarly if they believe them to be identical, or more differently if they believe them to be non-identical. If this is true, heritability estimates for these traits will be inflated, because heritability is estimated by doubling the difference between the MZ and DZ correlations. This bias can be tested for directly by taking advantage of the fact that many parents are mistaken about their twins' zygosity: the so-called 'misclassified zygosity design'. Many parents of MZs mistakenly believe them to be DZs (van Dongen et al. 2012). This often results from parents being misinformed by health professionals based on prenatal scan observations, or at the twins' birth if the MZ twins are dichorionic (Ooki et al. 2004).
Researchers can take advantage of parental misclassification of zygosity to examine if twin correlations differ for MZs who are correctly and incorrectly classified by parents (the same approach can be used to test for differences between correctly and incorrectly classified DZs, although this occurs much more rarely (van Jaarsveld et al. 2012)). If the correlations for correctly and incorrectly classified MZ pairs are of the same magnitude, it is unlikely that parents are biased in their reporting by their beliefs about their twins' zygosity. Most previous studies using the 'misclassified zygosity design' have relied on self-reported zygosity by the twins themselves in order to investigate if their perception of their zygosity shapes their environmental exposure, testing the so-called 'equal environments assumption'. Results from these studies have suggested that identical twins correlate highly on behavioral traits regardless of their believed zygosity status (Scarr and Carter-Saltzman 1979; Goodman and Stevenson 1989; Xian et al. 2000; Gunderson et al. 2006). This study uses a novel application of the 'misclassified zygosity design' to test for parental bias in reporting of a range of eating behaviors in infancy and early childhood.
Sample
Data came from Gemini, a population-based British birth cohort of 2402 families with twins born in 2007 in England or Wales (van Jaarsveld et al. 2010). Ethical approval was granted by the University College London Committee for the Ethics of non-National Health Service Human Research. Participants included 816 families with opposite-sex twin pairs (DZ by default), and 1586 with same-sex twin pairs. Parents of same-sex twin pairs completed a 20-item Zygosity Questionnaire at baseline (Q1), when the twins were on average 8 months old (SD = 2.1, range 4.1-16.7 months) (Price et al. 2000). In addition, 934 families (58.9 %) completed the same questionnaire on a second occasion (Q2) when the twins were on average 29 months old (SD = 3.3, range 22.9-47.6 months). A total of 1127 families had provided DNA samples for both twins, of which 81 pairs were randomly selected for zygosity testing. Parents also completed measures of infant and child eating behavior when the twins were on average 8 months (SD = 2.1, range 4.1-16.7 months) and 16 months (SD = 1.2, range 13.4-27.4 months) old, respectively. Only data from same-sex twin pairs were used in the analyses in this study. Twin pairs with missing or inconclusive zygosity data were excluded.
Zygosity questionnaire
The items in the zygosity questionnaire relate to physical resemblance, including: general similarity; similarity of specific features such as hair color and texture, eye color, and ear lobe shape; timing of teeth coming through; and the ease with which parents, friends, and other family members can distinguish the twins. Other items ask about blood type, health professionals' opinions, and the parents' own opinion on zygosity (Price et al. 2000). The zygosity questionnaire is scored by adding up the scores obtained for each question and dividing the total by the maximum possible score based upon the number of questions answered, to create a value between 0 and 1. Lower scores indicate greater intrapair similarity, with zero representing maximal similarity and one maximal dissimilarity. Scores <0.64 were classified as MZ, scores >0.70 were classified as DZ, and scores between 0.64 and 0.70 were coded as 'unclear' zygosity, as described by Price et al. (2000).
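To make the scoring rule concrete, here is a minimal sketch in Python; the thresholds follow Price et al. (2000) as quoted above, while the function name and the simplified per-item scoring are our own illustration:

def classify_zygosity(item_scores, item_max_scores):
    # Sum the obtained item scores and divide by the maximum possible
    # score over the answered items, giving a value between 0 and 1
    # (0 = maximally similar, 1 = maximally dissimilar).
    ratio = sum(item_scores) / sum(item_max_scores)
    if ratio < 0.64:
        return "MZ"
    if ratio > 0.70:
        return "DZ"
    return "unclear"

print(classify_zygosity([1, 0, 2, 1], [3, 3, 3, 3]))  # 4/12 = 0.33 -> "MZ"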
DNA genotyping
Hyper-variable minisatellite DNA probes are used to detect multiple tandem-repeat copies of 10-15 base pair sequences scattered throughout the human genome (Hill and Jeffreys 1985; Jeffreys et al. 1985). In MZ twins the bands are identical, but they differ in DZ twins. 1127 families provided DNA for both twins using saliva samples. To validate the zygosity questionnaire, DNA was analyzed in a randomly selected sample of 81 twin pairs. In addition, some families elected to have their DNA used for zygosity testing (n = 118), and we tested a further 111 pairs who could not be classified using questionnaire data (or did not complete the second questionnaire) and who had provided DNA samples. Of these, 41 pairs recorded a mismatch between the two questionnaires; 59 pairs were classified as uncertain at one or both time points; and 24 pairs were missing the second zygosity questionnaire. A total of 310 pairs were therefore zygosity-tested using DNA. We also assessed the concordance of the 8- and 29-month zygosity questionnaire classifications with the DNA-classified zygosity for all of the pairs for whom DNA was available, to evaluate the relative accuracy of the 8- versus 29-month questionnaire. However, this sample largely included pairs who were not easily classified using the questionnaire.
Parental beliefs about zygosity
When the twins were approximately 8 months old (mean = 8.17, range 4.01-20.3), parents were asked to classify their twins as MZ or DZ, using the question: "Do you think your twins are identical? ('yes' or 'no')". Parental classifications were available for 1565 same-sex twin pairs. The same question was asked again when the twins were 29 months (SD = 3.3, range 22.9-47.6 months) old, and 898 parents responded. To gain further insight into how beliefs about zygosity are formed, parents were also asked if they had ever received zygosity information regarding their twins from health professionals, using the question: "Have you been told by a health professional that your twins are identical or non-identical?".
Baby eating behavior questionnaire
The Baby Eating Behavior Questionnaire (BEBQ) (Llewellyn et al. 2011) was completed by parents when the twins were 8 months (mean = 8.17, SD = 2.18) old. The BEBQ measures four distinct eating behaviors during the period of exclusive milk-feeding (the first 3 months after birth, before any solid food has been introduced) that have been associated with infant weight gain (van Jaarsveld et al. 2014). Satiety Responsiveness (SR) measures an infant's 'fullness' sensitivity (e.g., how easily he or she gets full during a typical milk feed). Food Responsiveness (FR) assesses how demanding an infant is with regard to being fed, and his or her level of responsiveness to cues of milk and feeding (e.g., wanting to feed if he or she sees or smells milk). Enjoyment of Food (EF) captures an infant's perceived liking of milk and feeding in general (e.g., the extent of pleasure experienced while feeding). Slowness in Eating (SE) measures the speed with which an infant finishes a typical milk feed (e.g., his or her overall feeding pace). Parents used a 5-point Likert scale (1 = Never, 5 = Always) to report how frequently they observed their infant demonstrate a range of eating behaviors characteristic of each scale. Numbers of items per scale and example items are shown in Table 1. The BEBQ is an adaptation of the Child Eating Behavior Questionnaire (CEBQ), and has been validated in a different sample (Mallan et al. 2014). Mean scores for each subscale were only calculated if a majority of the items were answered (2/3, 3/4, or 4/5).
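As an illustration of this missing-item rule (a sketch with names of our own choosing, not code from the study), a subscale mean is computed only when more than half of its items were answered:

def subscale_mean(responses, n_items):
    # responses: 1-5 Likert answers, with None for unanswered items.
    answered = [r for r in responses if r is not None]
    if len(answered) * 2 <= n_items:   # not a strict majority answered
        return None                    # treat the subscale as missing
    return sum(answered) / len(answered)

print(subscale_mean([4, None, 5, 4], n_items=4))     # 3/4 answered -> 4.33
print(subscale_mean([4, None, None, 2], n_items=4))  # 2/4 answered -> None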
Child eating behavior questionnaire (Toddler)
The Child Eating Behavior Questionnaire for toddlers (CEBQ-T) was completed by parents when their children were 16 months old (mean = 15.8, SD = 1.2). In keeping with the BEBQ, parents used the same 5-point Likert scale (1 = Never, 5 = Always) to rate the twins on six distinct eating behaviors. The CEBQ-T measures the same four traits as the BEBQ (SR, FR, EF, and SE), in relation to food rather than milk, as well as two other eating behaviors that have been associated with child weight. Food Fussiness (FF) measures a child's tendency to be highly selective about what foods he or she is willing to eat, as well as the tendency to refuse to try new food items. Emotional Overeating (EOE) captures a child's tendency to eat more in response to stress and negative emotions. The number of items per scale and example items are shown in Table 1. The CEBQ-T is a modified version of the validated CEBQ (Wardle et al. 2001), which has been validated against laboratory-based measures of eating behaviors (Carnell and Wardle 2007). The CEBQ has been widely used to establish relationships between eating behavior and pediatric weight status (Carnell and Wardle 2007; Viana et al. 2008; Webber et al. 2009; Mallan et al. 2013; Domoff et al. 2015). The CEBQ-T was modified to be appropriate for toddlers. The majority of the items in the CEBQ and the CEBQ-T are identical. However, the emotional undereating and desire to drink scales from the original CEBQ were removed, as mothers reported that their children did not engage in these behaviors. Furthermore, the wording of some EOE items was modified. Words describing the child's mood were changed to make them more age-appropriate ('worried', 'annoyed', and 'anxious' were replaced with 'irritable', 'grumpy', and 'upset'). One item of the SR scale was extended from 'my child always leaves food on his/her plate at the end of a meal' to 'my child always leaves food on his/her plate or in the jar at the end of a meal'. Finally, the item 'If given the chance, my child would always have food in his/her mouth' was omitted from the FR scale. Similar to the BEBQ, means for the CEBQ-T subscales were calculated if a majority of the items were answered (2/3, 3/4, 4/5, or 4/6).
Researcher classification of zygosity
Zygosity results from the two questionnaires were compared in the 934 pairs who had data for both, to assess the test-retest correlation and percentage agreement. The questionnaire results were compared to DNA results in the random sub-sample of 81 pairs. Analyses were performed using SPSS 22 for Windows.
Comparison of twin correlations for correctly and incorrectly classified pairs
Concordance and discordance between parents' beliefs about their twins' zygosity and zygosity as derived from the questionnaire and DNA analyses at 8 and 29 months were used to establish four groups for comparison: (1) parents who correctly classified their MZs as MZs (MZC); (2) parents who incorrectly classified their MZs as DZs (MZI); (3) parents who correctly classified their same-sex DZs as DZs (DZC); and (4) parents who incorrectly classified their same-sex DZs as MZs (DZI). This allowed for direct comparison of twin correlations between parents who misclassified and correctly classified MZ and DZ pairs. Scores for each of the BEBQ and CEBQ scales were regressed on age, sex, and gestational age of the twins. Intraclass correlations (ICCs) were calculated and compared for each of the four separate groups and for the two time points (8 and 29 months) when data on the parents' opinions regarding their twins' zygosity were collected. Parental classification of zygosity at 8 months was used to compare the ICCs for the BEBQ scales; parental classification of zygosity at 29 months was used to compare the ICCs for the CEBQ-T scales. ICCs were calculated using SPSS Version 22 for Windows.
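For readers without SPSS, the twin-pair ICC underlying these comparisons can be illustrated with a short Python sketch (our own one-way random-effects formulation for pairs, run on simulated data; this is not the study's code):

import numpy as np

def twin_icc(pairs):
    # One-way random-effects ICC for groups of size two:
    # ICC = (MSB - MSW) / (MSB + MSW), with between- and
    # within-pair mean squares MSB and MSW.
    pairs = np.asarray(pairs, dtype=float)
    n = pairs.shape[0]
    pair_means = pairs.mean(axis=1)
    grand_mean = pairs.mean()
    msb = 2.0 * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((pairs - pair_means[:, None]) ** 2) / n
    return (msb - msw) / (msb + msw)

rng = np.random.default_rng(0)
shared = rng.normal(size=200)   # pair-level (shared) component
twins = np.column_stack([shared + 0.3 * rng.normal(size=200),
                         shared + 0.3 * rng.normal(size=200)])
print(round(twin_icc(twins), 2))  # high ICC, as expected for MZ-like pairs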
Results
All opposite-sex twin pairs were classified as DZ. Zygosity questionnaire data were collected for same-sex twin pairs at 8 months (SD = 2.1; n = 1586) and 29 months (SD = 3.3; n = 934). 934 families (58.9 % of all same-sex pairs) provided questionnaire results at both time points. For the majority of pairs (n = 827, 88.5 %), the zygosity assignment matched across the two questionnaires. The Spearman correlation coefficient between the zygosity questionnaire classifications at 8 and 29 months (n = 934) was 0.80 (p < 0.001), and the Kappa statistic (a measure of agreement) was also 0.80 (p < 0.001), indicating good test-retest reliability. A total of 1127 families had provided DNA samples for both twins; of these, 81 pairs were randomly selected for zygosity testing. 107/934 pairs (11.5 %) who had questionnaire data at both time points could not be conclusively allocated using the questionnaire data: 41 pairs had a mismatch of classification between the two questionnaire time points (MZ then DZ, or DZ then MZ); 59 pairs fell into the uncertain range at either 8 or 29 months (i.e., uncertain at 8 months, then MZ or DZ at 29 months; or MZ or DZ at 8 months, then uncertain at 29 months); 7 pairs fell into the uncertain range at both time points. Therefore, where available, DNA was used to classify the zygosity of these pairs. DNA was available for 87/107 pairs, and the genotyping process was successful for 86/87 pairs (34/41 mismatches; 46/59 pairs who were uncertain at either 8 or 29 months; 6/7 pairs who were uncertain at both time points). There were also 24 pairs for whom questionnaire data were only available at 8 months, but for whom DNA was also available; for these 24 pairs, DNA was used for zygosity classification. Results from the questionnaire and the DNA testing were combined to provide the most accurate zygosity assignment for the Gemini sample. For 1239 pairs, questionnaire data only were used to allocate zygosity (n = 590 pairs with data at 8 months only; n = 636 pairs with data at both 8 and 29 months; n = 6 pairs with classification at 8 months but uncertain zygosity status at 29 months; n = 7 pairs with uncertain zygosity status at 8 months, but classified at 29 months). DNA was used for zygosity testing (n = 310 pairs), including: a random sample of 81 pairs; 86 pairs for whom zygosity could not be classified conclusively using questionnaire data; 24 pairs who only had questionnaire data at 8 months; and 119 pairs whose parents requested a zygosity test. A total of 749 twin pairs (31.2 %) were classified as MZ and 1616 twin pairs (67.3 %) were classified as DZ (including 816 opposite-sex DZ twins), based on the questionnaire and DNA results. For a further 37 pairs (1.5 %), zygosity could not be established, as questionnaire results were unclear and no DNA was provided. A detailed list of the final zygosity classifications in this sample can be found in Table 2.
Validation of the zygosity questionnaire using DNA
DNA from the random sample of 81 twin pairs was used to validate the zygosity questionnaire. DNA confirmed 43 pairs as MZ and 38 as DZ, which exactly matched the results of the questionnaires. Comparing the questionnaire results with all pairs for whom DNA was available showed high concordance between the two questionnaires and DNA. At 8 months, 279 pairs had both questionnaire-classified zygosity and DNA; the 8-month questionnaire matched the DNA results for 87.5 % of the sample. At 29 months, 248 pairs had both questionnaire-classified zygosity and DNA; the 29-month questionnaire matched the DNA results for 96.8 % of the sample.
Misclassified zygosity
At 8 months there were 1528 pairs of twins who had both researcher-classified zygosity (using the questionnaires and DNA) and parent-classified zygosity (i.e., parents had responded to the question "do you think your twins are identical?"). There was high concordance between parental classification of zygosity and researcher-measured zygosity (85.2 %). However, 30.1 % (220/731) of parents of MZ twins mistakenly believed them to be DZ. Only six parents of same-sex DZ pairs mistakenly classified them as MZs (0.75 % of parents of same-sex DZs, 6/797). At 29 months there were 898 pairs of twins who had both researcher-classified zygosity (using the questionnaires and DNA) and parent-classified zygosity. At 29 months, 26.3 % of parents of MZs (119/453) misclassified them as DZs. Again, the number of misclassified DZ twins was very low (2/445 same-sex DZ pairs). These analyses used only same-sex twin pairs; opposite-sex pairs (n = 816, 33.3 %) and pairs of unknown zygosity (n = 37, 1.5 %) were excluded. All percentages and numbers of twin pairs used in the analyses are shown in Table 3 for 8 and 29 months separately. Parental belief about zygosity was stable over time. Of the parents who responded at both 8 and 29 months, 94.9 % (852/898) held the same belief at both time points. Furthermore, 1427 parents stated that they were informed by a health professional about their twins' zygosity, and the majority agreed with the health professional's opinion (n = 1375; 96.4 %). Only a few parents (n = 52, 3.6 %) disagreed with the opinion of the health professional.
Comparison of intraclass correlations
Intraclass correlations (ICCs) of eating behaviors measured by the BEBQ and CEBQ-T were calculated separately for the different zygosity groups, based on the parental belief at 8 months and 29 months, respectively.
Baby eating behavior questionnaire
Scores from the BEBQ were regressed on sex, gestational age, and age of the children at questionnaire completion. Only six same-sex DZ pairs were misclassified as identical by the parents; because of the small sample size for these pairs, the 95 % confidence intervals were wide and reliable ICCs could not be calculated. We therefore only report the results for three groups: MZC, MZI, and DZC. Overall, there was no difference in magnitude between the ICCs for correctly classified and misclassified identical twins for any of the four eating behaviors. For SR, EF, and SE the 95 % confidence intervals overlapped, indicating that the ICCs were not significantly different for MZC and MZI.
The 95 % confidence intervals did not overlap for the ICCs for FR; however, the difference in magnitude was very small (MZC, 0.89; MZI, 0.82), and the large sample size ensured that the 95 % confidence intervals were narrow, such that trivial differences were significant. Additionally, the ICCs for the DZC group were substantially smaller than those for the MZI group for all four eating behaviors, and none of the 95 % confidence intervals overlapped.
Child eating behavior questionnaire (Toddler)
A similar pattern of results was found for eating behaviors measured by the CEBQ-T at 16 months. For each of the five eating behaviors, the magnitude of the ICCs for MZC and MZI was similar. For EF, SR, FR, FF, and SE there was no significant difference between MZC and MZI, as indicated by the overlapping 95 % confidence intervals. For EOE, the 95 % confidence intervals did not overlap, but touched, for the MZC and MZI groups. Again, the ICCs for the DZC group were substantially smaller than the MZI ICCs for each of the five eating behaviors, and none of the 95 % confidence intervals overlapped. All ICCs for the different zygosity groups and eating behaviors are presented in Table 4.
Discussion
We used the 'misclassification of zygosity' design in a novel approach to test for parental bias in reporting of similarities in infant and child eating behavior among twin pairs. We showed for the first time that parents who misclassified their MZs as DZs nevertheless scored them as similarly as the parents who correctly classified their MZs as MZs, on a range of eating behaviors. Intraclass correlations were compared for misclassified and correctly classified MZ pairs for a range of eating behaviors, measured by widely used parent-report questionnaires for infants (the BEBQ) and toddlers (the CEBQ-T). The results showed that the magnitude of the intraclass correlations was very similar across both correctly and incorrectly classified identical twins. In addition, the intraclass correlations for the correctly classified DZs were markedly smaller than those of the incorrectly classified MZs, and none of the 95 % confidence intervals overlapped across the two groups. These results indicate that parents' perceptions of their twins' zygosity did not bias their scoring of their eating behaviors, insofar as they did not score their MZ twins less similarly if they mistakenly believed them to be DZ. The problem of parental rater bias is often raised in research with infants and children. These outcomes suggest that no parental bias was found in relation to zygosity status, and they support the validity of the twin method for establishing the genetic and environmental influences on eating behaviors in infants and toddlers.
Implications
The twin method has been widely used to investigate the etiology of complex human behavior, and constant critical analysis of the assumptions underlying this method contributes to its ongoing success.
Previous studies used the misclassified zygosity methodology to test for violations of the equal environments assumption (EEA), confirming its overall validity (Felson 2014). This approach was also previously used to investigate the effect of self-reported zygosity on twin similarity of eating patterns in adulthood. Results showed that identical twins correlate more highly than DZ twins on healthy eating patterns, regardless of their self-reported zygosity (Gunderson et al. 2006), indicating that measures of eating behavior can also be used reliably in adult twin samples. In comparison to previous misclassified zygosity studies (Goodman and Stevenson 1989; Kendler et al. 1993, 1994; Xian et al. 2000; Cronk et al. 2002; Gunderson et al. 2006; Conley et al. 2013), this research is, to our knowledge, the first attempt to utilize the design in a sample of infant and toddler twins to test for biases in relation to parental belief about zygosity. As previously reported, parents can be misinformed about the zygosity of their children (Ooki et al. 2004). In this sample, 220 of 749 MZ pairs (29.4 %) were misclassified as DZ by parents when the twins were 8 months old. Previous research suggests that parental misclassification of MZs as DZs often stems from false information given by health professionals (van Jaarsveld et al. 2012). In this study, the majority (n = 1375, 96.4 %) of parents agreed with the health professional's opinion about their twins' zygosity. These results might be seen as an indicator that parents trust health professionals and base their own opinion on the judgement of a professional. However, many health professionals classify twin pairs as non-identical if a prenatal scan shows that they are dichorionic (each has their own placenta), despite the fact that approximately one third of MZ twin pairs develop with separate placentas (Hall 2003). Knowledge gaps among obstetricians and gynecologists regarding twin prenatal development have been suggested to be the cause of the misinformation (Cleary-Goldman et al. 2005). Using reliable measures of zygosity determination in same-sex twins is crucial for twin research. Additionally, zygosity classifications are important for medical reasons, such as prenatal diagnosis of genetic diseases or disorders and transplant compatibility, as well as for the identity and social development of the children (Stewart 2000; Hall 2003).
Limitations
In the current sample, only a small number of same-sex DZ pairs were misclassified as MZ (n = 6 at 8 months; n = 2 at 29 months). Intraclass correlations for these pairs were therefore often not significant and had wide 95 % confidence intervals, making them difficult to interpret, and they were therefore not included in the present analysis. A previous study of parental zygosity classification in 1244 Japanese families with twins born between 1960 and 2002 found a slightly higher (but still small) number of misclassified DZ twins (31/323 DZ pairs were misclassified as MZ). However, this study found higher rates of misclassification overall (Ooki et al. 2004). Future studies using the misclassified zygosity design would benefit from increased sample sizes to include more misclassified DZs. Larger samples would enable researchers to make comparisons between correctly classified and misclassified DZ twins, to provide more evidence in support of the validity of parental reports for the twin method. For the majority of the sample, zygosity was ascertained using a zygosity questionnaire sent to parents when the twins were 8 and 29 months old.
When comparing the questionnaire results collected at 8 months with all available DNA, zygosity ascertainment matched for 87.5 % of the sample. For data collected at 29 months, the accuracy of the questionnaire was higher, at 96.8 %, indicating that the questionnaire may be slightly more accurate for toddlers than for infants. As children might become more distinct as they grow up, it seems reasonable that parent-rated zygosity is slightly more accurate when the twins are older. Regarding these rates of accuracy overall, it is also important to acknowledge that DNA was only used to zygosity-test a subset of the sample that included twin pairs who were difficult to classify (pairs for whom there was a mismatch between the zygosity questionnaire results, and pairs whose parents requested a DNA test, implying that they were uncertain about their twins' zygosity), as well as a random sample of 81 pairs. For the random sample alone, there was a 100 % match between the questionnaire and DNA zygosity classifications. Although we feel confident that zygosity can be accurately classified using a parental questionnaire for most twin pairs, DNA genotyping remains the gold standard for zygosity ascertainment and should ideally be available for more twin pairs. Nevertheless, zygosity testing using DNA remains costly, and the use of questionnaires is more feasible for larger cohorts like Gemini. This study only assessed parental bias in relation to eating behavior in infancy and toddlerhood. Additional studies using a similar design could investigate parental bias in other parent-rated child behaviors, such as physical activity and personality. It would also be useful to understand if parental bias starts to emerge as children mature and naturally become more different from one another. Future studies using the misclassified zygosity design to assess parental bias in school-aged children would be useful.
Conclusion
A potential flaw in the twin method is parental bias in reports of similarities in twin behavior, related to perceived zygosity. The outcomes of this study suggest that there was no parental bias related to zygosity in the Gemini twin cohort when parents reported on a range of infant and child eating behaviors.
Heterogeneity and Environmental Preferences Shape the Evolution of Cooperation in Supply Networks
Supply networks as complex systems pose significant challenges for decision-makers in predicting the evolution of cooperation among firms. The impact of environmental heterogeneity on firms is critical, and environment-based preference selection plays a pivotal role in clarifying the existence and maintenance of cooperation in supply networks. This paper explores the implications of the heterogeneity of the environment and environment-based preference for the evolution of cooperation in supply networks. Cellular automata are used to examine the synchronized evolution of cooperation and defection across supply networks. The Prisoner's Dilemma Game and Snowdrift Game reward schemes have been formed, and the heterogeneous environment and environmental preference have been applied. The results show that a higher degree of environmental heterogeneity leads to more cooperation for both the Prisoner's Dilemma Game and the Snowdrift Game. We also probe the impact of the environmental preference on the evolution of cooperation, and the results confirm the usefulness of the preference of environment. This work offers a valuable perspective for improving the level of cooperation among firms and understanding the evolution of cooperation in supply networks.
Introduction
Supply networks are composed of numerous firms acting as buyers and suppliers, and links are supply relationships formed by exchanging goods and services [1]. Within the scope of buyer-supplier relationships, a supply network requires consideration of many firms across broad dimensions and factors. The complexity of the network makes it difficult for decision-makers to predict the consequences of firms' strategic decisions [2,3]. Changes in a single firm's strategy will influence the emerging properties of the entire network [4]. Large-scale supply networks are complex systems and are not supported by simple hierarchical models [5,6]. In a dynamic environment, the supply network is constantly shifting strategies and objectives, and the emergent properties of cooperation among firms are difficult to understand and explore. With respect to cooperation strategies in networks, sanctions are destructive for cooperative behaviour in a supply chain. Structural embedding provides an environment that facilitates adaptive collaborative behaviour for firms that are part of the supply network [7]. Adaptive and probabilistic strategies have been studied to explore the universality of cooperation in the Prisoner's Dilemma [8,9]. In cooperative games, the Nash bargaining solution concept has been used to study the collusion behaviour of suppliers [10]. Larger groups are less cooperative in the N-person Prisoner's Dilemma but more cooperative in the Public Goods game [11]. More massive clusters have more substantial resource concentration capabilities; at the same time, firms with more resources tend to work together more [12]. The higher incomes earned may justify the risk intrinsic to cooperation, particularly for uncertain firms [13]. As long as the game revenue-cost ratio is higher than the network connectivity, reciprocity and a multistep prediction horizon are necessary for stable collaboration and sufficient for fixed cooperation [14]. The facilitation of cooperative behaviour mainly depends on the participants' weight distribution, which is based on the formation of cooperative clusters controlled by high-weight collaborators [15].
The system's general description is derived using ordinary differential equations, which provide a common framework to simulate and quantify the effects of single-node dynamics on the macroscopic state of the network [16]. The more survivable a network is, the more dependable it will be [17]. The stronger the level of asymmetry, the higher the level of cooperation [18]. Sakiyama and Arizono find that a novel spatial Hawk-Dove model generates a characteristic population pattern and supports the collaborators' survival, where the update rules are fixed compared with the classical spatial Hawk-Dove model [19]. A comparison of three typical update rules (unconditional imitation, replicator dynamics, and the Moran process) finds that the Moran process can most improve the frequency of cooperation [20]. Although firms always pursue profit maximization, the inherent differences in members' interests make it difficult to achieve full cooperation. Therefore, it is significant to explore the evolution of cooperation in the context of supply networks. Heterogeneity has been shown to promote cooperation in the recent literature [21][22][23][24]. Heterogeneous networks are more conducive to cooperative behaviour than homogeneous networks [25]. Work on individual heterogeneity confirms the existing evidence that heterogeneity promotes cooperative behaviour almost regardless of its origin [26]. Colon and Ghil have demonstrated that heterogeneity in delay values significantly affects an economic network's response to small, localized perturbations [27]. Firms in collaborative networks are characterized as heterogeneous and autonomous. Andres and Poler propose a DDS for the collaborative selection of consistent strategies between firms belonging to collaborative networks [28,29]. Heterogeneity in product quality decay leads to different logistics network structures [30]. As the heterogeneity of supplier base costs increases, the optimal number of suppliers required decreases [31]. Valuation heterogeneity between manufacturers can mitigate unfavourable supply and demand balances, thereby protecting some of the manufacturers' surplus and leading to higher price dispersion in the supply chain network [31]. The problem of spectrum allocation in cognitive radio networks based on combined auctions has been studied while considering spectrum supply heterogeneity and demand [32]. Liu investigated the interplay between awareness and risk propagation in R&D networks considering firms' heterogeneity [33]. Preference selection has received increasing attention in the study of the evolution of cooperation in networks [34,35]. Firms have a strong preference for geographic locations suitable for multimodal transport [34]. Investment preference among firms can give rise to cooperation [35]. A firm's strategy preference can affect others' strategy selection, and the density of cooperation grows within a wide range of values of strategy preference [36]. The level of cooperation decreases with an increase in the degree of risk preference [37]. Ambiguity in risk preference and the underlying probability distribution has been incorporated into robust supply chain network design [38]. Increasing the level of preference can promote cooperative behaviour [39]. Firms like to choose neighbours with a significant degree difference to learn from, regardless of the network structure, and there is a preference intensity that leads to the greatest level of cooperation [40].
Inspired by this work, we note that in supply networks all firms are different. Firms are surrounded by neighbours with different properties, indicating that firms are embedded in a heterogeneous environment. On the other hand, a firm's fitness derives not only from interactions with neighbours but also from the payoff implications of environmental heterogeneity in the supply network. Therefore, we incorporate the heterogeneity of the environment into the fitness of firms. The heterogeneous environment signifies that when the focal firm's payoff is higher than its suppliers', its fitness increases, for the focal firm is a more influential leader in the supply network, and such leaders usually obtain additional payoff [41,42]. On the contrary, when the focal firm's payoff is less than its suppliers', its fitness decreases. If the suppliers' payoff is larger than the focal firm's, the focal firm, surrounded by high-efficiency firms, will soon go bankrupt owing to overcrowding and the difficulty of obtaining payoff. There is no doubt that preference affects a firm's decisions during strategy updating [36]. Firms can respond according to their own strategy and others' strategies. In supply networks, firms are more inclined to adopt a strategy with higher fitness. The preference of the environment means that suppliers with higher fitness are more likely to be selected by the focal firm when it adapts its strategy. The evolutionary Prisoner's Dilemma Game (PDG) and Snowdrift Game (SDG) are employed to explore the impact of the heterogeneity and preference of the environment on the evolution of cooperation in supply networks. The purpose of our research is to investigate the implications of the heterogeneity and preference of the environment in an evolving supply network and to address the following questions: (1) How does the heterogeneity of the environment in the supply network affect the evolution of cooperation in the two major types of games between firms? (2) How does the preference of the environment in the supply network affect the evolution of cooperation in the two major types of games between firms? (3) How do the heterogeneity and preference of the environment in the supply network jointly affect the evolution of cooperation in the two major types of games between firms? In this paper, we use a cellular automata (CA) simulation framework to model firms' interactions within supply networks. Simulations have been established to answer the research questions using the PDG and SDG reward schemes, representing various payoffs of interaction. The experimental results show that the coevolution of firms' strategies produces interesting emergent properties. This paper is organized as follows: Section 2 describes the settings of the methodology. We present the simulation results and analysis in Section 3. After this, discussion and conclusions are given in Section 4.
Methodology
The methodology was developed following the process shown in the flowchart in Figure 1. From Figure 1, we can see six sequential modelling steps; for simplicity, the six steps are grouped into three main parts. Firstly, supply networks are established, which describe the structure of numerous firms. Secondly, two major types of games are presented: firms play the game with their nearest neighbours and obtain their payoffs. Finally, the heterogeneous environment and the preference of the environment are designed. The three main parts are introduced in this section. For convenience, we list some symbols in Table 1.
Supply Network Context.
A supply network consists of numerous interacting firms engaged as suppliers, manufacturers, or retailers [43,44]. We observe that the buying company is permanently embedded in the supply network and is associated with the supplier companies that form the basis of its supply [45][46][47]. In the context of the CA model, each firm is expressed as a cell. For two-dimensional CA, two significant neighbourhood types are found: the Moore neighbourhood, comprising the eight adjacent cells, and the Neumann neighbourhood, incorporating the four adjacent cells. We treat buyers as focal cells, so each focal firm has eight neighbouring firms (the Moore neighbourhood) that form a supply network [48]. Firms interact with their nearest eight neighbours, which in turn interact with other buyers, and a change in one firm will influence the entire supply network, as depicted in Figure 2. We can see that a firm acts as a buyer (focal firm) in one interfirm interaction, whereas the same firm acts as a supplier (N) in another. Therefore, it is necessary to figure out the implications of firms' decisions in the light of reciprocity. In the CA model, firms have cooperation and defection strategies, with a synchronized evolution of strategies across the network [48]. Each firm starts with an initial strategy, cooperation or defection. In each step, a firm and its nearest eight neighbours play a game to obtain payoffs. According to the payoff gained, the strategy chosen by the firm for the next step is determined. By evaluating simulation behaviour, we can gather insights about the evolution of cooperation and defection in the supply network.
Game Reward Schemes.
The PDG and SDG are widely applicable games for investigating the evolution of cooperation [49]. Two players engage in a round of the game. The players each get a payoff R if they both choose cooperation and a payoff P if they both choose defection. If one player cooperates but the other defects, the cooperator receives payoff S and the defector receives payoff T; these game reward schemes are used throughout (Table 1). We have the weak PDG [39,50] if T > R > P > S and 2R > T + S, and the SDG if T > R > S > P [51]; these cover the two major types of social dilemmas in which players choose between cooperation and defection. For simplicity [52,53], but without loss of generality, the parameters in the PDG are R = 1, P = S = 0, and T = b (the temptation to defect), as shown in Table 2; the parameters in the SDG are R = 1, P = 0, T = 1 + r, and S = 1 - r (with r the cost-to-benefit ratio), as shown in Table 3. At the beginning of the game, each player randomly chooses a strategy, cooperation or defection. At each step, each player plays the game with its nearest neighbours and obtains its payoff, and the accumulated payoff of player i can be written as

P_i = Σ_{j=1}^{N_i} S_i^T M S_j,

where N_i indicates the number of nearest neighbours of player i, S_i means that player i chooses one of the strategies, cooperation or defection, so S_i = (1, 0)^T or (0, 1)^T, and M represents the payoff matrix of the game.
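As a concrete illustration of this payoff accumulation, the following Python sketch computes P_i for every cell on a lattice with Moore neighbourhoods (the periodic boundaries are our assumption; the paper does not specify its boundary handling):

import numpy as np

def accumulated_payoffs(strategies, b=1.3):
    # strategies: 2D integer array, 1 = cooperate, 0 = defect.
    # Payoff matrix indexed as [C, D] x [C, D]:
    #   M[0,0] = R = 1, M[0,1] = S = 0, M[1,0] = T = b, M[1,1] = P = 0.
    M = np.array([[1.0, 0.0],
                  [b,   0.0]])
    P = np.zeros(strategies.shape, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            # Strategy of the Moore neighbour at offset (dr, dc),
            # with periodic (wrap-around) boundaries.
            neighbour = np.roll(np.roll(strategies, dr, axis=0), dc, axis=1)
            P += M[1 - strategies, 1 - neighbour]
    return P

grid = np.random.default_rng(1).integers(0, 2, size=(50, 50))
print(accumulated_payoffs(grid).mean())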
The Heterogeneity and Preference of Environment.
We assume that a player's fitness is also affected by the environmental level of its nearest neighbours. The environment of player i is expressed by the average of its nearest neighbours' payoffs P_j, and the degree of heterogeneity of the environment can be computed with the following formulation [50,54]:

P̄_i = (1 / N_i) Σ_{j=1}^{N_i} P_j.

We evaluate the fitness of player i as follows:

F_i = P_i + u (P_i - P̄_i),

where the tunable parameter u ∈ [0, 1] signifies the contribution of the environment to fitness: u = 0 implies that the environment has no effect on fitness, and F_i equals P_i, so that player fitness is homogeneous. If u ≠ 0, the environment is incorporated into the fitness; furthermore, the environment increases (decreases) the fitness when P_i > P̄_i (P_i < P̄_i). Then, player i will select an interaction object j with the following probability [39]:

W_{i→j} = F_j^α / Σ_{k=1}^{N_i} F_k^α,

where α ≥ 0 indicates the degree of preference for the environmental level. When α = 0, the model reverts to the traditional case in which an interaction object is randomly selected from the neighbours in the supply network, while for α > 0 the environmental preference of players is present [55], and the focal player prefers to adopt the strategy of a neighbour with a higher environmental level. In every time step, the firms in the supply network interact with each other and, at the end of the time step, update their strategy to cooperate or to defect in the next time step based on the strategy that resulted in the highest fitness: if F_i ≥ F_j, player i keeps its strategy S_i; if F_i < F_j, it updates S_i to S_j [48]. Through an evaluation of the simulated behaviours over time, we can glean insights regarding the evolution of cooperation and defection in supply networks.
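A minimal sketch of one synchronous update step combining these three ingredients (environment average, fitness with the u term, and α-weighted neighbour selection) is given below in Python; the variable names and the shift that keeps the fitness weights non-negative are our own implementation assumptions, not details stated in the paper:

import numpy as np

def update_step(strategies, payoffs, u=0.5, alpha=2.0, rng=None):
    # One synchronous strategy update over the whole lattice.
    rng = rng if rng is not None else np.random.default_rng()
    n_rows, n_cols = strategies.shape
    offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0)]
    # Environment: average payoff of the eight Moore neighbours.
    env = sum(np.roll(np.roll(payoffs, dr, axis=0), dc, axis=1)
              for dr, dc in offsets) / 8.0
    # Fitness with the heterogeneous-environment term u * (P_i - Pbar_i).
    fitness = payoffs + u * (payoffs - env)
    # Shift so the preference weights F**alpha are non-negative
    # (an implementation assumption, not stated in the paper).
    fitness = fitness - fitness.min() + 1e-9
    new_strategies = strategies.copy()
    for r in range(n_rows):
        for c in range(n_cols):
            coords = [((r + dr) % n_rows, (c + dc) % n_cols)
                      for dr, dc in offsets]
            w = np.array([fitness[x] ** alpha for x in coords])
            j = coords[rng.choice(len(coords), p=w / w.sum())]
            if fitness[j] > fitness[r, c]:   # imitate the fitter neighbour
                new_strategies[r, c] = strategies[j]
    return new_strategies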
Results
A series of simulations were conducted to understand the evolution of cooperation. The first simulation concentrates on the implications of the heterogeneity of the environment; an extended simulation was designed to investigate the correlation between environmental preference and the density of cooperation; and a third simulation was used to figure out the influence of the coexistence of environmental heterogeneity and preference.
Implications of the Heterogeneity of the Environment.
We first examine the impact of the heterogeneity of the environment on the emerging and sustainable properties of cooperation in the PDG. For the time evolution of cooperation for different values of u, we used α = 0 and b = 1.3, and the result is shown in Figure 3. The resulting network reciprocity should be considered to have two main aspects, END and EXP, which have been defined in previous studies [56,57]. END refers to a period in which the global cooperation fraction decreases along the evolutionary path from the initial arrangement of cooperators and defectors. EXP refers to a period in which the global cooperation fraction increases. The evolutionary path is absorbed by an all-defectors state in the END when u = 0, 0.1, 0.2, or 0.3. The density of cooperation rises with the increase of u, and three different regimes are found: only defection, mixed strategies, and only cooperation. When u = 0, 0.1, or 0.2, all firms choose defection, and cooperators soon die out. In particular, for u = 0 the model turns into the traditional version. It should be noted that the number of time steps needed to attain complete defection differs significantly: the smaller the heterogeneity parameter u, the less time it takes. The cooperators hardly survive to keep the cooperation strategy for u = 0.3 or 0.4, and defective behaviour dominates in the supply network. Cooperators that interact with cooperative neighbours can withstand the defectors' invasion, and the strategy of cooperation dominates in the supply network when u = 0.5 or 0.6. When the parameter value u is 0.7, 0.8, 0.9, or 1, all firms choose to cooperate; the bigger the parameter u, the less time it takes to achieve full cooperation.
We have also investigated the implication of the environment's heterogeneity on the density of cooperation in the SDG, as shown in Figure 4. For the SDG, the density of cooperation rises with the increase of u. Similar to the outcome of the PDG, three regimes are found: only cooperation, mixed strategies, and only defection. The evolutionary path is absorbed by an all-cooperator state in the EXP when u = 0.8, 0.9, and 1. All firms choose to cooperate rapidly when u = 0 or 0.1, but the number of time steps to attain complete cooperation differs noticeably: the smaller the environmental heterogeneity u, the less time it takes. In particular, for u = 0 the heterogeneity-of-environment mechanism is absent, and the model reduces to a traditional SDG. With the increase of the environment's heterogeneity u, the strategies of firms tend towards cooperation, but defective behaviour still dominates when u = 0.2, 0.3, 0.4, 0.5, 0.6, or 0.7. Cooperative behaviour dominates when u = 0.8. In both the cooperation-dominated and defection-dominated cases, the number of cooperating firms fluctuates little and is almost stable at a specific value. All firms choose to cooperate when u = 0.9 or 1.
So far, we have investigated how cooperation evolves when either the parameter u or the parameter α is fixed at 0. In order to demonstrate the impact of the temptation to defect b and the heterogeneity of the environment u on the evolution of cooperation in the PDG, the parameter α is fixed at 2, and the result is depicted in Figure 5. When α = 2, the firms have an environmental preference: all firms prefer to adopt the strategy with a higher environmental level. Two significant results can be found. First, the parameter plane can be divided into three regions within which the cooperation level is roughly the same; most of the area is filled with yellow and light-yellow colour, which represents a high density of cooperation. Moreover, as the value of u becomes larger, the density of cooperation gradually expands. For example, when b = 2, the colour gradually changes from blue to yellow with the increase of u. In particular, for u > 0.8 the density of cooperation is 1 regardless of the value of b.
In order to demonstrate the impact of the cost-to-benefit ratio r and the heterogeneity of the environment u on the evolution of cooperation in the SDG, the parameter α is fixed at 2, and the result is shown in Figure 6; the yellow shading corresponds to a higher stationary density of cooperators at each particular combination of u and r, and it can be observed that u promotes cooperation better than r. We can see that complete defection occupies the smaller proportion of the plane, which indicates that cooperation dominates. When u = 0, a firm's fitness depends merely on the game payoff, and only the environmental preference mechanism is in effect. The density of cooperation is 0 for r = 0.9. With the increase of u, the density of cooperation becomes larger. In particular, for u > 0.8 all firms choose to cooperate regardless of the value of r. Increasing the value of u enhances the degree of the heterogeneous environment, which plays a significant role in advancing cooperation.
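The time-evolution experiments above can be mimicked qualitatively with a toy driver that combines the two sketches from the Methodology section (the lattice size, random seed, and step count below are arbitrary illustrative choices, not the paper's settings):

import numpy as np

# Toy driver reusing accumulated_payoffs() and update_step() from the
# sketches above; traces the density of cooperation rho_c over time.
rng = np.random.default_rng(42)
grid = rng.integers(0, 2, size=(50, 50))   # random initial C/D strategies

for t in range(200):
    P = accumulated_payoffs(grid, b=1.3)   # PDG payoffs, temptation b = 1.3
    grid = update_step(grid, P, u=0.5, alpha=2.0, rng=rng)
    if t % 50 == 0:
        print(t, grid.mean())              # rho_c = fraction of cooperators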
To investigate the implication of the environmental preference mechanism for the evolution of cooperation in the PDG, the parameter u is fixed at 0; the result is shown in Figure 7. In particular, when α = 0 the model turns into the traditional type, in which an interaction object is randomly selected from the neighbours. The evolutionary path is absorbed by an all-defector state in the END when α = 0 or 1, whereas it is absorbed in the EXP when α = 2, 3, 4, or 5. Three remarkable results can be found. First, the environmental preference mechanism between firms improves the cooperation strategy in supply networks: it is observed from the figure that the cooperators soon disappear in the traditional version, whereas the density of cooperative firms increases relative to the traditional version when α > 0, and the larger the value of α, the higher the density of cooperation. Second, there are differences in the magnitude of the growth of density; the density of cooperation increases the most as α goes from 1 to 2. Third, the density of cooperating firms gradually increases and finally stabilizes at a particular value when α = 2, 3, 4, or 5, whereas it decreases when α = 0 or 1.

In order to demonstrate the impact of environmental preference on the evolution of cooperation in the SDG, the parameter u is fixed at 0 with r = 0.8; the result is shown in Figure 8. The evolutionary path is absorbed by an all-defector state in the END when α = 0 or 1, whereas it is absorbed in the EXP when α = 2, 3, 4, or 5. Two relevant results can be found. First, the environmental preference can promote the density of cooperation: it is observed from the figure that the cooperators soon disappear in the traditional version, whereas the density of cooperation increases to 0.5 when α = 2, 3, 4, or 5.

Figure 6: results are obtained in the SDG for α = 2; the yellow shading corresponds to a higher stationary density of cooperators at each particular combination of u and r. It can be observed that u promotes cooperation better than r.

Second, the density of cooperating firms gradually increases and finally stabilizes at a specific value, and the numbers of time steps needed to attain that value differ significantly: the larger the environmental preference parameter α, the less time it takes.

To investigate thoroughly the influence of the temptation to defect b and the environmental mechanism on the evolution of cooperation, Figure 9, in which the parameter u = 0.2, indicates how ρ_c varies in dependence on the temptation to defect b for different values of α. The whole plane is divided into three regions: complete defection, mixed strategies, and full cooperation. When α = 0, the model turns into the PDG in which the cooperation strategy depends on the temptation to defect b (with u = 0.2), and the firms choose entirely to defect when b > 1.3. The density of cooperators, shown by the blue bar, varies around 0 when b = 2. For larger values of α, we can see that the environmental mechanism has an impact on the density of cooperation, and some regions of the plane are yellow and green, representing a high density of cooperators. Figure 10, in which the parameter u = 0.2, indicates how ρ_c varies in dependence on the cost-to-benefit ratio r for different values of α. The whole plane is likewise divided into three regions: full defection, mixed strategies, and full cooperation. It is worth noting that full cooperation occupies one third of the plane, which indicates that cooperation is promoted by larger α.
A large proportion of the plane is occupied by mixed strategies, as presented in Figure 4. In particular, defection dominates when r = 1 regardless of the value of α. In general, the environmental preference mechanism can guarantee a beneficial environment for cooperation.

Implications of the Heterogeneity and Preference of Environment

We now describe the implications of the heterogeneity and the preference of the environment for the density of cooperation. In the PDG, the temptation to defect is b = 1.3, as depicted in Figure 11(a); in the SDG, the cost-to-benefit ratio is r = 0.8, as depicted in Figure 11(b). When α = 0 and u = 0, the model turns into the traditional type, in which the cooperators soon die out, as represented by the blue area. With increasing α and u, the colour turns from blue to green, meaning that the density of cooperation is around 0.5. In the plane we can see that the combined mechanism impacts the density of cooperation, and most of the plane is yellow, representing a high density of cooperators.

Figure 12 shows snapshots of cooperators and defectors for different values of the parameters u and α, depicting the spatial evolution of strategies under three scenarios. In the traditional type, the cooperators are soon invaded by defectors, and all firms end up choosing the defection strategy. The introduction of environmental heterogeneity and environmental preference, however, leads to a different result: defectors initially invade cooperators and most firms choose the defection strategy, but as the evolutionary process progresses the firms, owing to the environmental heterogeneity and preference, gradually choose the cooperation strategy, although some defectors eventually remain in the game. With larger values of u and α, the cooperative cluster rapidly expands in the system, occupies almost the entire system, and the level of cooperation remains stable; compared with the previous two scenarios, this setting prompts more firms to choose the cooperation strategy.

It is interesting to explore the evolution of cooperation under environmental heterogeneity and preference at different time steps; the result is displayed in Figure 13. In the traditional type, some firms initially form compact clusters, but then all firms choose to defect. When the coexistence of environmental heterogeneity and preference is considered, compact clusters of cooperators form quickly, preventing invasion by defectors; the clusters change continuously over time, and some defectors still remain in the lattice network. With larger values of u and α, the situation becomes harmonious: many firms are attracted by the payoff of cooperation, and cooperators win the evolutionary race.

Discussion

From the above simulation results, we propose the following propositions.

Proposition 1. There is a threshold level of heterogeneity of the environment at which firms switch between cooperation and defection in both the PDG and the SDG.

In our study, we first investigate the correlation between the environment's heterogeneity and the density of cooperation. The heterogeneity of the environment signifies the contribution of the environment to fitness. Firms' cooperation increases in both the PDG and the SDG with increasing heterogeneity of the environment. Moreover, there is a threshold level of heterogeneity of the environment around which the firms' density presents three distinct regimes: if the environment's heterogeneity exceeds the threshold, increasing u leads to complete cooperation.
Otherwise, the density of cooperation remains at a mixed-strategies or total-defection value. For example, when α = 2 and u > 0.8, all firms eventually choose to cooperate regardless of the value of b in the PDG (Figure 5) and of the value of r in the SDG (Figure 6).

Figure 10: results are obtained in the SDG for u = 0.2; the yellow shading corresponds to a higher stationary density of cooperators at each particular combination of α and r.

Proposition 2. As the degree of preference for the environmental level increases, the density of cooperating firms gradually increases in the PDG but quickly stabilizes at a specific value in the SDG.

A firm's fitness is also affected by the nearest neighbours' heterogeneity of the environment, and the focal firm prefers to adopt the strategy associated with a higher environmental level. As a result, we find that environmental preference improves the density of cooperation in the PDG; moreover, the greater the degree of environmental preference, the higher the density of cooperation (Figure 7). For example, when α = 5, environmental preference increases the density of cooperation from 0 to 0.7. Compared to the PDG, environmental preference has a different influence on the density of cooperation in the SDG: with increasing α, the density of cooperation quickly reaches its maximum value, and the larger the value of the parameter α, the less time it takes to attain the specific value (Figure 8). The specific value is reached within 400 time steps when α = 2, but within 150 time steps when α = 4.

Proposition 3. Under the coexistence of environmental heterogeneity and preference, the promotion of the density of cooperating firms is more obvious in both the PDG and the SDG, and the heterogeneity of the environment has more significant implications for the density of cooperation than the environmental preference.

With increasing degree of heterogeneity of the environment and degree of preference for the environmental level, the colour representing the density of cooperation shifts from blue to yellow, where the cooperation density reaches its maximum (Figure 11). With great intensity of environmental heterogeneity and preference, a higher number of firms choose to cooperate.

Proposition 4. Under environmental heterogeneity and preference, compact clusters of cooperative firms are formed to resist the invasion of defection in both the PDG and the SDG.

The numbers of cooperators and defectors change over time, and we can see that the firms quickly form clusters to resist the defectors' invasion. In practice, individual decisions and actions constitute the dynamics of groups, functions, organizations, and ultimately supply chains. Alliances are often formed to meet the challenges of market and social forces. Effective and complete integration from suppliers to end-users helps firms improve operational performance and gain greater competitiveness. As shown in Figures 12 and 13, the firms form compact clusters with the same strategy, which prevents invasion by defectors.

Conclusions

This paper presents a supply network evolution model based on game theory and cellular automata (CA) to understand the evolution of supply networks, capturing complex interdependencies, also known as reciprocal interdependencies. The simulation experiment results show the interconnected relationship behaviours of firms in the supply network. We have investigated the influence of environmental heterogeneity and preference on the evolution of cooperation in the supply network.
The heterogeneity of the environment in which a firm is located depends on its neighbours' average payoff level. Moreover, we further investigate the implication of environmental preference for the density of cooperation: if a firm's neighbours enjoy a higher environmental level, the firm adopts, with higher probability, the strategy associated with that higher level. The two types of games applied in the experiments are the PDG and the SDG. We extend environmental heterogeneity and environmental preference to the evolutionary game to explore cooperative behaviours in supply networks. The simulations are conducted on the CA model with eight neighbours. The simulations show that when the heterogeneous environment and environmental preference are introduced, cooperation is promoted in both the PDG and the SDG. The research found a positive correlation between the heterogeneous environment and the density of cooperating firms in both the PDG and the SDG. We further found that environmental preference positively influences the density of cooperation in both the PDG and the SDG, but the density of cooperating firms gradually increases in the PDG and quickly stabilizes at a specific value in the SDG. Moreover, consistent with Proposition 3, environmental heterogeneity has more influence on the density of cooperation than environmental preference. From the characteristic snapshots of cooperators and defectors for different values of environmental heterogeneity and preference, it is found that firms that take the surrounding environment into account tend to cooperate and to form compact clusters.

The implication for management is that decision-makers should consider their own firm's behaviour and investigate the strategies of their suppliers as well as the complex relationships between focal firms and suppliers. As time goes on, decision-makers should re-evaluate the relationship, payoff, and fitness when suppliers' strategies shift. To improve the level of cooperation, environmental heterogeneity and environmental preference can be applied: it is wise to consider the environmental level of suppliers and to use environmental preference to increase the density of cooperation in supply networks.

A limitation of this study is that, although payoff, environmental heterogeneity, and preference have been considered, the supply network was modelled only on a CA lattice and did not further involve random networks, small-world networks, or scale-free networks. Besides, there are differences in the impact of environmental preference between the PDG and SDG models. From this point of view, future studies could consider the impact of the stag-hunt game, the hawk-dove game, or other types of games on the number of cooperating firms under environmental preference. Secondly, it would be interesting to consider more firm strategies and to modify firms' strategies during the simulation based on the evolving environmental heterogeneity and preference.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Functional brain changes using electroencephalography after a 24-week multidomain intervention program to prevent dementia

Quantitative electroencephalography (QEEG) has proven useful in predicting the response to various treatments, but, until now, no study has investigated changes in functional connectivity using QEEG following a lifestyle intervention program. We aimed to investigate neurophysiological changes in QEEG after a 24-week multidomain lifestyle intervention program in the SoUth Korean study to PrEvent cognitive impaiRment and protect BRAIN health through lifestyle intervention in at-risk elderly people (SUPERBRAIN). Participants without dementia and with at least one modifiable dementia risk factor, aged 60-79 years, were randomly assigned to the facility-based multidomain intervention (FMI) (n = 51), the home-based multidomain intervention (HMI) (n = 51), and the control group (n = 50). The analysis of this study included data from 44, 49, and 34 participants who underwent EEG at baseline and at the end of the study in the FMI, HMI, and control groups, respectively. The spectrum power and power ratio of EEG were calculated. Source cortical current density and functional connectivity were estimated by standardized low-resolution brain electromagnetic tomography. Participants who received the intervention showed increases in the power of the beta1 and beta3 bands and in the imaginary part of coherence of the alpha1 band compared to the control group. Decreases in the characteristic path lengths of the alpha1 band in the right supramarginal gyrus and right rostral middle frontal cortex were observed in those who received the intervention. This study showed positive biological changes, including increased functional connectivity and higher global efficiency in QEEG after a multidomain lifestyle intervention. Clinical trial registration [https://clinicaltrials.gov/ct2/show/NCT03980392] identifier [NCT03980392].

Introduction

Quantitative electroencephalography (QEEG) is a real-time, low-cost, non-invasive functional marker that reflects synaptic activity in the brain (Smailovic and Jelic, 2019). In Alzheimer's disease (AD), the hallmarks of EEG abnormalities include a shift of the power spectrum, consisting of an increase in delta and theta power and a parallel decrease in alpha and beta power, along with a decrease in the coherence of fast rhythms (Jeong, 2004). Increased theta power and decreased beta power, the earliest changes in patients with AD, have also been shown in amnestic mild cognitive impairment (aMCI) (Roh et al., 2011; Han et al., 2021). People with MCI who progressed to AD had lower alpha relative and absolute power than those with stable MCI, indicating that resting-state alpha activity declines gradually as cognitive functions are progressively impaired (Lejko et al., 2020). One important feature of QEEG in AD and MCI is the loss of small-world architecture (Stam et al., 2007; Zeng et al., 2015). In addition, QEEG is correlated with scores on the Mini Mental State Examination (MMSE) (Engels et al., 2015), fluid biomarkers (Jelic et al., 1998; Smailovic et al., 2018), and structural changes (Babiloni et al., 2009, 2013, 2015) in AD. QEEG has proven useful in predicting the response to treatment. Previous studies have demonstrated that delta and theta activity were decreased by the use of acetylcholinesterase inhibitors (Kogan et al., 2001; Adler et al., 2004; Gianotti et al., 2008).
Although several studies have reported changes in QEEG after a single-domain intervention program such as cognitive training or physical exercise (Huang et al., 2016; Gandelman-Marton et al., 2017), the intervention studies that assessed biological changes using EEG had small sample sizes. Furthermore, until now, no study has investigated changes in functional connectivity using QEEG following a multidomain lifestyle intervention program.

The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER) study is representative of studies that aim to investigate the efficacy of a multidomain intervention program in preventing dementia. This 2-year, double-blind, randomized controlled trial found that multidomain interventions could improve cognitive function in at-risk older adults (Ngandu et al., 2015). This led to a major shift in the focus of dementia research toward interventions targeting modifiable risk factors. However, the Multidomain Alzheimer Preventive Trial, a 3-year, randomized, placebo-controlled trial, failed to show prevention of cognitive decline (Andrieu et al., 2017). Furthermore, there were no differences in structural brain imaging between the intervention group and the control group in the FINGER study (Stephen et al., 2019), suggesting that this multidomain intervention program produced no demonstrable biological effects. The mixed results of multidomain intervention studies with respect to dementia prevention point to the need for further investigation of the biological effects of such programs.

We previously demonstrated that a multidomain lifestyle intervention program designed to be suitable for older Korean individuals was feasible and effective in the SoUth Korean study to PrEvent cognitive impaiRment and protect BRAIN health through lifestyle intervention in at-risk elderly people (SUPERBRAIN) (Park et al., 2020; Moon et al., 2021). In this study, we aimed to investigate the impact of a 24-week multidomain lifestyle intervention on functional brain changes in QEEG using data from the SUPERBRAIN. We hypothesized that there would be a difference in the electrophysiological changes in QEEG from baseline to the study end between the intervention and control groups.

Study population

A total of 152 participants aged 60-79 years from eight medical centers were enrolled in the SUPERBRAIN study, a 24-week, multicenter, outcome assessor-blinded, randomized controlled trial. Details of the study protocol have been described previously (Park et al., 2020). The inclusion criteria were as follows: (1) 60-79 years of age; (2) at least one modifiable risk factor for dementia such as hypertension, diabetes mellitus (DM), dyslipidemia, smoking, obesity, abdominal obesity, metabolic syndrome, low level of education (≤9 years), social inactivity, and physical inactivity; (3) a z score on the Korean MMSE (K-MMSE) above -1.5; (4) a Korean Instrumental Activities of Daily Living score <0.4 (Chin et al., 2018); (5) ability to read and write; and (6) presence of a reliable informant. Participants were excluded if they had major psychiatric illnesses, dementia, substantial cognitive decline, other neurodegenerative diseases, cancer within the past 5 years, serious or unstable symptomatic cardiovascular diseases, stent insertion in coronary vessels within the previous year, or other serious medical conditions.
In addition, subjects who were uncooperative or unable to take part in the intervention programs were excluded from this study. Figure 1 shows a flow chart of the study. All participants were randomly assigned to three groups, consisting of the facility-based multidomain intervention (FMI, n = 51), home-based multidomain intervention (HMI, n = 51), and control group (n = 50), in a 1:1:1 ratio using a permuted block randomization method with block sizes of three and six, implemented through SAS macro programming and stratified by participating center. The allocation sequence was known only to the independent statistical specialist. Cognitive outcome assessors remained blind to the assigned groups; they were not involved in the intervention activities. Participants were instructed not to discuss their study involvement with the outcome assessor. A total of 45, 49, and 42 participants completed the study in the FMI, HMI, and control groups, respectively. Among them, EEG was performed at baseline and at the end of the study in 45, 49, and 36 participants in the FMI, HMI, and control groups, respectively. In this study, we also included one participant in the HMI group who underwent follow-up EEG at early termination of the study. Due to bad EEG quality, 1, 1, and 2 participants in the FMI, HMI, and control groups, respectively, were excluded from the EEG analysis. Finally, the analysis of this study included data from 44, 49, and 34 participants in the FMI, HMI, and control groups, respectively (Figure 1).

Because EEG signals are very sensitive, measurements can contain various noise signals. We considered different types of noise (vertical electrooculogram, horizontal electrooculogram, electromyography, body shaking, swallowing, etc.). Nevertheless, if noise appears strongly throughout the data, the data lose their meaning as an EEG signal. Therefore, the criteria for excluding data were determined after checking the signal quality of the raw data. More information can be found in Supplementary Figure 1. No differences in age, sex, education level, diagnosis of MCI, vascular risk factors, depression scale, or cognition were found between participants whose EEG data were analyzed (n = 127) and those who were excluded (n = 25; Supplementary Table 1).

Intervention and evaluation

The FMI and HMI intervention groups received an intervention consisting of five components (Supplementary Figure 2). Management of metabolic and vascular risk factors consisted of six sessions with a research nurse, including two sessions with an added study physician. At each session, blood pressure, height, weight, waist circumference, smoking, and alcohol drinking were assessed. Participants were given information about their vascular risk factors and were offered prescriptions if necessary. Cognitive training consisted of computerized cognitive training (an in-house program) and workbooks targeting various cognitive domains, especially memory, for 50 min twice weekly. The physical exercise program, which consisted of aerobic exercise, muscle-strengthening activities, balance training, and exercises to enhance flexibility, was provided for 60 min three times weekly by trained exercise professionals. Based on the Mediterranean-Dietary Approaches to Stop Hypertension Intervention for Neurodegenerative Delay (MIND) diet, the nutritional intervention was designed by nutritionists to be familiar to older Koreans.
Three individual sessions (a diet tailored to the participant) and seven group sessions (education on the MIND diet and practical exercises via cooking lessons) were provided by study nutritionists. The motivation enhancement program included four 50-min group sessions to educate participants on the importance of lifestyle changes for the prevention of dementia. Participants were also encouraged to engage with the intervention program by watching pop-up pre-recorded video messages from family members. Achievement in the motivational program was assessed by the participants themselves. The waitlist control group received a booklet with lifestyle guidelines to prevent dementia. The multidomain intervention program was provided to the control group after the end of the study.

Demographic and clinical factors evaluated included age, sex, education, obesity, abdominal obesity, physical activity, social activity, apolipoprotein E genotype, and family history of dementia. Medical history was assessed, including hypertension, DM, dyslipidemia, cardiac disease, history of stroke, and MCI. Current smoking and current alcohol consumption were also evaluated. The K-MMSE and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) were used as neuropsychological tests both at baseline and at the end of the study. Blood pressure, abdominal circumference, body mass index, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, glucose, and hemoglobin A1c were measured at baseline and at the end of the study.

Figure 1. Diagram depicting the exploratory EEG substudy in the SUPERBRAIN trial. FMI, facility-based multidomain intervention; HMI, home-based multidomain intervention; EEG, electroencephalography.

Artifacts were removed in two steps. First, non-stationary bad epochs were rejected entirely. Second, stationary bad components related to eye movement, electrocardiography, or electromyography were removed by adaptive mixture independent component analysis (AMICA). At the sensor level, the absolute power of the EEG, the square of the amplitudes, was calculated using fast Fourier transform (FFT) spectral analysis in each of the following eight frequency bands: delta (1-4 Hz); theta (4-8 Hz); alpha1 (8-10 Hz); alpha2 (10-12 Hz); beta1 (12-15 Hz); beta2 (15-20 Hz); beta3 (20-30 Hz); and gamma (30-45 Hz). To calculate the relative power, the absolute power of each frequency band was divided by the total power. The band power ratios, including the theta-to-alpha (TAR), delta-to-alpha (DAR), theta-to-beta (TBR), and theta-to-beta2 (TB2R) ratios, were calculated. In the source-level analysis, standardized low-resolution brain electromagnetic tomography (sLORETA) was used with 68 regions of interest (ROIs) based on the Desikan-Killiany atlas. The imaginary part of coherence (iCoh) was calculated as functional connectivity among the 68 ROIs in the eight frequency bands (Nolte et al., 2004). Every EEG feature was analyzed with the cloud-based QEEG analysis platform iSyncBrain® (iMediSync Inc., Republic of Korea). An undirected binary network was constructed from the iCoh matrix of each frequency band, taking the density of the network (25%) into consideration (Hassan et al., 2014; Liu et al., 2017). Measurements of network nodes and edges, with nodes defined as the 68 ROIs, consisted of node degree, clustering coefficient, characteristic path length, and small-worldness (Xia et al., 2013).
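As a concrete illustration of the sensor-level features just listed, the sketch below computes absolute band power, relative power, and the four band ratios for a single cleaned channel using SciPy's Welch estimator. It is only an illustration: the study computed FFT-based spectra on the iSyncBrain platform, and the sampling rate and the random signal here are stand-ins for real recordings.

```python
# Sketch: band powers and power ratios from one artifact-free EEG channel.
# The signal and sampling rate are placeholders, not study data.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * fs)         # 60 s stand-in for a cleaned channel

bands = {"delta": (1, 4), "theta": (4, 8), "alpha1": (8, 10), "alpha2": (10, 12),
         "beta1": (12, 15), "beta2": (15, 20), "beta3": (20, 30), "gamma": (30, 45)}

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)   # 0.25 Hz frequency resolution

def band_power(lo, hi):
    """Absolute power: area under the power spectral density in [lo, hi)."""
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

absolute = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
total = sum(absolute.values())
relative = {name: p / total for name, p in absolute.items()}  # band / total power

alpha = absolute["alpha1"] + absolute["alpha2"]
beta = absolute["beta1"] + absolute["beta2"] + absolute["beta3"]
ratios = {"TAR": absolute["theta"] / alpha,             # theta-to-alpha
          "DAR": absolute["delta"] / alpha,             # delta-to-alpha
          "TBR": absolute["theta"] / beta,              # theta-to-beta
          "TB2R": absolute["theta"] / absolute["beta2"]}  # theta-to-beta2

print(relative["beta1"], ratios["TAR"])
```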
In this study, the characteristic path length was used to measure functional integration (Rubinov and Sporns, 2010).

Statistical analysis

The modified intention-to-treat population was used in the analysis. The chi-square test for categorical variables and one-way analysis of variance for continuous variables were used to compare baseline characteristics. Since the triglyceride level did not show a normal distribution, the Kruskal-Wallis test was used to compare triglyceride levels between groups. Analysis of covariance (ANCOVA) was used to compare the RBANS index scores among the groups, adjusted for baseline score. The independent t-test was used to compare the frequency band power of each channel and of the 68 ROIs, as well as the iCoh among the 68 ROIs, between the intervention and control groups. Since changes in the characteristic path length of each frequency band in the 68 ROIs did not show a normal distribution, the Mann-Whitney U test was used to compare changes in the characteristic path length of each frequency band between groups. To deal with missing data, multiple imputations were performed using a fully conditional specification implemented as a MICE algorithm (van Buuren and Groothuis-Oudshoorn, 2011). We performed predictive mean matching with 20 iterations of the imputation model. For this analysis, the MICE package of R statistical software version 4.0.5 (R Foundation, https://www.r-project.org) was used. Linear regression adjusted for age, sex, and education was used to examine the relationship between the change in the characteristic path length of each frequency band in each of the 68 ROIs and the change in the RBANS index score in each of the FMI and HMI groups. The significance of each p value in the 68 ROIs was tested by controlling the false discovery rate (FDR) with the Benjamini-Hochberg procedure for multiple testing corrections (Benjamini and Hochberg, 1995). Statistical analyses were performed with IBM SPSS version 26 (IBM, Armonk, NY, USA). Statistical significance was set at p < 0.05.

Results

The baseline clinical characteristics of all participants are shown in Table 1. No differences were found in demographic factors, medical history, vascular risk factors, lifestyle factors, or cognition among the three groups. Changes in the total scale index score (p = 0.002) and visuoconstruction index score (p < 0.001) of the RBANS between pre-intervention and post-intervention showed improvement in all intervention groups, including the FMI and HMI groups, compared to the control group. Compared with the control group, the RBANS total scale index score and the visuoconstruction index score were also significantly improved in each of the FMI and HMI groups (Table 2). In addition, a statistical trend toward improvement was observed in the attention index score in the HMI group (p = 0.099) and in the delayed memory index score in the FMI group (p = 0.050).

Changes in quantitative electroencephalography parameters in all intervention groups

The sensor-level analysis of EEG showed an accelerating rhythm pattern, with alpha1 power decreasing at F7 (p = 0.026), F8 (p = 0.044), C3 (p = 0.049), and T6 (p = 0.040) in all intervention groups, including the FMI and HMI groups, compared with the control group (Figure 2A). In addition, increases in the relative power of the beta1 band in the occipital region (p = 0.041) and in the absolute power of the beta3 band in the right parietal region (p = 0.022) were observed in all intervention groups compared with the control group (Figures 2B,C). Although the difference was not statistically significant (p = 0.079), the intervention groups showed an increasing tendency of the occipital alpha peak frequency (mean difference = 0.017) in the O2 area, whereas the control group showed the opposite pattern (mean difference = −0.242). The functional connectivity analysis showed an increase in the iCoh of the alpha1 band, the default resting-state oscillating rhythm, in all intervention groups, whereas the control group showed the opposite result (Figure 3A).
Although these differences were not statistical significant (p = 0.079), the intervention groups showed an increasing tendency of occipital alpha peak frequency (mean difference = 0.017) in the O2 area, whereas the control group showed the opposite pattern (mean difference = −0.242). The functional connectivity analysis showed an increase in the iCoh of the alpha1 band, the default resting-state oscillating rhythm, in all intervention groups, whereas the control group showed the opposite results ( Figure 3A). Compared to the control group, Values are shown as the mean ± SD. RBANS, Repeatable Battery for the Assessment of Neuropsychological Status; FMI, facility-based multidomain intervention; HMI, home-based multidomain intervention. *Analysis of covariance with each baseline score as a covariate. the characteristic path length of alpha1 band was decreased in the right supramarginal gyrus (p = 0.003) and right rostral middle frontal cortex (p = 0.003) in all intervention groups after multiple imputation for missing data ( Table 3). Changes in quantitative electroencephalography parameters in facility-based multidomain intervention group In the FMI group compared to the control group, sensor-level analysis showed a decrease in the absolute power of the alpha2 band (p = 0.034) in the temporal area and a decrease in the relative power of the alpha1 band (p = 0.035) in the left temporal cortex (T3). Additionally, an increase in the absolute power of the beta3 band was shown in the right parietal area (P4, p = 0.038) in the FMI group compared to the control group. FMI group exhibited an increasing tendency of the occipital alpha peak frequency in the O2 area (mean difference = 0.069) compared to the controls (mean difference = −0.242), but the difference was not statistically significant (p = 0.089). The functional connectivity analysis showed an increase in the iCoh of the alpha1 band, the default resting-state oscillating rhythm, in the FMI group compared to the control group ( Figure 3B). Brain network analysis showed a decreased characteristic path length of alpha1 band in the right rostral middle frontal cortex (p = 0.007) in the FMI group compared to the control group ( Table 3). The control group showed a decrease in characteristic path length (p = 0.002) compared with the FMI group in the right lateral occipital cortex ( Table 3). Changes in quantitative electroencephalography parameters in home-based multidomain intervention group In a comparison between the HMI and the control groups, the HMI group showed an accelerating alpha1 brain rhythm pattern in the frontal (F7, p = 0.027; F8, p = 0.048), central (C3, p = 0.047), and temporal regions (T6, p = 0.036) decreased more than in the controls whereas the relative beta1 band in the frontal region (F3, p = 0.047) increased more in the HMI than in the control group. The functional connectivity analysis showed an increase in the iCoh of the alpha1 band in the HMI group compared to the control group ( Figure 3C). Brain network analysis revealed that the characteristic path length of the alpha1 band was decreased in the right supramarginal gyrus (p = 0.009) and left temporal pole area (p = 0.007) in the HMI group compared with the control group (Table 3). 
Associations between characteristic path length change and Repeatable Battery for the Assessment of Neuropsychological Status change

The improvement in the RBANS total scale index score was associated with a decrease in the characteristic path length of the alpha1 band in the left medial orbitofrontal cortex in the FMI group and in the right posterior central cortex in the HMI group (Table 4). The improvement in the visuoconstruction index score of the RBANS was associated with a decrease in the characteristic path length of the alpha1 band in the left parahippocampal cortex and the right frontal pole in the FMI group. There was no association between the change in the characteristic path length of the alpha1 band in each of the 68 ROIs and the change in the index scores of the other cognitive domains of the RBANS in the FMI group, nor between the change in the characteristic path length of the alpha1 band in each of the 68 ROIs and the change in the index scores of any of the five cognitive domains of the RBANS in the HMI group.

Discussion

This is the first study to use QEEG to investigate functional brain changes following a multidomain lifestyle intervention program to prevent dementia. This study found that the intervention group exhibited increases in the iCoh of the alpha1 band, in the relative power of the beta1 band, and in the absolute power of the beta3 band, as well as a decrease in the characteristic path length of the alpha1 band, compared to the controls. Additionally, a negative association between changes in the RBANS total scale index score and changes in the characteristic path length of the alpha1 band was shown in both the FMI and HMI groups.

The increased iCoh of the alpha1 band in the intervention group may be an important biological marker for improved cognition after a 24-week multidomain lifestyle intervention program. Coherence is a measure of the degree of synchronization among EEG signals from different brain regions (Hogan et al., 2003), and the iCoh has been interpreted as a measure of brain connectivity (Nolte et al., 2004). The increased iCoh of the alpha1 band in this study implied increased connectivity of the alpha1 band, in contrast to earlier findings showing a reduction in alpha coherence in AD patients (Locatelli et al., 1998; Adler et al., 2003; Hogan et al., 2003). AD is a cortical disconnection syndrome, which refers to disruptions of structural and functional connectivity in topographically dispersed brain regions (Brier et al., 2014). In addition, the cholinergic system plays a role in the modulation of intracortical connectivity; it is therefore not surprising that functional connectivity is disrupted in AD patients with a cholinergic deficit. Furthermore, interhemispheric coherence decreases with advanced age in normal older adults (Duffy et al., 1996; Kikuchi et al., 2000). In this regard, the increased iCoh among at-risk older adults in the intervention group suggested a positive change in functional connectivity and might be associated with the improvement in the RBANS total scale index score in the intervention group. The intervention group showed a 5-point increase in the total scale index score of the RBANS and an increase in the iCoh of the alpha1 band. In contrast, the control group showed no change in the same index score and exhibited a decrease in the iCoh similar to that in normal older adults.
Therefore, an increase in the iCoh of the alpha1 band may be the earliest change and may serve as a neurophysiological marker, providing evidence for biological effects on cognition in response to a multidomain lifestyle intervention.

The increases in the relative beta1 power and absolute beta3 power may be electrophysiological markers of improvement in cognition in this study. In AD patients, a decrease in alpha and beta power as well as an increase in theta and delta power has been shown previously (Garn et al., 2014). Furthermore, relative parietal beta1 power showed a negative correlation with amyloid deposition and a positive correlation with anterograde memory in MCI patients (Musaeus et al., 2018). Decreased relative power in the beta1 band could be a predictive marker for progression in MCI patients (Musaeus et al., 2018). In contrast to previous findings in AD and MCI patients, the present study found increased relative power in the beta1 band in the intervention group, which also showed improvement in the visuoconstruction index score on the RBANS. An increase in beta band power reflects top-down attentional modulation between brain areas by promoting feedback interactions across visual areas (Bastos et al., 2015). In addition, alterations in the beta band are associated with the resting-state EEG default mode network (DMN) (Chen et al., 2008). In this regard, the increased power in the beta1 and beta3 bands might suggest functional restoration of the EEG DMN, specifically the top-down attentional process.

Also of note is the decrease in the characteristic path length found in the intervention group. Quantitative analysis of complex brain networks is based on graph theory, with brain networks defined as a set of nodes or vertices and the edges or lines between them. When investigating altered features of functional brain networks, EEG provides measurement of neuronal activity with good temporal resolution (Bullmore and Sporns, 2009). AD is characterized by a loss of small-world network characteristics, as seen in a longer characteristic path length (Stam et al., 2007). As path length is defined by the minimum number of edges, a longer path length represents lower global efficiency. An increase in path length has also been shown in MCI patients, where it was negatively correlated with cognitive status (Zeng et al., 2015). In this study, we found a decrease in characteristic path length in the intervention group but an increased path length in the control group (Table 3). This finding implied a positive shift toward the restoration of functional brain networks through the multidomain intervention program, in contrast to the loss of small-world network characteristics in MCI or AD patients from the perspective of a disconnection syndrome.

Finally, the results showed a negative association between improvement in cognitive status and the change in the characteristic path length of the alpha1 band. Increased global efficiency, reflected by a shorter characteristic path length, was associated with an increase in the RBANS total scale index score in each of the FMI and HMI groups. This finding is in line with previous reports showing correlations between EEG parameters and neuropsychological test scores, despite the variety of EEG parameters examined (Brunovsky et al., 2003; Babiloni et al., 2006).
Specifically, the negative correlation between a cognitive marker and the characteristic path length, an inverse marker of global efficiency, in this study parallels previous findings of a positive correlation between small-worldness and Montreal Cognitive Assessment scores in MCI and AD patients (Frantzidis et al., 2014; Zeng et al., 2015), suggesting functional restoration of network organization as cognition improves.

This study has some limitations. First, a 24-week period for a multidomain lifestyle intervention program may be too short to confirm changes in EEG parameters; this study did not find an increase in the alpha band or a decrease in the delta or theta band. However, despite the intervention program's short duration, the increase in the iCoh of the alpha1 band and the decrease in the characteristic path length of the alpha1 band in the intervention group suggested very early changes in neurophysiological markers following a lifestyle modification program. Second, this study was originally planned as a feasibility study with a small sample size, in contrast to previous multidomain intervention studies with large sample sizes. The small sample size may lead to underestimation of the positive results of this study when assessing the biological effects of a multidomain intervention to prevent dementia. However, the sample size in this study was comparable to that of previous studies of single or combined intervention programs using EEG. Third, this study did not assess biomarkers of AD, including amyloid and tau, so information on the subjects' pathologic status was not available. The biological changes in EEG parameters after the intervention program might vary according to pathologic status; for example, subjects with preclinical AD might be less likely to show functional changes after an intervention program than normal older adults without amyloid deposition. Fourth, there were more participants without follow-up EEG in the control group than in the intervention groups, which may have influenced the study results. However, since the participants excluded from the EEG analysis in the control group were older, this would, if anything, have favored the results of the control group, and it is unlikely to have produced false-positive findings in this study. Additionally, the results of comparing changes in the characteristic path length of the alpha1 band between the intervention and control groups after excluding missing data were similar to those of the analysis after multiple imputation for missing data.

In summary, this is the first study to show positive functional brain changes using QEEG after a 24-week multidomain lifestyle intervention to prevent dementia. The increased iCoh and the decreased characteristic path length of the alpha1 band in the intervention group implied strengthened functional brain networks with higher global efficiency following a lifestyle intervention program in at-risk older adults. Further studies with larger sample sizes and/or a longer period of intervention are needed to confirm the findings of this study.

Data availability statement

Anonymized data used in this work will be available from the corresponding authors upon request.

Ethics statement
Cultural adaptation and reliability for Brazil of the Automated Telephone Disease Management: preliminary results

Objectives: To translate and culturally adapt the Automated Telephone Disease Management (ATDM) Satisfaction Scales for Brazil and to evaluate the reliability of the adapted version in Brazilian adults with diabetes mellitus (DM). Methods: A methodological study whose cultural adaptation process included translation, an expert committee, back translation, semantic analysis and pretesting. This study included a sample of 39 Brazilian adults with DM enrolled in an educational program in São Paulo. Results: The adapted version of the instrument showed good acceptance, with easy comprehension of the items by the participants and reliability ranging between 0.30 and 0.43. Conclusion: After analysis of the psychometric properties and completion of the validation process in the country, the instrument can be used by Brazilian researchers, making comparison with other cultures possible.

INTRODUCTION

Chronic noncommunicable diseases (NCDs) are of central importance in the current profile of population health. Estimates from the World Health Organization (WHO) indicate that NCDs now account for 58.5% of all deaths and 45.9% of the total global disease burden expressed as lost years of healthy life (1). Among these, diabetes mellitus (DM) is recognized today as a global epidemic, representing a major challenge for health systems around the world (2).

Research conducted by the Diabetes Control and Complications Trial (DCCT, 1993) and the United Kingdom Prospective Diabetes Study Group (UKPDS, 1998) showed that, for both type 1 diabetes mellitus (DM1) and type 2 diabetes mellitus (DM2), metabolic control with intensive treatment within certain limits can significantly reduce the development of complications (3,4).

To address the complexity of treatment, it is necessary to use innovative technologies in DM care, since these provide better clinical outcomes. In this sense, the telephone is an important strategy in health communication, and increased application of this technology is expected in the coming years. Telenursing is a strategy of action in health that differs from conventional nursing practice, representing a leap forward from traditional nursing care. The Ministry of Health launched, in 2006, a manual with specific recommendations for the integral care of diabetic patients and for the professional health team, recommending the following conduct: telephone contact between appointments, and planning services for emergency care of acute glucose decompensation through telephone contact.

The international literature shows an increased effort to introduce new technologies into the care of people with DM. Studies with community participation in the planning and evaluation of care are relevant because they allow feedback on the interventions performed by health teams, improving health services (10). Therefore, it is necessary to evaluate the work developed with new technologies from the perspective of the person with DM.
Thus, the strategy of using the telephone in assisting people with NCDs appears to be a way to advance and monitor care throughout the course of treatment, ensuring continuity and longitudinality of care. There is therefore a need for tools that enable the evaluation of health care services delivered by telephone, or of interventions such as educational programs for people with diabetes mellitus.

After a literature review in national and international databases, it became evident that among the tools for evaluating telephone-based health care for chronic conditions was the Automated Telephone Disease Management (ATDM) Satisfaction Scales, whose specific aim is to measure the satisfaction of people with DM after undergoing a telephone-based intervention or educational program.

The ATDM Satisfaction Scales instrument, originally developed in English by Dr. John Piette, consists of 11 items covering three areas: ease of completing the call (4 items), perceived usefulness of the call (3 items), and intrusiveness of the call (4 items). For each item of the instrument, five possible answers are offered, with scoring on a range of 1 to 5 points. Items are rated on a Likert scale: Always (1); Mostly (2); Sometimes (3); Rarely (4); Never (5) (11).

Considering the lack of instruments to assess this dimension of care in Brazilian culture, the present study aimed to translate and culturally adapt the ATDM Satisfaction Scales for Brazil and to present preliminary results on the reliability of the adapted Brazilian version for people with DM, notwithstanding the small convenience sample used in the study.

METHODS

This study is characterized as a methodological investigation, which includes the search for new meanings, interpretations of phenomena, and the development of data collection instruments (12), undertaken here to carry out the cultural adaptation and assess the reliability of an instrument measuring the satisfaction of Brazilians with DM after telephone-based interventions or educational programs.

Although research on hypothetical and actual methodological investigations has shown that a minimum sample size of 50 subjects is sufficient to adequately represent and analyze the initial psychometric properties of an instrument to be tested in another country (13), the sample of our study included 39 adult Brazilians of both genders, aged between 36 and 79 years, with 97.7% and 2.3% having DM2 and DM1, respectively, who were enrolled during 2010-2011 in the Diabetes Education Program of the Nursing Education Center for Adults and Seniors, located on the campus of the University of São Paulo in Ribeirão Preto-SP and linked to the Nursing School of Ribeirão Preto, University of São Paulo (EERP-USP). Subjects were selected using the following criteria: attendance of at least 75% of the educational group sessions of the Diabetes Education Program, age greater than 18 years, and agreement to participate.

A convenience sample was used, since participants were invited to participate in the study by phone, in the order in which they were enrolled in the Diabetes Education Program at the study site. Although not probabilistic, care was taken to maintain homogeneity of gender and age. Samples of this type can be considered representative of the population assisted by the service in question (14).
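To make the structure of the instrument concrete, the sketch below scores the 11 items into the three areas described above. The assignment of item numbers to areas is purely illustrative (the paper does not list which items belong to which area); only the 1-5 Likert coding and the 4/3/4 split are taken from the text.

```python
# Sketch of scoring the 11-item scale; the item-to-domain mapping is hypothetical.
from statistics import mean

domains = {"ease_of_completing": [1, 2, 3, 4],   # hypothetical item numbers
           "perceived_usefulness": [5, 6, 7],
           "intrusiveness": [8, 9, 10, 11]}

def score(responses: dict[int, int]) -> dict[str, float]:
    """Mean Likert score per domain (1 = Always ... 5 = Never)."""
    return {name: mean(responses[i] for i in items) for name, items in domains.items()}

example = {i: 1 for i in range(1, 12)}   # a respondent answering "Always" throughout
print(score(example))
```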
Regarding ethical issues, the present study was approved by the Ethics Committee for Human Research of the EERP as Research Protocol No. 1.175/2010, with approval on August 20, 2010. Data were collected from March to May of 2011. In all telephone interviews, the Terms of Free and Informed Consent (TFIC) were read and verbal consent was requested of all participants, ensuring their privacy and the strict confidentiality of their names.

Data collection proceeded in the following manner: initially, people were invited by telephone to participate in the study and were informed about the aims and purposes of the research through the reading of the TFIC. After verbal consent, the person was interviewed.

The directed interview was conducted by telephone using the ATDM Satisfaction Scales translated into Brazilian Portuguese and adapted to the culture of the country. Each interview lasted a mean of 10 minutes, and the data were recorded using PacTel, a program that records telephone conversations. After the interviews, the responses were manually typed into the instrument and into a database with validation and double entry, prepared in Microsoft Excel 2010.

Following this, the medical records of the 39 participants were consulted for demographic data: age in years, date of birth, gender, marital status, occupation, education level, and family income.

The adaptation process adopted for the ATDM Satisfaction Scales instrument then followed these steps: translation of the instrument into Brazilian Portuguese; obtaining a first consensus Portuguese version from the translations; evaluation by a committee of experts; back translation; obtaining a consensus English version and comparing it with the original; semantic analysis of the items; and a pre-test (15).

The translation of the ATDM Satisfaction Scales instrument in its original version (ATDM-OV) from English into Brazilian Portuguese was conducted individually by two bilingual persons with knowledge of the subject. The two versions were analyzed by the researchers involved in this study, resulting in the Consensual Portuguese Version 1 (ATDM-CPV1).

The evaluation was then performed by an expert committee composed of researchers, health professionals, teachers with experience in diabetes mellitus care and nursing communication, language professionals and translators, in order to assess the cultural, conceptual, semantic and idiomatic equivalence between the ATDM-CPV1 and the ATDM-OV instrument. Potential modifications of the equivalences were accepted when they obtained the approval of at least 85% of the total number of members of the expert committee, resulting in the Portuguese Consensual Version 2 (ATDM-PCV2).

The ATDM-PCV2 was then submitted to two individual translators, unaware of the study objectives, born in the United States of America and resident in Brazil, for back translation. The two back-translated versions were analyzed in a meeting of both translators and the researchers involved in the study, and the final English version of the ATDM Satisfaction Scales instrument (ATDM-FEV) was determined, which was compared with the ATDM-OV and submitted for assessment by the instrument's main author, Dr. John Piette. The comparison between the ATDM-OV and ATDM-FEV versions resulted in no change to the Portuguese Consensual Version 2 (ATDM-PCV2).
For the semantic analysis of the instrument items, the ATDM-PCV2 was applied to five people with diabetes enrolled in the Diabetes Education Program of the EERP-USP. The five selected people were invited to participate in this step in order to verify that the wording of the 11 items and the range of responses were properly understood by the population for which the instrument was intended, and to analyze possible changes, additions and suggestions from the participants; no alterations were suggested.

The ATDM-PCV2 was then used in the pretest with five people with diabetes enrolled in the Diabetes Education Program at the study site. No modifications in completion or comprehension were needed. Therefore, the process of cultural adaptation of the ATDM Satisfaction Scales instrument for Brazil was considered complete, maintaining the ATDM-PCV2 as the Final Portuguese Version (ATDM-FPV).

Finally, after completing the process of cultural adaptation and data collection, the reliability of the ATDM-FPV was verified through the internal consistency of the items, calculated by Cronbach's alpha. This is the indicator most often used in the analysis of the internal consistency of instruments because it reflects the degree of covariance among the items themselves (16). The level of significance was set at 5% (α = 0.05).

RESULTS

The ATDM Satisfaction Scales instrument was translated and adapted into Brazilian Portuguese and employed in the stage of semantic analysis, as described in the methods. After back translation, the ATDM-FEV version was compared with the original ATDM-OV version by the lead author, Dr. John Piette, who gave his agreement, and the final Brazilian Portuguese version was completed. The cultural adaptation process lasted three months.

As mentioned, the semantic analysis involved five people with DM. In analyzing possible changes, additions and suggestions from the participants, no changes were suggested to the ATDM-PCV2 instrument. Likewise, the ATDM-PCV2 instrument was applied to five persons with DM in the pretest, and no modifications in completion or understanding of the instrument were needed.

The title of the instrument, Automated Telephone Disease Management (ATDM) Satisfaction Scales, was rendered as Escala de Satisfação para Manejo da Doença Automatizado por Telefone (MDAT).

As previously stated, 39 Brazilian adults were involved in the reliability analysis, with no refusals.
The majority (94.9%) responded that the calls were interesting; 92.3% always enjoyed receiving the calls, and 87.2% said that the calls were never uncomfortable. On the last item, 84.6% said that the length of the calls was appropriate. The reliability of the adapted version, analyzed through the internal consistency of the items, was calculated by Cronbach's alpha coefficient, resulting in α = 0.39 for the total scale, with values ranging between 0.30 and 0.43 for the 11 items of the instrument.
DISCUSSION
The Brazilian health system needs to be strengthened to provide assistance to people with NCDs through: models of care for chronic conditions based on local experiences; expansion and upgrading of the Family Health Strategy; increased access to cost-effective medicines; better communication between basic attention and other levels of care; and integration of programmatic actions for chronic diseases, among other measures (23). In this context, along with the modernization and integration of new technologies in health care, it is necessary to evaluate the use of these new technologies, so that health actions are more efficacious, efficient and easily accessible to the entire population. Thus, to understand and evaluate the care provided to people with diabetes mellitus, it is necessary to use new technologies, the telephone being a particularly advantageous option. The assessment of health technology needs to be expanded to provide a strong basis for the appropriate selection of new programs and public health actions, and of new medications, devices and diagnostic tests (23). An evaluation consists of making a value judgment about an intervention, in order to assist in decision making (24). Thus, the introduction of methodologies such as the assessment of the satisfaction of persons with diabetes mellitus after participating in telephone educational programs offered by a health system provides comprehensive knowledge of the person's needs; the lack of measuring instruments in the area is an aspect to be considered in studies that evaluate the effect of new treatment modalities and the impact they can bring to health care, in addition to expected outcome data. The mean age was 60 years. Most participants were women with low educational levels. These findings are in line with the profile of the samples of studies that compared the effectiveness of educational strategies in a Diabetes Education Program (17,18). The predominance of females suggests that women engage more in care-seeking and self-care behavior and are more assiduous in attending educational programs than men (19). The level of formal education is an important feature to consider in proposing educational programs, because lower education may hinder access to information and impair treatment adherence (20). Regarding occupation and the prevalence of retired persons, one study showed that only 25% of older people were economically active; people who worked showed greater physical and mental disposition, higher educational level and family income, and a lower prevalence of chronic diseases (21). The low socioeconomic status found in this research should be taken into consideration, since a person who stays active can achieve greater personal satisfaction, opportunities for social interaction, and benefits to physical and mental health (22).
After finalizing the stages of cultural adaptation, people with diabetes mellitus answered the Satisfaction Scale for MDAT easily and quickly; no difficulty in understanding the items was found, and the response categories proved adequate and easy to use. For the analysis of internal consistency of the items of the adapted version, the Cronbach's alpha coefficient was calculated, with a value of α = 0.39 for the total scale, ranging between 0.30 and 0.43 for the items. However, 0.70 is considered the ideal minimum value, and 0.60 may be accepted for exploratory research (25). We attribute the low values found for the items in this study to the limited sample size. It should be clarified that this is an early stage of the validation process of the instrument, considering the small sample size relative to the number of items of the adapted instrument. It is important to highlight that the study sample will be increased to assess the psychometric properties of the adapted Brazilian instrument. It is also important to consider that in a country of large expanse, like Brazil, different cultural contexts can be identified, which may have implications for the adaptation and validation of instruments to measure specific constructs, such as the satisfaction of people after being submitted to educational programs by telephone. Thus, Brazilian adults with DM may be living in different cultural and social contexts within the same nation. The low educational levels found in the sample of this study, added to the social and cultural context of people with chronic conditions as an important source of knowledge acquisition and modification (26), are aspects that may be influenced by different cultures, lifestyles and education.
CONCLUSION
The ATDM Satisfaction Scales instrument, originally in English, was translated and culturally adapted into Brazilian Portuguese, following all the steps of the sequential methodology. It was observed that the idiomatic, semantic, cultural and conceptual equivalence of the original instrument was retained, and the Satisfaction Scale for Automated Telephone Disease Management, translated and adapted for the Brazilian context, maintained the concepts and the evaluation of the dimensions of the original instrument, which was also ratified by its author. With respect to the results of the internal consistency analysis, the adapted instrument presented low values, since this is an initial phase of the study with a small sample size. However, the sample will be further expanded in the same social environment of the study site, in order to enable an adequate assessment of the reliability of the instrument and additional statistical analyses. It is expected that, after the replication of this study in a larger sample and the analysis of the psychometric properties of the Satisfaction Scale for MDAT in Brazil, this instrument can then be used by Brazilian researchers and its results compared with those from other cultures, as well as incorporated as an additional tool in the daily care of health professionals to monitor health status over time and thereby to know the impact of their interventions on the condition and evolution of Brazilian adults with DM.
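For readers unfamiliar with the coefficient discussed above, the following is a minimal sketch of how Cronbach's alpha is typically computed from an item-response matrix. The scores below are randomly generated for illustration only and do not reproduce the study's data; the 39 x 11 shape merely mirrors the sample and item counts reported above, and the 1-5 scoring is an assumption.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 39 respondents x 11 items, scored 1-5 (random, illustration only).
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(39, 11)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

With uncorrelated random responses, as in this toy matrix, alpha lands near zero; the coefficient rises toward 1 only when items covary strongly, which is why it is read as a measure of internal consistency.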
Given the above, we reiterate the importance of developing studies of this nature, which may contribute to the understanding of the factors involved in the satisfaction of people with DM after participating in educational programs conducted by telephone; such studies are relevant within the context of managed care, ensuring the continuity of health interventions and contributing to clinical practice and nursing education.
Table 1 - Numeric (n) and percentage (%) distribution of the data of the study population, regarding the responses to items of the domain "Facility of completing the call", and Cronbach's alpha (α). Ribeirão Preto (SP), 2011
Table 2 - Numeric (n) and percentage (%) distribution of the data of the study population, regarding the responses to items of the domain "Perceived usefulness of call", and Cronbach's alpha (α). Ribeirão Preto (SP), 2011
Table 3 - Numeric (n) and percentage (%) distribution of the data of the study population, regarding the responses to items of the domain "Intrusiveness of call", and Cronbach's alpha (α).
2018-09-03T20:20:03.889Z
2012-10-02T00:00:00.000
{ "year": 2012, "sha1": "36969d1f4173d3ee048e932e4798fc936e013431", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/ape/a/c8KHfW9CthhMCBtDjYnWKnd/?format=pdf&lang=pt", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fbe3b978ad939fd1614389a7b7dc0c23bf45c1fe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257062025
pes2o/s2orc
v3-fos-license
Adapting Family Planning Service Delivery in Title X and School-Based Settings during COVID-19: Provider and Staff Experiences
The COVID-19 pandemic introduced urgent and unique challenges to family planning providers and staff in ensuring continued access to high-quality services, particularly for groups who experience greater barriers to accessing services, such as women with systemically marginalized identities and adolescents and young adults (AYA). While research has documented key adaptations made to service delivery during the early phase of the pandemic, limited studies have used qualitative methods. This paper draws on qualitative interview data from family planning providers and staff in Title-X-funded clinics and school-based clinics—two settings that serve populations that experience greater barriers to accessing care—to (a) describe the adaptations made to service delivery during the first year of the pandemic and (b) explore provider and staff experiences and impressions implementing these adaptations. In-depth interviews were conducted with 75 providers and staff between February 2020 and February 2021. Verbatim transcripts were analyzed via inductive content analysis followed by thematic analysis. Four key themes were identified: (1) Title-X- and school-based staff made multiple, concurrent adaptations to continue family planning services; (2) providers embraced flexibility for patient-centered care; (3) school-based staff faced unique challenges to reaching and serving youth; and (4) COVID-19 created key opportunities for innovation. The findings suggest several lasting changes to family planning service delivery and provider mindsets at clinics serving populations hardest hit by the pandemic. Future studies should evaluate promising practices in family planning service delivery—including telehealth and streamlined administrative procedures—and explore how these are experienced by diverse patient populations, particularly AYA and those in areas where privacy or internet access are limited.
Introduction
The COVID-19 pandemic introduced urgent and unique challenges to family planning providers and staff in ensuring continued access to high-quality services. In the first year of the pandemic, providers across the globe navigated service delivery amid concerns over staff and patient safety, rapid introduction of new technology systems, and staffing shortages during a time when much was unknown about the SARS-CoV-2 virus [1][2][3]. Fears were raised early on about implications for access to family planning care [4][5][6], and it has since been well documented that access to sexual and reproductive health (SRH) services declined significantly worldwide [7,8]. Many family planning clients have faced barriers to accessing care, including longer wait times for appointments, fear of entering medical settings, prescription shortages, clinic closures, and increased economic hardship [9][10][11][12]. The effects of COVID-19 on access to family planning services have not been experienced uniformly across all groups and populations and have served to exacerbate existing inequities in family planning care. In the United States, certain population groups, including women with systematically marginalized identities, women living in poverty, and adolescents and young adults (AYA; referring to young people from onset of puberty through age 25), have long been shown to experience greater barriers to accessing medical care, including SRH services [13][14][15].
These barriers have been heightened in the current context due to the disproportionate, multi-layered impacts of the pandemic [4,9,16,17]. For example, women who face systematic marginalization on the basis of race, ethnicity, and class, including the Black and Hispanic communities, have been disproportionately likely to experience loss of employment and income, in addition to increases in childcare responsibilities, all of which can further impede access to care [18]. Likewise, many AYA in the same communities have experienced their parents' changing economic circumstances in combination with school closures, leading to a loss of privacy necessary for confidential sexual health care [4,19,20]. Two key mechanisms for providing SRH care to populations that experience greater barriers to accessing services are Title-X-funded clinics and school-based health services. Title X is the only federal grant program dedicated solely to providing low-income or uninsured individuals with comprehensive family planning and related preventive health services, making it an important source of care for individuals from systematically marginalized populations [21,22]. One-quarter of all women, and one-half of women with incomes below the federal poverty level, access contraceptive services at publicly funded family planning centers, with Title-X-funded sites providing the majority of publicly funded care [23]. Title X clients are also more likely to be Black or Hispanic and live under the federal poverty line than the general US population [24][25][26]. Meanwhile, school-based clinics provide on-site services, including SRH care, to students in over 10,000 K-12 schools across the country. These clinics are an important source of SRH care in communities that may otherwise experience limited access to services, especially for young people from low-income and rural populations, and populations of color [26]. School-based clinics can play a critical role in facilitating young people's transition from pediatric care to the adult health care system, when rates of unintended pregnancy are the highest [27]. The presence of health care services on school campuses also reduces the unique barriers that AYA face to accessing family planning care, such as lack of privacy or access to transportation [28,29]. The emergence of COVID-19 disrupted the ability of clinics, including Title-X-funded and school-based clinics, to offer in-person family planning services [19,30,31]. In 2020, over 77 percent of certified school-based health centers (SBHCs) shut down due to school closures [19], and Title-X-funded clinics saw almost 900,000 fewer clients than before the pandemic [32]. A growing body of survey research has documented the ways providers and staff responded to early disruptions from the pandemic, identifying immediate and ongoing adaptations made to clinical practices and protocols [9][33][34][35][36][37]. In general, there is consensus that providers made rapid transitions to telehealth services, began offering select family planning services in a "drive by" or "curbside" fashion, and implemented updated clinical guidance on contraceptive refills and spacing [9,24,33]. Surveys have provided quantitative data on the prevalence of these adaptations in settings that serve both adults and AYA [35,36,38,39].
For example, one study saw an increase in telehealth use from 11 percent to 79 percent among reproductive health providers [38], and another found that 82 percent of clinics providing abortion or contraceptive care had added or expanded telehealth services for contraceptive counseling [36]. Other service adaptations appear to be less common, with 10 percent to 23 percent of providers surveyed offering self-administered medroxyprogesterone acetate [36,38], 15 percent to 22 percent offering curbside pick-up for contraceptives [36,38], and 15 percent to 35 percent mailing out contraceptives or using mail-order pharmacies [36,38]. Researchers have also surveyed providers on perceptions of the effectiveness, acceptability, and challenges of telemedicine, the most widespread and common change to family planning service delivery during the pandemic [40][41][42]. In one study, Stifani et al. found that 80 percent of family planning providers strongly agreed that telehealth was effective for contraceptive counseling, and a majority supported continuing to provide it post-pandemic, with a significant preference for video visits over phone visits [40]. This and other studies have also shed light on key limitations of telehealth, including not being able to conduct physical exams or diagnostic testing, patient difficulties using telehealth, insufficient cellular service or poor Wi-Fi connections, and disparities in access to technology [40,41]. Additional challenges reported by providers at youth-serving clinics include lack of client awareness of telehealth services and concerns around confidentiality [42]. Although a considerable amount of scholarship has explored family planning providers' responses and adaptations to continue providing care, few published studies have included qualitative data and methods. Developing a qualitative understanding of not only how family planning providers and staff modified care but also their impressions and experiences of these changes can offer critical information to researchers and practitioners looking to ensure high-quality services, particularly for populations hardest hit by the pandemic. To date, limited qualitative studies have examined family planning service adaptations during COVID-19. These studies include a textual analysis of Title-X-funded clinic progress reports documenting reported service delivery adaptations [37], a content analysis of open-ended survey responses from providers implementing telehealth services for contraceptive care [42], and a case study of rapid implementation of telehealth across multiple specialties at a clinic serving AYA [2]. While these efforts have helped to define the changes made to family planning practice in various settings, they do not provide an in-depth exploration of provider experiences with a range of adaptations. Only one recent report, a descriptive study of private, hospital-affiliated, and Planned-Parenthood-affiliated clinics providing abortion or contraception services, has explored this topic qualitatively through in-depth interviews with providers [43]. Ly et al. found that providers faced multiple challenges to modifying abortion and contraception services during COVID-19, including staffing and limited resources, but saw rewards to doing so, such as increased camaraderie and creativity among staff.
Although this study provides important context, there remains a need to better understand the experiences of family planning providers more broadly, particularly in settings that serve populations facing barriers to accessing care. This paper seeks to build on and expand existing research by exploring family planning provider experiences adapting services during COVID-19 in Title-X-funded and schoolbased clinics-two settings serving women and AYA facing limited access to SRH services. Through content and thematic analysis of in-depth interviews, we aim to (a) describe the adaptations made to family planning service delivery in these settings during the first year of the COVID-19 pandemic and (b) explore provider and staff experiences and impressions implementing these adaptations. Study Background and Design Data for this analysis come from two sets of qualitative interviews conducted with providers and staff at Title-X-funded clinics and school-based health centers from February 2020 through February 2021. These interviews were designed and implemented under two concurrent projects that shared a principal investigator and key research team members. One project sought to explore trends in publicly funded family planning services through an analysis of publicly available data and interviews with Title X providers and staff. The second project aimed to identify and explore unique or innovative approaches to family planning service delivery in school settings through interviews with family planning providers and staff. While these projects were distinct in their populations and settings of interest-one focused on providers serving clients of any age in Title X clinics and the other on providers serving AYA in educational settings-both engaged family planning providers and staff for interviews as the pandemic gave way to state-ordered shutdowns. Accordingly, in April 2020, both teams added questions to their interview protocols to capture the ongoing changes made to family planning services due to COVID-19. Questions were similar across the two studies and explored how clinics modified their services in response to the pandemic, how well these strategies met the needs of their clients, and what challenges they faced in making these changes. Given the overlap in these questions, the authors combined these data to enable a more comprehensive view of the family planning landscape during COVID-19. This paper reports on this subset of COVID-19-related data; broader findings from each specific project have been disseminated elsewhere [44][45][46]. Study Recruitment and Enrollment Teams followed two separate recruitment protocols under the two projects. For interviews with Title X providers and staff, the research team began recruitment by accessing publicly available lists of Title X sites. Team members stratified clinics by key variables, including geographic region, urban-rural classification, and racial/ethnic composition of client population, and then randomly selected 150 clinics for recruitment via email. Based on initial responses and interest, the team then reached out to an additional 33 clinics that were located in geographic regions where recruitment lagged. Screening interviews were administered to staff at each clinic that was responsive to recruitment to determine eligibility and willingness to participate. Eligibility criteria included having received Title X funding within the past two years and serving over 50 family planning clients per year. 
The team ultimately enrolled and conducted in-depth interviews with 46 providers and staff from current and former Title X clinics. Of those, 38 were asked questions relating to COVID-19 (i.e., conducted after April 2020) and were, therefore, included in this analysis (see Table 1 for study sample characteristics). These 38 interviewees represented 33 unique sites. For the second project, the research team used a snowball sampling approach to identify and enroll school-based providers and staff. This approach was selected as the project focused on identifying and exploring novel or "innovative" approaches to family planning service delivery in school settings, which required referrals from practitioners. Eligibility criteria for interviews included operating within or in partnership with a school setting, serving one or more population groups that experience greater barriers to accessing SRH care (i.e., people of color, including members of American Indian Tribes; people with limited English proficiency; people who have immigrated to the United States; people experiencing or at risk of experiencing homelessness; people with low incomes; rural communities; and communities without family planning clinics), and implementing an innovative approach to providing family planning care to AYA in school settings. For the purposes of the project, "school settings" were defined as clinics located within K-12 schools, provider partnerships with K-12 schools, and clinics located at community colleges, which are more likely to serve students from systematically marginalized identities [47,48]. Additionally, for the purposes of the project, "innovative approach" was defined as an intentional and focused approach that reaches one or more population groups experiencing greater barriers to accessing services. To identify potential interviewees, the team first reached out to key contacts, such as regional coordinators of certified SBHCs, and asked them to provide names and contact information for providers and/or clinics that might meet eligibility criteria. The team then conducted a round of screening interviews and focus groups to identify sites best suited for in-depth interviews. Ultimately, the team enrolled and conducted in-depth interviews with 57 school-based providers and staff, representing 48 unique sites. Of the 57 participants, 37 (representing 32 unique sites) were asked questions relating to COVID-19 and were included in this analysis (Table 1). Table 1 shows key characteristics of participants and associated clinics for the 75 interviews with Title X and school-based staff included in this analysis. In addition, of the Title X clinics interviewed, 45% served at least 20% Hispanic clients and 39% served at least 20% Black clients. The team did not record information on populations served by school-based staff.
Data Collection
For both projects, in-depth, semi-structured interviews were conducted via encrypted Microsoft Teams teleconferencing software (version 1.5) and lasted approximately 60 to 90 min. Two trained research team members were present at each interview. Prior to being interviewed, all participants were informed of the study purpose, the voluntary nature of their participation, their rights to withdraw participation at any time, and the confidentiality of their responses. Interviews were audio-recorded with permission from participants. Audio recordings were transcribed verbatim by an outside vendor and supplemented with notes taken by research team members.
Participants received a USD 25 gift card for their participation.
Data Analysis
Verbatim transcripts were uploaded into Dedoose [49], qualitative analysis software, for formal coding. For the purposes of this analysis, three research team members, each of whom participated in data collection for one or both studies, coded only portions of the interviews related to the COVID-19 pandemic. The team approached data analysis in two stages. First, analysts utilized an inductive content analysis approach to identify common adaptations discussed in the interview excerpts [50]. Inductive content analysis is a helpful means of reducing and grouping data according to categories identified in the data themselves [51]. Analysts first read through all the COVID-related excerpts to familiarize themselves with the data and worked together to generate an initial codebook based on the adaptations discussed by participants. Each analyst then independently coded a set of excerpts with the aim of sorting data into distinct categories of adaptations. The team met often to review coding and come to consensus when discrepancies arose [52]. Following this, the team revisited the coded data, analyzing excerpts thematically to generate themes on provider experiences and impressions. Thematic analysis is a flexible, iterative approach to qualitative research that is well suited to understanding a set of experiences and impressions across a dataset [53,54]. Analysts followed documented guidelines in thematic analysis, independently conducting open coding and searching for initial themes before coming together to compare codes, review themes, and achieve consensus on findings [53,55]. The team conducted multiple rounds of review, refinement, and comparison against the data before finalizing the most salient themes.
Results
Excerpts from interviews with staff at Title-X-funded clinics and school-based health centers revealed four key themes around family planning service delivery and experiences in the first year of COVID-19. These are described in detail below and can be summarized as follows: (1) Title X and school-based staff made multiple, concurrent adaptations to continue family planning services; (2) providers embraced flexibility for patient-centered care; (3) school-based staff responded to unique challenges to reach and serve youth; and (4) COVID-19 created key opportunities for innovation.
Title X and School-Based Staff Made Multiple, Concurrent Adaptations to Continue Family Planning Services
Providers and staff at both Title X and school-based clinics described making multiple adaptations to family planning service delivery to meet ongoing patient needs in the early months of the COVID-19 pandemic. The specific service adaptations described by providers and staff fell into five categories.
These include: (1) utilizing telemedicine, including conducting contraceptive counseling and follow-up visits by phone or video call; (2) prioritizing urgent services for in-person care, such as colposcopies and LARC insertions, while delaying or remotely delivering non-urgent services; (3) providing select services "curbside," including delivering prescriptions or contraceptive injections outside or in a "drive-through" fashion; (4) adopting flexible approaches to birth control refills and spacing, such as prolonging the time required between contraceptive injections and in-office visits for oral contraceptive refills; and (5) using phone or digital technology to streamline administration, including digitizing forms and performing intakes over the phone. While each strategy was distinct, it was rare to hear from clinics making only one or two changes to practice. Instead, providers and staff tended to report making multiple, concurrent adaptations to reduce the number of people physically entering the clinic and ensure the safety of those onsite. Many providers and staff described several overlapping changes, illustrating how these adaptations worked together and how no one service adaptation was adequate to keep services up and running. For example, one Title X provider explained how they used telehealth, curbside services, flexible birth control refills, and streamlined administration to accommodate patients: "For every appointment, whether it's a telehealth appointment or an in-person appointment, our healthcare techs call a patient ahead of time and complete all their history forms with them by phone, trying to reduce the amount of time that they spend here in the building. For Depo, we're now having them come in to get their Depo, but for a little while now, we were going up to their car. For birth control pills, I will say until about June, we would just do a quick phone call and six-month refill to make sure everything's okay . . . We're now doing telehealth visits for pill refills, and we try [to give] just as many as we can allowable by the expiration date." In another instance, a Title X clinic administrator discussed their current intake process for LARC appointments, which uses both curbside services and streamlined administrative procedures to prepare clients for procedures: "So now the client pulls into our parking lot. We have an iPad that we use as a kiosk for check-in. A staff member goes out to their vehicle, takes their temperatures, gives them an iPad. They do the registration and fill out a health questionnaire . . . That questionnaire gets imported into their medical record. I review that, call them, go over any information that needs clarification, put in their medications, drug allergies, do sort of that telephone intake. And then typically, I'll just transfer that phone call to the provider who speaks with them from a different part of our clinic and goes over all of the risks and benefits, contraceptive counseling, and once they've done that . . . goes over the consent for the procedure." The individual service delivery adaptations are described in greater detail in Table 2, which summarizes these adaptations and describes provider and staff impressions, along with illustrative quotes, for each.

Table 2. Service delivery adaptations, provider and staff impressions, and illustrative quotes.

Adaptation: Telehealth
Description: Using phone or video calls to screen clients for COVID-19, triage whether a client needed an in-person appointment, conduct contraceptive counseling, start a client on a contraceptive method, and hold follow-up conversations.
Impressions: Staff described multiple benefits to seeing patients by phone or video, including increased access to services and convenience for patients. Some providers mentioned issues with clients not having adequate internet bandwidth, devices, or data plans to support video calls; others noted that the practice does not replace face-to-face interactions. However, most providers spoke positively of telehealth and felt that it will be an important complement to in-person care moving forward. Some providers and staff were unsure whether Medicaid coverage for telehealth, which many states expanded during the pandemic, would continue [48].
Illustrative quote: "We do have telehealth. We've had that through most of the pandemic. It took a little bit to get it going, but now we have a pretty robust system, and we have a hotline that anyone under 19 can call. Parents, kids, school staff can reach us, and either have a full telehealth visit or just ask questions, or ask for a med refill, anything they need. And we have a behavioral health person staffing that every day too. So a pretty good system there."

Adaptation: Prioritizing urgent services for in-person care
Description: Reserving limited in-person appointments for urgent services, such as colposcopies and LARC insertions, while delaying or remotely delivering non-urgent services.
Impressions: Prioritizing services was seen as a temporary measure, used more often in the early months of the pandemic in response to staffing shortages or social distancing requirements. Even with key services being prioritized, the overall reduced numbers of in-person appointments available often led to extended wait times for in-person LARC appointments. In these cases, providers typically offered patients bridge methods of birth control, such as the pill, patch, or ring. Multiple staff described getting "back to routine" in later months.
Illustrative quote: "[Annual exams] were postponed for patient safety. People that had abnormal paps, etcetera; those people came in, but if it was a very healthy individual, and they were coming in just to get their annual and their birth control refilled, we just gave them their birth control refill." (Title X Administrator and Provider)

Adaptation: "Curbside" services
Description: Delivering services and/or prescriptions outside or in a drive-by fashion; most commonly used to administer contraceptive injections, distribute other contraceptives, such as pills, and conduct COVID-19 screenings.
Impressions: Staff felt that patients appreciated the ease and convenience of curbside services; a few noted a decrease in "no-shows" for curbside appointments compared to standard appointments. Some clinics did not offer this service due to concerns over cleanliness, safety, or the potential for HIPAA violations. Most felt that the practice would not continue post-pandemic.
Illustrative quote: "Something that we did develop with COVID was doing our follow-ups for family [planning visits] and Depo; we're following up through telemedicine, and then we're doing a curbside. They're just coming in for that last piece to sign the updated consent and do the Depo." (School-Based Provider)

Adaptation: Flexible approaches to birth control refills and spacing
Description: Extending the interval required for in-office visits for birth control prescription refills; prolonging the time required between contraceptive injections; providing subcutaneous contraceptive injection refills for self-administration.
Impressions: Staff often made these adaptations following the Family Planning National Training Center's (FPNTC) guidance document on spacing for oral contraceptives and contraceptive injections [24]. Many staff felt that the increased flexibility around contraceptive refills and injections worked well for both providers and patients and should be integrated into practice moving forward.
Illustrative quote: "The biggest problem was our annual visits. You're coming for your annual visit so you can get a refill of, like, birth control pills. So those people, what we did is because we didn't know what was going on, how long it was going to take, our standard [was to give] them an extra two months of birth control, have them reschedule for two months out, make sure that they're not having any problems and they don't truly need to see the nurse practitioner or the nurse at that time." (Title X Provider)

Adaptation: Streamlined administrative processes
Description: Using phone or digital technology (video calls, online portals, and digital forms) to reduce the amount of time clients spent on non-medical activities inside clinics; includes streamlining sign-ins, medical history forms, and pre-visit screenings and intake forms.
Impressions: Multiple providers and staff noted that moving to digital or phone-based intakes, sign-ins, and forms had improved efficiency and clinic flow, enabling more time spent delivering care. Many staff expressed a desire for these streamlined processes to continue post-pandemic.
Illustrative quote: "[Collecting intake information over the phone] makes things very efficient in the clinic. So we've done the intake over the phone, and we can get them in and do their vitals, and they get right straight in front of the practitioner . . . If we can do intakes over the phone when appropriate, we may continue that." (Title X Administrator)

Providers Embraced Flexibility for Patient-Centered Care
Providers from both Title X and school-based clinics emphasized that the pandemic required immediate and ongoing flexibility in how they responded to the changing pandemic environment and in the way they worked with patients. Throughout the early months of the COVID-19 pandemic, providers described needing to make continuous adjustments to their services to ensure that patients received critical services and to maintain quality of care. As one Title X provider stated, "The first word that comes to mind is 'constantly.' Every week is a new adjustment or micro-adjustment to what we put in place the week before." At the start of the pandemic, clinic efforts were largely focused on responding to national guidance on minimizing exposure to COVID-19, which included triaging in-person appointments, developing new clinic protocols, and setting up telehealth services. As described above, many clinics reduced or eliminated non-urgent care and had to prioritize certain services, such as LARC insertions, due to staffing shortages and physical distancing requirements. Clinics also rapidly created new safety protocols, with one provider saying, "everything was just happening so quickly; it's been the testing and protocols and cleaning staff and cleaning supplies." In addition, many clinics that did not have telehealth systems in place were forced to quickly develop and implement a means for providing remote services. One provider said, "We knew that telehealth was way off on the horizon for us until it was staring us in the face, so we just need[ed] to put a ton of energy in a short amount of time in getting our telehealth system integrated into our EHR [electronic health record]." As the pandemic persisted, providers continued to adjust their services to meet the changing landscape of COVID-19. This shift was most evident in the way providers described how telemedicine for family planning services moved from emergency telehealth services only to a more sustainable mix of telehealth and in-person services.
One school-based provider said, "As we learned more about the virus and got infection prevention protocols in place, we started to kind of readjust [the] mix of on-site, face-to-face versus telemedicine." In addition, many staff at clinics that did not have an integrated telehealth system in place described first providing services via phone, which was easier to implement, and then ultimately shifting to video telehealth. This focus on flexibility also extended to the patient level as providers prioritized care that was patient-driven and centered. Providers and staff spoke at length about making on-the-spot adjustments to attend to individual needs. For example, one Title X provider shared an anecdote of a patient who wanted to cancel her annual visit because she had a high-risk family member at home and feared COVID-19 exposure. The patient was scheduled to be seen in person to receive a refill for her birth control pills, so the provider prescribed an additional three months of birth control pills without requiring the in-person appointment and asked her to return to the clinic when she felt more comfortable. In other instances, providers mailed contraceptives or provided curbside pickup or delivery for patients who expressed concerns or faced barriers to attending in-person appointments. One provider highlighted this commitment to patient-driven flexibility, saying "I guess it really depends on what the patient is looking for, so I mean, we are always accessible if they want to be here, and if not, we will try to approach it the best way."
School-Based Staff Responded to Unique Challenges to Reach and Serve Youth
While many of the described adaptations occurred at both Title-X-funded and school-based clinics, school closures introduced unique challenges for school-based providers around reaching students, engaging in confidential conversations, and providing services on-site or in alternate locations. These challenges pushed family planning providers and staff in these settings to further adapt services and approaches to better meet the needs of their AYA patients. In light of immediate and ongoing school closures, providers and staff in schools overwhelmingly spoke of challenges reaching students and informing them about the status of health care services and how to access them. One school-based provider said, "It is a real challenge for anyone doing work with populations that have inadequate housing and are financially insecure. Phones being on and off and being able to contact patients, their numbers changing. I mean that is such a challenge for us, period." As a result, many providers and staff in school-based clinics, including health educators, described increased outreach efforts via phone calls, emails, or text messages. Some providers reported creating dedicated cell phone lines, such as a Google Voice line, to facilitate contact. In some cases, clinics also established social media platforms (e.g., Instagram or Facebook) or posted announcements on schools' digital platforms to communicate with students and share information about their clinic hours, location(s), and services. In addition to this general outreach, providers also described conducting more targeted outreach to students who were on birth control or expressed interest in obtaining contraception.
One school-based provider recounted how their clinic aimed to reach each student due for contraceptive injections or oral contraceptive refills, saying "Every student that was due for Depo since we've been out has gotten a phone call or two or three from us to try to schedule them." Providers in school settings contended with an additional layer of complexity given significant confidentiality and privacy concerns for AYA. During the early phases of the pandemic, with schools closed and lockdown in effect, students were likely at home with parents and other family members, which made communicating with clinic staff challenging, especially for those who did not have access to a personal cell phone or reliable internet service. One provider shared the difficult situation school-based staff were in, saying: "It's a really tough dance between respecting confidentiality and getting these kids services. I have students that have told me, 'My parents absolutely cannot know about this.' And they don't have a cell phone of their own . . . I risk, even if I just call them, their parents asking them, 'Why are they calling you?', so I've really agonized a lot over what the right thing is." To address this, some staff discussed calling students in the late afternoons or evenings, as this tended to be a time when AYA could talk confidentially. Other staff also shared that, if parents answered their outreach calls, staff would speak more generically about wellness services and encourage parents to tell the students to reach out to the clinic if they had questions or concerns. Finally, school-based staff also faced barriers to providing services to students due to not having access to their clinic space or students being unable to secure transportation to be seen in person. In addition to providing the telehealth and "curbside" services described above, school-based staff described using several unique strategies to continue services for school-based AYA, including providing mobile services, referring to or providing services at alternate locations, or sending prescriptions to nearby pharmacies. A small number of staff met students at offsite locations or students' homes to provide needed services. One school-based provider described conducting contraceptive counseling via telehealth and then "show[ing] up in our PPE and . . . dropping off their [prescription], whether it was asthma inhalers, the pills, the patch, or giving them their Depo shot in the car or in their house." In other cases, school-based staff partnered with outside organizations or local health departments to either refer students for care or provide services at those locations. For example, one provider reported arranging appointments for several students to receive their contraceptive injections at a neighboring occupational health site. Providers also described sending prescriptions to pharmacies that were close to students' homes, rather than asking students to collect their prescription at the school or clinic, to mitigate transportation barriers.
COVID-19 Created Key Opportunities for Innovation
Interviews across both Title X and school-based sites revealed an overarching sense that, despite challenges, the rapid pace and unique context of the pandemic allowed for unprecedented creativity and innovation in family planning care. Providers and staff spoke of being pushed to consider new ways of providing services when standard practices were "knocked out the window for the pandemic."
Participants often expressed excitement or surprise at how well some adaptations worked or were received. For example, one school-based clinic director remarked about telehealth, "Interestingly enough, that'll probably be one of the things that lasts the longest. Because students like it. It's accessible. It makes life easier . . . so I think that that's going to be a wave of the future." In many cases, participants described how some adaptations put in place because of the pandemic may have actually led to more streamlined and accessible services. Telehealth and digitized forms were often cited as changes that may persist due to important and unexpected benefits to staff and patients. For example, one Title X provider recounted realizing that obtaining medical histories over the phone was not only more efficient than doing so in person but also provided a level of anonymity that allowed patients to be more forthcoming. Another school-based staff member said of their pandemic-related switch to digital intake forms, "Not only was it helpful for us being able to track things more, but I think it made it more accessible to our student body." Ultimately, providers and staff expressed a feeling that COVID-19 broke down longstanding roadblocks or barriers that might not have been breached otherwise. One provider at a Title X clinic said of COVID-19, "I think this has totally shaped, or actually changed, the way that a lot of medical providers have thought about providing medical care." In some cases, this was discussed as a mindset shift that could, or should, persist moving forward. For example, one Title X provider described how, for them, the surprise of how easily longstanding practices could be changed for the better will be a lasting effect of the pandemic: "I think realizing that we have these barriers in place, whether it was, 'Oh, you need to come in every 12 months for your birth control prescription renewal,' or 'No, we can't. We're not able to prescribe you that without a face-to-face visit.' And those things were kind of unquestioned barriers that we all just sort of accepted. And then realizing how quickly they were able to bust through them. That's what we really need to do."
Discussion
This analysis expands previous research on family planning services during the COVID-19 pandemic by using in-depth interview data from providers and staff at Title X and school-based clinics to describe service adaptations in the early pandemic. These qualitative data build on existing descriptive information to provide key insights into provider experiences adapting family planning services, impressions on the usefulness and acceptability of these adaptations, and considerations for using these approaches moving forward, particularly in settings serving populations that face increased barriers to care. Our findings highlight the multiple, layered changes Title X providers and school-based clinic staff made to continue services, providers' focus on ensuring flexible patient-centered care, the unique challenges of serving AYA in schools, and the potential for some COVID-19-related adaptations to change family planning service delivery for the better. In exploring how providers adapted services, two approaches, telehealth and digitized administrative practices, emerged as potential long-term adaptations, both of which have important equity implications.
Despite implementation challenges, telehealth services were largely embraced by providers and staff in our studies, with many noting their potential to expand access to care. This finding aligns with previous research with family planning clinicians [19,40,42]. In addition, many providers in our studies applauded the use of online technology, including digitized forms that allowed patients to complete pre-visit screenings and intakes and thus reduce the amount of time patients spend in clinics. While promising, these practices raise several structural considerations for continued implementation. The ability to provide telehealth care for sexual and reproductive health is limited by federal and state laws and regulations, with wide variability across states [56]. In addition, telemedicine and digitized administrative procedures require client access to reliable internet and a location that ensures privacy, both of which can disproportionately be barriers for women from systematically marginalized groups and AYA, as well as individuals in rural communities [7,39,56]. It is critical that practitioners recognize that digital adaptations will not be accessible and beneficial for all patients. In many cases, providers will need to continue providing alternatives, such as in-person services or paper forms, to ensure equitable access to care. Research should also evaluate how these approaches benefit and are experienced by diverse patient populations, including AYA and those in areas where privacy or internet access are limited. Interviews with school-based staff and providers highlighted additional considerations for practitioners serving AYA in school settings as the COVID-19 pandemic continues. Historically, an advantage of school-based health centers and programs is that they offer confidential care in the very place where AYA spend much of their time (versus requiring travel to an outside clinic). The early months of COVID-19 disrupted this model, causing school-based family planning providers to quickly assess how to maintain student contact and care. The concerns we heard around access to and confidentiality of family planning services for AYA have been noted by researchers and practitioners elsewhere [39,40,56]. Our data provide additional information on the ways in which staff mitigated these barriers, including increasing outreach efforts and providing services at alternate locations. It is possible that intensified outreach and service delivery efforts will persist through the ongoing pandemic, stretching already limited staff and resources. Ongoing partnerships and sharing of best practices among school-based staff will be necessary to sustain this work. Finally, our findings point to a key success of family planning providers' and staff's response to COVID-19: their flexibility, resiliency, and innovation. This creativity and determination occurred against a backdrop of high rates of stress, burnout, and turnover among healthcare workers broadly [57][58][59]. Others have also noted this resilient response to the challenges of the pandemic [36,43]. However, our data further suggest a lasting eagerness among providers and staff to continue improving upon traditional clinical practices to ensure high-quality family planning care. As we enter subsequent phases of the pandemic, the family planning community may grapple with how to best encourage and allow for continued innovation.
For example, many providers in our interviews reported adopting telehealth in response to the expanded coverage initiated during COVID-19 by the Centers for Medicare and Medicaid Services [56]. Similarly, many described adjusting birth control spacing in response to FPNTC guidance on family planning care during COVID-19 [30]. Should these initiatives expire as the pandemic wanes, there may be a gap in supportive structures to scaffold innovation in clinical care. While our data provide unique insight into the experiences of Title X and school-based providers and staff during a pivotal time period, some limitations must be noted. First and foremost, this analysis draws on combined subsets of data from two distinct but related projects that were not designed to be jointly analyzed. We opted to examine these data together due to several shared project features. For example, both projects engaged providers and staff working in settings that serve many women and adolescents who face barriers to accessing family planning care and used similar interview questions to explore service delivery at an inflection point of the COVID-19 pandemic. Analyzing these data together allowed us to examine similarities and differences between these two settings and gain a more holistic view of pandemic-era family planning care than if we had analyzed them separately. However, these samples and data were not uniform, and it is possible that key patterns were overweighted or overlooked. A second limitation is that interviews were only conducted with clinic providers and staff and do not incorporate client perspectives on service adaptations and clinical care. These perspectives are needed to provide important information on the extent to which service adaptations meet patient needs, context that is especially important for adaptations that are expected to continue, as discussed above. Moving forward, research should engage diverse patient populations, particularly those who typically experience greater barriers to access, to explore how they have experienced changes to practice and implications for equity in access to health care. Finally, interviews were conducted early in the pandemic and represent only one point in time in an ongoing event. Given the continuous pace of change noted by providers and staff, future studies should examine whether and how family planning delivery continues to evolve.
Conclusions
This paper explored provider and staff perspectives on adaptations made to family planning service delivery during the first year of the COVID-19 pandemic in Title-X-funded and school-based clinics, two settings that serve populations that experience barriers to accessing services. Interview data revealed that the pandemic demanded a layered, flexible approach from providers and staff to ensure continued high-quality services for women and AYA. Data also shed light on unique challenges faced by school-based staff serving AYA around reaching students and engaging them in confidential conversations when schools were closed. Ultimately, the findings suggest several lasting changes to family planning service delivery for populations hardest hit during the pandemic, including increased use of telehealth and digitized administrative procedures and a new openness to innovation among providers. We discuss considerations for family planning practitioners around ensuring equity in telehealth and digital health applications, serving school-based populations, and supporting provider resiliency and innovation.
This analysis supports and provides important qualitative context to previous research on how providers continued providing critical family planning services during the early pandemic. Future studies are needed to evaluate promising practices, such as telehealth and digitized administration, and explore how these are perceived by diverse patient populations that face barriers to accessing care, particularly AYA and those in areas where privacy or internet access are limited. Institutional Review Board Statement: Both projects that supplied data for this analysis were considered exempt from IRB review and approval, as per Child Trends IRB (FWA00005835). The project titled "Trends in Family Planning Service Provision" met exemption category 5 as it was federally funded and collected information on access to a public benefits program. The project titled "Innovations in Family Planning Clinical Service Delivery for Underserved School-Based Populations" was deemed program implementation research and therefore exempt from review. Informed Consent Statement: Informed consent was obtained from all participants involved in the study. Data Availability Statement: Data for this analysis are not publicly available due to the need to protect the privacy of the research participants. Requests for deidentified data to support study findings should be directed to the corresponding author.
An integrated multiomics analysis of rectal cancer patients identified POU2F3 as a putative druggable target and entinostat as a cytotoxic enhancer of 5-fluorouracil

Rectal cancer (RC) accounts for one-third of colorectal cancers (CRC), and 40% of these are locally advanced rectal cancers (LARC). The use of neoadjuvant chemoradiotherapy (nCRT) significantly reduces the rate of local recurrence compared to adjuvant therapy or surgery alone. However, after nCRT, up to 40%-60% of patients show a poor pathological response, while only about 20% achieve a pathological complete response. In this scenario, the identification of novel predictors of tumor response to nCRT is urgently needed to reduce LARC mortality and to spare poorly responding patients from unnecessary treatments. Therefore, by combining gene and microRNA expression datasets with proteomic data from LARC patients, we developed an integrated network centered on seven hub genes putatively involved in the response to nCRT. In an independent validation cohort of LARC patients, we confirmed that differential expression of NFKB1, TRAF6 and STAT3 is correlated with the response to nCRT. In addition, the functional enrichment analysis also revealed that these genes are strongly related to hallmarks of cancer and inflammation, whose dysfunction may causatively affect LARC patients' response to nCRT. Furthermore, by constructing the transcription factor-module network, we hypothesized a protective role of the POU2F3 gene, which could be used as a new drug target in LARC patients. Finally, we identified and tested in vitro entinostat, a histone deacetylase inhibitor, as a chemical compound that could be combined with a classical therapeutic regimen in order to design more efficient therapeutic strategies in LARC management.

INTRODUCTION

Colorectal cancer (CRC) ranks third in incidence and mortality globally, with similar numbers in men and women.1 One-third of CRCs are represented by rectal cancer (RC), and 40% of RCs are locally advanced rectal cancer (LARC) at diagnosis. Recently, some studies have shown that colon and rectal cancer differ in terms of their clinical-pathological characteristics, carcinogenic pathways, and therapy.2,3 The standard approach for LARC includes preoperative neoadjuvant chemoradiotherapy (nCRT) followed by total mesorectal excision. The use of nCRT significantly reduces the rate of local recurrence compared to adjuvant therapy or surgery alone.4 However, after nCRT, up to 40%-60% of patients show a poor pathological response, while only about 20% achieve a pathological complete response.5

To date, the most promising biomarker to monitor LARC treatment is the carcinoembryonic antigen (CEA); however, its prognostic potential is insufficient.6 Further, the few studies performed on LARC to identify potential predictors of nCRT response are contradictory and remain inconclusive, principally due to discrepancies in patient selection, sample size, nCRT regimen, and the parameters used to evaluate a patient's response to therapy. Thus, the identification of novel predictors of LARC response to nCRT is urgently needed to reduce LARC mortality and to spare poorly responding patients from unnecessary treatments.6

At the macroscopic level, the complex interactions among genetic, environmental, and lifestyle factors are at the basis of complex diseases, including cancer.7 In parallel, at the microscopic level, complex diseases are typically caused by a combination of molecular alterations and their interplay.7
The reductionist approach devoted to dissecting individual gene biomarkers has been used in almost all existing studies, relying on an underlying hypothesis that changes in gene expression cause and explain different phenotypes. However, it is often not possible to distinguish whether gene expression variations are causative or merely an effect of the intricate rewiring of regulatory and signaling cascades in complex diseases.7 Genes do not work alone; they interact with each other and are controlled by many transcription factors, forming networks or pathways that carry out biological functions. In addition, it has been demonstrated that even if genetic modifications in tumor cells initiate and drive malignancies, the acellular component of the tumor microenvironment (TME), called the extracellular matrix (ECM), coevolves with cancer cells, creating a dynamic signaling circuitry that promotes cancer diffusion and reduces the response to therapy. Taken together, these factors support the use of an integrated computational approach to experiment-driven evidence, including transcriptomic and proteomic datasets, to reconstruct the intricate interaction networks underlying cancer onset and resistance to therapy.

In our study, we applied such a holistic approach to identify novel potential biomarkers for the prediction of LARC patients' response to nCRT. Among these, POU2F3 was identified as a key gene in determining patients' response and survival, and we hypothesized that upregulating it could improve response to chemotherapy. We validated this hypothesis on CRC cell lines by upregulating POU2F3 with the histone deacetylase (HDAC) inhibitor entinostat. Indeed, combination treatment of entinostat with the classical chemotherapeutic 5-fluorouracil (5-FU) was proven to be more cytotoxic than 5-FU treatment or entinostat treatment alone.

Patients

A series of n = 21 paired (matched) healthy rectum (HR) and rectal cancer (RC) tissue samples from LARC patients were collected from the tissue biobank of the First Surgery Clinic (Department of Surgery, Oncology and Gastroenterology, University of Padova). All of the patients fulfilled the following criteria: histologically confirmed primary adenocarcinoma of the rectum, tumor within 12 cm of the anal verge on proctoscopic examination, clinical stage cT3-4 and/or N0-2, resectable disease, and age ≥18 years. Patients with a known history of a hereditary colorectal cancer syndrome and patients who were not nCRT-naive were excluded. Patients' clinicopathological characteristics are summarized in

Tissue decellularization

HR and RC samples destined for the decellularization process were treated with one to three detergent-enzymatic treatment (DET) cycles. Each DET cycle was processed as indicated in reference 9. After decellularization, matrices were rinsed in PBS + 3% penicillin/streptomycin (pen/strep) and then stored at −80°C until use.

DNA isolation and quantification

To assess total DNA content within the native HR and RC compared to decellularized matrices (respectively, DHR and DRC), 20 mg of each specimen was treated using the DNeasy Blood & Tissue kit (Qiagen) following the manufacturer's instructions. DNA samples were then quantified using a NanoDrop 2000 spectrophotometer (Thermo Scientific) at the 260/280 nm absorbance ratio.
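To make the DNA-depletion readout concrete, here is a minimal Python sketch of how the percent reduction in DNA content between native and decellularized tissue might be computed. The ng/mg yields are hypothetical placeholders (not the study's raw data), chosen only so the output matches the ~95.8% reductions reported in the Results.

def percent_reduction(native_dna: float, decellularized_dna: float) -> float:
    """Percent reduction in DNA content relative to native tissue."""
    return 100.0 * (native_dna - decellularized_dna) / native_dna

# Hypothetical DNA yields in ng per mg of tissue, chosen to reproduce
# the ~95.78% (HR) and ~95.81% (RC) reductions reported in the Results.
native_hr, decell_hr = 520.0, 21.94   # healthy rectum vs DHR
native_rc, decell_rc = 610.0, 25.56   # rectal cancer vs DRC

print(f"HR: {percent_reduction(native_hr, decell_hr):.2f}% DNA removed")
print(f"RC: {percent_reduction(native_rc, decell_rc):.2f}% DNA removed")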
DHR and DRC proteomics

A total of n = 6 paired HR and RC (n = 3 TRG 1-2, R; and n = 3 TRG 4-5, NR) decellularized tissues (average weight 2 mg, range 0.5-3.4 mg) were analyzed by mass spectrometry following the protocol published by Naba et al10 and as indicated in the Supplementary methods (Data S1).

Resources and datasets

In our study, we reused our previously published differential genes and microRNAs derived from datasets including n = 86 LARC patients treated with nCRT. The gene expression profile dataset (GSE4540)11 was generated on the GPL570 Affymetrix Human Genome U133 Plus 2.0 Array platform (HG-U133_Plus_2). The microRNA expression profile dataset (GSE68204)12 was generated on the miRNA microarray platform Rel 12.0 (V3) manufactured with Agilent SurePrint technology. These molecules were used to carry out the network analysis in combination with the proteomic data.

Network analysis

To gather a more complete picture of the molecular landscape of LARC response to treatment, we integrated proteomics and transcriptomics data from the above-mentioned datasets. Protein-protein interactions (PPIs) were retrieved using IID v.2020-05;13 microRNA-gene interactions were retrieved using mirDIP 4.1,14 retaining only pathways with an adjusted P-value (BH method) <.01. Because KEGG returned mainly disease pathways, this database was removed from the analysis. For network annotation, each gene was annotated with the pathway it belonged to that had the lowest adjusted P-value (BH method). Gene Ontology enrichment analysis was performed using clusterProfiler_3.16

Cells maintenance and expansion

Identification of differentially abundant proteins between decellularized healthy rectum and rectal cancer

Matched samples from both HR and RC specimens were decellularized using the detergent-enzymatic treatment. The DNA amount was quantified to confirm nuclear content depletion and cell removal. Both HR and RC samples were completely decellularized after one DET cycle, with reductions of 95.78% and 95.81%, respectively (P-value <.001 for both; native HR and RC tissues were used as controls; Figure 1A, E). For each protein identified and quantified, a DRC vs DHR ratio was calculated, and results were expressed as fold change (FC). The Venn diagram presented in Figure 1F shows the proteins shared between groups (Figure 1G,H). R and NR patients were discriminated by the PCA analysis. The heatmap analysis showed that the NR samples cluster together, while among the R samples, two out of three cluster together and one clusters separately. The proteins responsible for the groups' separation were obtained using PLS-DA as a supervised analysis, and the best classification performance was obtained using two components (accuracy 85%, R2 0.98, Q2 0.43). The VIP (Variable Importance in Projection) scores used to identify the important features for model construction are reported in Figure 1I. As can be seen, the COL4A3, COPB1 and MVP proteins were those most responsible for the separation between R and NR. A subsequent Significance Analysis of Microarrays (SAM), designed to address the false discovery rate, confirmed these results, indicating the three proteins as the most significant ones (FDR 0.11, P-values <.001 for all; Figure S1A).

Network analysis, annotation and validation

The ECM proteomic signature discriminating LARC patients' response to nCRT was integrated with our previously identified gene and microRNA sets in order to build a comprehensive multiomic view combining transcriptional, posttranscriptional and proteomic results.
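As a rough illustration of this integration step, the short Python sketch below mirrors the logic described above and in the next section: bridge the proteomic and transcriptomic seed sets through shared protein-protein interactors, then keep only interactors that are also miRNA targets. The toy ppi and mirna_targets dictionaries are hypothetical stand-ins for IID and mirDIP exports, and requiring candidates to be targets of all three miRNAs is an assumption; the paper does not state whether interactors had to be targets of all or only some of them.

from typing import Dict, Set

proteomic_seeds = {"COL4A3", "COPB1", "MVP"}
transcriptomic_seeds = {"CXCL9", "CXCL10", "CXCL11", "IDO1",
                        "MMP12", "AKR1C3", "HLA-DRA"}

def common_interactors(ppi: Dict[str, Set[str]],
                       set_a: Set[str], set_b: Set[str]) -> Set[str]:
    """Genes that interact with at least one member of each seed set."""
    partners_a = set().union(*(ppi.get(g, set()) for g in set_a))
    partners_b = set().union(*(ppi.get(g, set()) for g in set_b))
    return partners_a & partners_b

def mirna_filter(candidates: Set[str],
                 mirna_targets: Dict[str, Set[str]]) -> Set[str]:
    """Keep candidates targeted by all three response-associated miRNAs
    (the all-vs-any choice is an assumption; see the lead-in above)."""
    target_sets = [mirna_targets[m] for m in
                   ("miR-125b-5p", "miR-299-5p", "miR-154-5p")]
    return candidates.intersection(*target_sets)

# Toy interaction tables (hypothetical, not real IID/mirDIP content):
ppi = {"MVP": {"TRAF6", "STAT3"}, "COL4A3": {"NFKB1"},
       "CXCL9": {"STAT3"}, "IDO1": {"TRAF6", "NFKB1"}}
mirna_targets = {"miR-125b-5p": {"STAT3", "TRAF6", "NFKB1"},
                 "miR-299-5p": {"STAT3", "NFKB1"},
                 "miR-154-5p": {"STAT3", "NFKB1", "TRAF6"}}

bridges = common_interactors(ppi, proteomic_seeds, transcriptomic_seeds)
print(sorted(mirna_filter(bridges, mirna_targets)))  # ['NFKB1', 'STAT3']

In the paper's actual pipeline, the analogous PPI bridging step over the full IID tables yielded 77 common interactors, and the miRNA filter narrowed these to the seven central genes discussed next.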
On these bases, we first generated a protein-protein interaction network between COL4A3, COPB1 and MVP (the ECM-derived proteomic set of response to nCRT) and CXCL9, CXCL10, CXCL11, IDO1, MMP12, AKR1C3 and HLA-DRA (a previously published transcriptional gene set linked to response to nCRT).20 We identified a total of 77 common interactors able to connect the transcriptomically identified genes to the proteomic ones (Table S1). Subsequently, we selected only those common interactors that are also targets of our previously published microRNAs linked to response to nCRT in LARC: miR-125b-5p, miR-299-5p and miR-154-5p.21 Based on this, we identified the following seven central genes. In the validation cohort, expression of NFKB1, TRAF6 and STAT3 differed in TRG 1-2 (R) patients after nCRT compared to TRG 3-5 (NR, red line) (respectively P < .001, P < .001 and P < .01; Figure 2B).

Pathway enrichment analyses and gene ontology

Performing pathway enrichment analysis using the sets of genes from the transcriptomics, the proteomics and the seven central genes (combined network), we obtained 50 enriched pathways, mainly linked to inflammation (Figure S1B and Table S2). We then focused on enriched pathways that included genes from at least two of the three gene sets (transcriptomic, proteomic and seven central genes), and 20 pathways were retained (Figure S2 and Table S3). Of these, only Reactome (Table S4). Of these, only the "Extracellular matrix regulation," "Extracellular matrix structure regulation" and "Regulation of vasculature development" molecular functions included all three datasets (Figure 3A). However, only limited data exist for solid cancers.23 On this basis, we sought to determine in vitro whether POU2F3 is targeted by entinostat, and whether this causes its upregulation. Subsequently, in order to validate the in silico data available in the three databases analyzed, we also selected nadolol as a drug candidate reported to have an effect on POU2F3 opposite to that of entinostat.24

Targeting the upstream regulator

HCT-15, HCT-116 and SW480 CRC cell lines exposed to nadolol and entinostat (both at 50 μM) showed, respectively, significantly downregulated and upregulated expression of POU2F3 compared to the same nontreated lines (controls; P-value <.035, <.0001 and <.001, respectively, for nadolol; and P-value <.0001, <.0098 and <.001, respectively, for entinostat; Figure 4B-D). As the nadolol-derived downregulation of POU2F3 was tested only as a control, it was not considered further. Finally, since we hypothesized that the upregulation of POU2F3 could cause the subsequent overexpression of its downstream regulated genes NFKB1, STAT3 and NR3C1, we confirmed that HCT-116 CRC cells exposed to entinostat (at 50 μM for 24 and 48 hours) showed a statistically significant downregulation of NFKB1, but an upregulation of STAT3 and NR3C1 (Figure S3A).

Single and combined cytotoxicity evaluation of 5-FU and entinostat treatments

To test the cytotoxic effect of entinostat alone or in combination with LARC chemotherapeutics, 5-FU was chosen as the reference drug, since it represents the backbone of all chemotherapy approaches in the treatment of LARC patients.4 Thus, HCT-15, HCT-116 and SW480 were treated with either 5-FU or entinostat in the range of 0.1 to 100 μM. As depicted in Figure 5A

DISCUSSION

Currently, the best treatment option for patients with LARC involves nCRT, followed by curative surgery. Unfortunately, after nCRT, only approximately 20% of LARC patients show a complete pathological response, while in 40%-60% of patients, the response is poor or absent.25
Thus, finding predictive molecular markers of response to nCRT would be of great clinical significance to improve the clinical management of LARC patients. Several studies have proposed potential biomarker predictors of response to nCRT in rectal cancer at the transcriptomic, posttranscriptomic and proteomic levels.26 However, none has been successfully applied clinically, mostly due to the difficulty of distinguishing whether gene or protein expression changes are causative or merely an effect of a disease as complex as cancer. In our study, we overcame these limitations by simultaneously investigating transcriptomic, posttranscriptomic and proteomic expression data to develop a multiomic holistic approach that integrates multiomic data so as to reflect the causal relationship between the integrated network and the phenotype.7

First, we integrated the analysis of the tumor ECM, the most abundant compartment of the TME, with the transcriptomic and posttranscriptomic data. Collagen increases compared to nonmetastatic CRC patients and healthy controls.29 The MVP protein is involved in protection against cellular stresses, such as DNA-damaging agents, irradiation, hypoxia, and hyperosmotic and oxidative conditions. The export of DNA-damaging drugs from the nucleus could be a molecular function of MVP involved in drug-resistance phenomena.30,31

Subsequently, in contrast to conventional methods relying on a single omic profile of the cell, we developed a protein-protein interaction network integrating multiomic data across different cohorts to reveal more likely causal relationships between the common interactors and the therapy-resistance phenotype in LARC patients. Conversely, the biological processes were enriched for ECM organization, extracellular structure organization, regulation of vasculature development and different processes in the regulation of both innate and adaptive immunity. In this landscape, it clearly emerges that the deregulated stroma is a hallmark of cancer, able not only to contribute to tumor onset and progression but also to modulate the response to chemotherapy.38 Considering the growing evidence that the immune landscape of CRC is extremely complex and linked to chemoresistance, it is urgent to understand even better the role of the tumor stroma in the molecular mechanisms of immune response and chemoresistance.39,40

Third, by investigating the transcription factor-target interactions, we found that POU2F3 could be the hidden player of the network which, although not directly, acts as a common mediator of the different deregulated molecular mechanisms discussed above, and may be the target of effective drugs. Indeed, we demonstrated that an increased expression of POU2F3 in the rectal cancer cohort of the TCGA elicits a protective role by increasing patients' survival. POU2F3 (POU class 2 homeobox 3; also known as SKN-1a/OCT-11) is a transcription factor required for the generation of a rare chemosensory cell type found in the gastrointestinal and respiratory tracts, known as tuft cells.41 Tuft cells' primary function is the release of bioactive immunomodulatory molecules in response to external stimuli, and this immune control at the TME level could have a crucial role in determining drug resistance.42,43 Also, recent studies have reported that POU2F3 expression has been used to recognize different types of pulmonary neuroendocrine cancer, including small cell lung cancer (SCLC).44
In conclusion, in this article we presented a new network-based approach to identify putative hidden biomarkers of complex diseases and novel treatment options by integrating multiomic data with network information. Analyses based on such an integration can potentially lead to new insight into complex diseases at the systems level. By combining transcriptomic and posttranscriptomic datasets with proteomic data from LARC patients, we developed a reconstructed interaction network of seven central genes putatively involved in the response to nCRT. We identified several common interactors, which could serve as biomarkers for predicting patient response to nCRT. In addition, the functional enrichment analysis revealed that these identified targets are strongly related to hallmarks of cancer and inflammation, whose dysfunction may affect LARC patients' response to nCRT. Furthermore, by constructing the transcription factor-module network, we hypothesized a protective role of the POU2F3 gene, which could be used as a new drug target in LARC patients. Finally, we identified entinostat as a chemical compound which, by targeting POU2F3, could be combined with a classical therapeutic regimen in order to design a more efficient therapeutic strategy in LARC.

AUTHOR CONTRIBUTIONS

Edoardo D'Angelo and Chiara Pastrello: planned and conducted the study, collected and interpreted the data, and drafted the manuscript.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

DATA AVAILABILITY STATEMENT

The raw mass spectrometry data generated in our study have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD035408. Other data that support the findings of our study are available from the corresponding author upon request.

ETHICS STATEMENT

Our study was conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was obtained from every enrolled patient, and the protocol was approved by the institu-
Cognitive science meets the mark of the cognitive: putting the horse before the cart

Among those living systems, which are cognizers? Among the behaviours of, and causes of behaviour in, living systems, which are cognitive? Such questions sit at the heart of a sophisticated, ongoing debate, of which the recent papers by Corcoran et al. (2020) and Sims and Kiverstein (2021) serve as excellent examples. I argue that despite their virtues, both papers suffer from flawed conceptions of the point of the debate. This leaves their proposals ill-motivated: good answers to the wrong question. Additionally, their proposals are unfit to serve the legitimate roles for characterizations of cognition.

Introduction

Among those living systems, which are cognizers? Among the behaviours of, and causes of behaviour in, living systems, which are cognitive? Such questions sit at the heart of a sophisticated, ongoing debate (e.g., Adams 2019; Barandiaran and Moreno 2006; Brancazio et al. 2020; Godfrey-Smith 2016a; Lyon 2020; Van Duijn et al. 2006). It is important that 'cognition' be understood correctly in this context. There is a sense of 'cognition', subject to much debate, in which there might be a natural distinction between cognition and perception; similarly, 'cognition' is also used in contrast to emotion. Neither is the sense relevant here, however. 'Cognition' in this context is a notion that includes at least some examples of emotion and perception; indeed, for Sims and Kiverstein (2021), affect, given its role in their account of allostasis (their preferred mark of the cognitive), is essential to cognition.

Counterfactuals or allostasis?

The recent papers by Corcoran et al. (2020) and Sims and Kiverstein (2021) are among the latest in a sizeable debate about the mark of the cognitive and the nature of cognition. Both papers integrate careful discussion of examples, broader biological and cognitive theoretical frameworks, and the aims of cognitive science, in order to reach their conclusions. Both papers are also grounded in the same research paradigm: active inference and the free-energy principle (FEP; e.g., Friston 2012, 2013; Friston et al. 2006; Pezzulo et al. 2015).

Corcoran et al. (2020) argue that the capacity for disengaged, counterfactual cognition, underwritten by a capacity for decoupled representation, and supported by a deep hierarchical model of the environment, is what makes a system a true cognizer. They situate their argument in relation to Godfrey-Smith's (1996) environmental complexity thesis, according to which cognition is fundamentally a tool for dealing with environmental complexity, notably that introduced by the presence of other living systems. They claim that the capacity for counterfactual cognition marks a significant discontinuity in the way systems are able to deal with environmental complexity, and plausibly maps onto Godfrey-Smith's (2002a, 2002b, 2016a, 2016b) proposed distinction between true cognition and mere proto-cognition (where 'proto-cognition' is the name for those ways of dealing with environmental complexity which resemble, but do not count as, cognition).

Sims and Kiverstein (2021) deny that counterfactual cognition is necessary for cognition. They propose instead that a capacity for minimization of expected free energy is all that is required for true cognition (they talk variously in terms of 'cognitive behaviour' and 'cognitive causes of behaviour').
Minimization of expected free energy requires selection of action policies that minimize expected future surprise (e.g., Friston et al. 2015; Parr and Friston 2019; for further discussion see Millidge et al. 2021). They appear to suggest that minimization of expected free energy is the interesting feature of counterfactual cognition from the perspective of the FEP, and indeed it is minimization of expected free energy that Corcoran et al. emphasise is enabled by counterfactual cognition (e.g., Corcoran et al. 2020, p. 32). However, Sims and Kiverstein argue for an interpretation of the FEP that does not make strong commitments about the representational apparatus of the described systems, instead claiming that by 'complementing' their environments, self-maintaining systems 'embody' a generative model of that environment. They then argue that on such a construal, minimization of expected free energy is to be found much more widely than anything that can obviously be described as a capacity for counterfactual cognition. In particular, minimization of expected free energy is entailed, they claim, by the kind of prospective, anticipatory action involved in allostasis. Such actions are to be found in systems as simple as a single E. coli bacterium, or so they argue.

The second part of their objection to Corcoran et al.'s proposal is that 'cognition' should be understood in a way that is geared towards finding 'gradations in [the] complexity of cognition', and so that cognition '[shades off] into more basic biological process' (Sims and Kiverstein 2021, p. 24). In contrast, they claim that Corcoran et al.'s proposal, counter to this aim, is geared towards identifying a 'sharp discontinuity' between the genuinely cognitive and the proto-cognitive. Defining 'cognition' so that it lines up with such a sharp discontinuity has two disadvantages, they claim: first, it means that apparently cognitive capacities, such as memory and learning, might be found in systems classed as noncognitive by dint of falling the wrong side of the line; secondly, it entails an 'over-intellectualisation of cognition' (ibid.), an idea that they flesh out with an appeal to Morgan's canon (pp. 25-26; discussed further below).

Sims and Kiverstein do not deny that Corcoran and colleagues latch onto an interesting kind of (cognitive) system, specifically, one with a deep hierarchical model that enables a capacity for decoupled representation, and hence the kind of disengaged, counterfactual reasoning that we associate with the most impressive instances of human thought (see also Clark and Toribio 1994). In particular, Sims and Kiverstein hold that Corcoran et al.'s proposal identifies cognition with a capacity that is too 'intellectual' to be correctly identified with cognition, marked by too sharp a discontinuity to encourage the search for gradations and shading-off, too exacting to apply to systems that can nevertheless apparently be ascribed such capacities as memory and learning, and not directly related to any FEP-theoretic capacity (although Corcoran et al. claim it is necessary for expected free energy minimization, the bulk of Sims and Kiverstein's argument works towards the denial of this claim).
Sims and Kiverstein, as such, propose a capacity to be identified with cognition that is directly lifted from the FEP (expected free energy minimization), shades off into more basic biological capacities, and plausibly applies to all living systems that can be described as learning or remembering (since it plausibly applies to all living systems).

What's cognition for?

Before I argue against the way the debate currently proceeds, I want to try to find some stable ground by clarifying the point of the concept of cognition (for discussion of the points of concepts, see Queloz 2019; Thomasson 2020). Towards the very beginning of this paper, I stressed that the notion of 'cognition' at play here is not the one that gets contrasted with perception or emotion, but the one that includes both perception and emotion. I did not, however, consider what the point of this notion of cognition is: what it is for.

First, I argue that the explicit discussions of the point of the concept of cognition offered by Corcoran et al. (2020) and Sims and Kiverstein (2021) are insufficient on their own to tie down the debate (Sect. Proposals from the papers). Next, I argue for a key point of common ground, the link between the domain of cognitive science and the concept of cognition (Sect. Cognition and cognitive science). Finally, I bring out the commonalities and differences between the two proposals under consideration by placing them in a taxonomy of different sorts of view of the link between cognitive science, its domain, and the concept of cognition (Sect. Counterfactuals and allostasis as target domains).

Proposals from the papers

Both Corcoran et al. (2020, p. 32) and Sims and Kiverstein (2021, p. 24) suggest that the notion is for explaining the relationship between life and cognition (see also Van Duijn et al. 2006), as well as suggesting that it is for distinguishing between cognitive and noncognitive phenomena. However, on their own, these proposals for the point of cognition are insufficient. To say that a concept is for distinguishing those things that fall under it from those that do not seems, at best, trivial, since all concepts with extensions play this role (cf. Cappelen 2018). At first blush, it does not seem to help much to say that the concept of cognition is for explaining how cognition arises from life. Van Duijn et al. (2006) propose that cognition should be identified with sensorimotor control; Corcoran et al. that it should be identified with counterfactual reasoning; Sims and Kiverstein with expected free energy minimization, as indicated by allostasis. None appears to deny the existence of the capacity called 'cognition' by the others, nor that the relationship of each proposed capacity to life is an interesting candidate for explanation. It is possible to explain how sensorimotor control, expected free energy minimization, and counterfactual reasoning arise from life, and worthwhile to do so, whether or not any of these capacities is called 'cognition'; furthermore, labelling any of these capacities as 'cognition' appears to do no explanatory work over and above explaining how these capacities arise from life. These issues might, however, be solved by embedding the proposals in a broader body of theory, or by further specifying what is at stake in distinguishing between the cognitive and the noncognitive.
Both papers do embed their proposals for the point of cognition in broader bodies of theory, although for reasons I lay out below, I believe that it is not enough to save either proposal. Corcoran and colleagues appeal to the environmental complexity thesis, and this is an important part of the framing of their paper. Godfrey-Smith (1996) sets up the environmental complexity thesis as a theory about the core adaptive advantage generally conferred by those capacities we count as 'cognitive'. He later weakens the theory somewhat, dropping the idea that it is the 'core' or 'fundamental' advantage conferred (2002a, b). There are two key points here about the way Godfrey-Smith sets up the thesis, both of which are in tension with the way Corcoran et al. mobilize the thesis in their paper. The first is that Godfrey-Smith is setting up a non-trivial, empirical generalization about the capacities that we call 'cognitive', not stipulatively defining cognition as 'that which is used to deal with environmental complexity' (see especially 2002a; for more on the difference, see Sects. Counterfactuals and allostasis as target domains, Against prescribing a target domain, Targetless characterizations of cognition). Secondly, although Godfrey-Smith insists on a distinction between nongenuine, 'proto-' cognition and genuine cognition (for criticism, see Lyon 2020), he also insists that this boundary is likely to be irredeemably vague, and unhelpful to try to precisify (see especially 2002a). Conversely, Corcoran et al. propose to define cognition such that it is a special way of dealing with environmental complexity, largely to make the distinction between proto- and genuine cognition precise. This is not only in tension with Godfrey-Smith's views, but also undercuts the empirical nature of the thesis. This is not exactly a fatal flaw, but it does render the appeal to Godfrey-Smith somewhat confusing, and does not clarify what Corcoran and colleagues see as the point of the concept of cognition. Corcoran et al. (2020, p. 32) do express some disagreement with Godfrey-Smith, suggesting that talking of non-cognitive (by their lights) systems as cognitive, or as grading into the cognitive, may 'obscure a fundamental discontinuity' (emphasis in the original), but this surely presupposes either that their definition of cognition is correct, or that there can be no significant discontinuities between cognitive systems.

The framing of Sims and Kiverstein's paper centres on an extended appeal to Morgan's canon. In particular, they hold that Morgan's canon and attendant worries about animal psychology place a double burden on theorists: the burden of avoiding underestimating the complexity of seemingly simple systems like bacteria, while also avoiding overintellectualizing their capacities. For them, 'underestimating' a system appears to mean not labelling it as 'cognitive' when it ought to be, and 'overintellectualizing' a cognitive capacity (or 'cognitive achievement'; 2021, p. 25) appears to mean describing its operation in excessively sophisticated terms (e.g., describing E. coli's anticipatory allostatic behaviour as supported by counterfactual reasoning). This latter worry does not seem directly to speak to whether or not a phenomenon should be labelled cognitive, since it applies only to phenomena already acknowledged as cognitive. The former worry, that it risks underestimating seemingly simple systems to deny them 'cognitive' status, is more directly relevant.
The basic issue is that adopting a more restrictive definition of 'cognition', and thereby denying the cognitive status of, e.g., bacteria, need not 'underestimate' bacteria or their achievements. Say a certain species of bacteria is capable of rudimentary forms of epistemic action. Imagine a theorist who claims that only systems capable of consciously undertaking epistemic actions are cognitive, denies this kind of bacteria consciousness, and therefore denies that they are cognitive. This does not mean that the theorist 'underestimates' this kind of bacteria; the theorist might fully acknowledge, and be wholeheartedly blown away by, the basic forms of epistemic action that it undertakes. They just might also think there are reasons not to label such behaviour, impressive though it may be, as 'cognitive'. They might, for example, think it is amenable to saliently different models, or that it belongs to a class of interesting phenomena so disparate that cognitive science would dissolve if it were to adopt this class as its subject-matter. Analogously, to deny that an extremely sophisticated robot is 'alive' is not necessarily to deny the impressiveness of its achievements; it might merely reflect a theoretical preference for a notion of life according to which it essentially arises from protracted processes of natural selection.

Julian Kiverstein is generously serving as a reviewer on this paper. He has buttressed this appeal to Morgan's canon by clarifying two worries that lie behind it. One worry is about 'researchers that take human cognition to be the standard of what counts as cognitive relative to which all non-humans fall short.' The other worry is that 'many researchers assume non-cognitive behaviour to be rigid and inflexible whereas this is not the case.' I share the view that both classes of researcher are mistaken. The first issue, that many researchers take human cognition as the standard, is a genuine issue in my view, and a genuine problem to be solved, but not a problem that can be solved by a mark of the cognitive (see also Sects. Counterfactuals and allostasis as target domains; Philosophical prescriptions in cognitive science). Sims and Kiverstein's argument presupposes that the human case is not the standard, and argues from that presupposition to a characterization of cognition. If it is intended as a refutation of researchers who think that 'cognition' is defined in relation to humans, it fails, because it begs the question against those researchers.

On to the second issue, that many researchers wrongly assume that noncognitive behaviour is rigid and inflexible. One way to understand this worry renders it irrelevant: on this reading, there is genuinely noncognitive behaviour that is nonrigid and flexible, and researchers wrongly assume that it is rigid and inflexible. This first way of reading the worry seems to me to undercut, not support, Sims and Kiverstein's argument, since one who wishes to deny cognitive status to bacteria can just highlight that there is this oft-neglected category of nonrigid, flexible, yet noncognitive behaviour for bacteria to find a home in. This seems especially true where their opponents, Corcoran et al., are supporters of the free energy principle (which ostensibly identifies a wide domain of flexible, nonrigid capacities and processes), but wish to identify cognition with only one small part of this domain. They, surely, do not therefore believe that the living world divides up into the cognitive and the rigid-and-inflexible.
The second way of reading this worry is as identifying a tension to be solved by a liberalized notion of cognition: 'researchers think that all noncognitive behaviour is rigid and inflexible, so let's call all the nonrigid, flexible behaviour cognitive!' I have a lot of sympathy for this position, as it happens, but if this is the argument, I do not think it benefits from being framed as offering a mark of the cognitive (see also Sect. Philosophical prescriptions in cognitive science). Most of the paper, if this is the argument, is rendered confusingly irrelevant.

In sum, I am not convinced that either paper offers sufficient resources to assess their proposals without further constraints. This in itself is not intended as any great criticism of the papers, since I believe that their arguments proceed by presupposing a widely-held view of the point of the concept of cognition.

Cognition and cognitive science

This brings us to a point of fairly widespread agreement in the debate, which I believe can be used as a fixed point to explore these proposals: cognition is for demarcating the domain of cognitive science (Akagi 2018; Allen 2017; Keijzer 2021; Ramsey 2017). There are historical reasons to suspect that the notion of cognition in play is that which is defined in relation to cognitive science; as Boden (2006) points out, prior to the founding of cognitive science, cognition was defined to exclude emotion and affect. Cognitive science, as a self-conscious, interdisciplinary exercise, arose in the 1950s, although it grew in part out of the cybernetics of the 1940s. Cognitive science, however, was not yet called 'cognitive science'; much of the work in the 1950s went under the simple name 'computer simulation', until the term 'cognitive studies' took hold in the early 1960s, before gradually morphing into 'cognitive science' by the mid-1970s (Boden 2006).

There are a few main reasons that the term 'cognitive' took off, according to Boden (2006), based on the account of those primarily responsible. Although the term was, at the time, defined to exclude emotion and affect, no one wanted to exclude those things from being part of the domain of cognitive science. Instead, they wanted to mark a contrast with behaviourism, and offer a characterization of the new psychology's subject-matter that seemed less trivial and redundant than 'mental'. In the context of the early 1960s, where many cognitive scientists were focussing on cognition (in the narrow sense of perception, language, memory, and problem solving), the term seemed a natural fit (Boden 2006). Through these historical accidents, the term 'cognition' came to be associated with a new concept, one whose point is to pick out the subject-matter of cognitive science.

Beer (2021) recently discussed the origin of the phrase 'minimal cognition', a bastardization of his 'minimally cognitive behavior'. When offering a putatively representation-free account of certain organism-level behaviours, Beer found to his frustration that his work was often viewed by cognitive scientists as irrelevant. His work was perceived as irrelevant for reasons best captured by Clark and Toribio (1994): the worry was that the behaviours he modelled were too importantly disanalogous from, and too simple compared with, paradigmatic, 'genuine' cognition to be relevant to cognitive scientific debates over representation.
The phrase 'minimally cognitive behavior' was intended by Beer to get around this problem, and to capture the idea of 'the simplest behaviour deemed worthy of a cognitive scientist's attention'.1 This vignette contains, I think, a deep truth about the concept of cognition: that it is used, at the most abstract level, to demarcate the domain to which cognitive scientists ought to pay attention.

Counterfactuals and allostasis as target domains

Even accepting that the concept of cognition is for demarcating the domain of cognitive science, this leaves two key background issues unsolved. The first is the issue of what might be called the 'direction of fit' (Anscombe 1957; Platts 1979, p. 257). Some proposals are based on the idea that it is up to cognitive science to gradually determine and discover its proper domain,2 and that the concept of cognition is defined to refer to this to-be-revealed domain, whatever it may turn out to be (e.g., Allen 2017; Figdor 2017, 2018; Newen 2017; see also Peirce 1878). I will refer to this class of proposals as 'targetless', and the other class as 'target-driven'. Unlike targetless proposals, target-driven proposals identify a target domain, containing all and only the things that cognitive science ought to study.3

Targetless proposals see cognitive science as gradually expanding or shrinking its current remit through the interaction of, at least, the goals of cognitive science, the scope of its models and methods, the original pretheoretic area of interest, the paradigm cognitive capacities that cognitive science originally set out to explain, and perhaps paradigm cognitive systems (normally humans; e.g., Rupert 2013; cf. Figdor 2018; Lyon 2006). Importantly, targetless proposals need not be so flat-footed as to claim that anything the tools of cognitive science can explain forms part of its domain (cf. Ramsey 2017). Although there are key differences, many targetless views of cognitive science see it as proceeding by working outwards from certain paradigmatically cognitive capacities and systems, incorporating more capacities depending on certain sorts of salient similarity to these paradigmatically cognitive capacities, and incorporating more systems depending on whether they instantiate these capacities (e.g., Allen 2017; Figdor 2017, 2018; Lyon 2015; Newen 2017). Paradigmatically cognitive systems and capacities do not constitute a 'target domain' because it is essential to the way that 'targetless' cognitive science proceeds that this class, the 'paradigmatically cognitive', be used also to identify potential new targets of explanation. 'Working outwards' from the paradigmatically cognitive is guided and heavily informed by amenability to similar models and methods, relevance to the core interests of cognitive science, and other dimensions of salient similarity. A core idea of such views is often that we should let 'the productivity of research programs in cognitive science guide the extension of language to new contexts' (Allen 2017, p. 4240).

A brief tangent is required here. One might worry that acknowledging 'paradigm' cognitive systems, especially if this is set partly in deference to the actual historical remit of early cognitive science, begs the question in favour of 'anthropocentric' and against 'biogenic' approaches to cognition. It does not. Importantly, it would not mean humans are 'more' cognitive, but rather that they are more useful in judging whether another system is 'cognitive'.

Of course, it is possible to deny that humans are the paradigmatic cognitive systems, and even that there are paradigmatic cognitive systems (e.g., Figdor 2018). However, there may be methodological justifications for treating humans as the paradigm case, for example, a special interest in explaining human capacities (e.g., Heyes 2014, 2015; Wundt 1907). Even conceding that humans are the paradigm cognitive systems and that they have a special place in the goals of cognitive science does not guarantee an anthropocentric approach. Indeed, without treating humans as the paradigm case, it is hard to understand many of the key arguments for the biogenic approach. For example, Lyon (see especially 2022) argues that there are 'basal' cases of cognition in extremely simple biological systems by arguing that these basal cases bear salient similarity to human cases, especially in being amenable to similar models, and most of all in their relevance to explaining the human case. The relevance of such concerns presupposes and hinges on Lyon treating humans as paradigm cognitive systems in the relevant sense.
Of course, it is possible to deny that humans are the paradigmatic cognitive systems, and that there are paradigmatic cognitive systems (e.g., Figdor 2018). However, there may be methodological justifications for treating humans as the paradigm case, for example, a special interest in explaining human capacities (e.g., Heyes 2014Heyes , 2015Wundt 1907). Even conceding that humans are the paradigm cognitive systems and that they have a special place in the goals of cognitive science does not guarantee an anthropocentric approach. Indeed, without treating humans as the paradigm case, it is hard to understand many of the key arguments for the biogenic approach. For example, Lyon (see especially 2022) argues that there are 'basal' cases of cognition in extremely simple biological 1 Page 10 of 24 systems by arguing that these basal cases are salient similarity to human cases, especially in being amenable to similar models, and most of all on their relevance in explaining the human case. The relevance of such concerns presupposes and hinges on Lyon treating humans as paradigm cognitive systems in the relevant sense. Conversely, target-driven proposals are based on the idea that the concept of cognition provides a target at which cognitive science ought to aim. Ramsey (2017, p. 4207) expresses the core idea of such proposals: that cognitive science and cognition should be defined 'in terms of its relevant explananda, in terms of what it is we want explained'. This latter class of proposals faces a second issue. Some are presented as nonrevisionary, and see the concept of cognition as at least roughly the same as the intuitive folk concept of mind: on such a view, cognitive science ought to aim to explain those phenomena that intuitively count as 'mental' or 'psychological'. For example, Ramsey (2017) articulates a nonrevisionary target-driven proposal according to which cognitive science requires a target domain. In particular, he claims that cognition is best understood 'as a crudely defined cluster of capacities and mental phenomena', and that '[a] theory is a cognitive theory if it helps us to understand a capacity or process or phenomenon that we are pre-disposed to regard as psychological in nature' (p. 4208). Here, Ramsey appears to treat 'mental', 'cognitive', and 'psychological' as synonymous. Another proposal along these lines is offered by Clark (2011). Other target-driven proposals are presented as revisionary: the intuitive concept of mind is seen as an inappropriate target domain, and a new, more appropriate target domain is offered. These revisionary target-driven proposals have some similarities with targetless proposals: they tend to be informed by trends in cognitive science, views about the possible range of its models, and so on. Even so, they ultimately aim to set a target domain for cognitive science, rather than primarily seeing the proper domain of cognitive science as something to be revealed as cognitive science progresses and matures. For example, Keijzer (2021) articulates a proposal that like Ramsey's is targetdriven, claiming that it is desirable that cognitive science have a 'clear and stable' target domain (p. 137), but differs on the proper target domain. His proposal is offered as revisionary, claiming that initially, the target domain of cognitive science was the mind, or at least 'remained intrinsically bound up with the pre-existing and long-standing notion of the mind' (p. 138). 
The term 'cognition', he claims, was adopted because it 'provided a scientific, naturalistic phrase that stressed a modern non-dualistic view on the mind that could be articulated in terms of information processing and computation' (ibid.). However, he thinks that the domain of cognitive science should be tied to an 'empirical scientific concept' that can be 'adapted to scientific findings and theorizing' (p. 146). Additionally, he thinks that so long as cognition is tied to mind, it cannot play this role because '[m]ind is a key concept within our culture that is central for many topics ranging from responsibility, free will, using reasons, being rational, and so on'. He thinks that the intuitive concept of mind therefore frustrates the ability of cognitive science to acquire a stable target domain (see also Clark 2010). His proposal is therefore to set cognition free, and untether it from mind. Even so, he proposes a new target domain for cognitive science: cognitive science, in Keijzer's view, ought to study all living systems, and in particular, it ought to focus on studying 'cobolism', 'the systematic ways in which each living system encompasses structures, processes and external events that maintain the fundamental metabolic processes that constitute the core of each living system' (2021, p. 137). Rather than approaching the life-cognition boundary by aiming to distinguish between living and cognitive systems, as Corcoran and colleagues do (see Sect. Proposals from the papers), Keijzer's approach is to focus on the distinction between cognition and metabolism as aspects of living systems. This approach is also precedented in the work of Godfrey-Smith (see especially 2016b).

There is another important distinction among target-driven proposals. Strongly target-driven proposals specify a target domain for cognitive science which is also supposed to be its ultimate domain. This tends to be tied to the view that cognitive science is (or at least ought to be) the study of some currently-specifiable natural kind (e.g., Adams 2018).4 It is this kind of view that Allen (2017, p. 4234) accuses of proceeding by 'definitional fiat', and that Keijzer (2021, p. 147) accuses of 'conceptual stipulation'. Weakly target-driven proposals give up on the idea that the current target domain of cognitive science should also be presented as the ultimate domain of cognitive science. Instead, target domains are understood as at least somewhat provisional and revisable in light of empirical discoveries. Keijzer sees such target domains as part of '[a] standard scientific bootstrapping process where theorizing and empirical work coevolve' (2021, p. 147).

I believe that Sims and Kiverstein's (2021) proposal is best understood as an elaboration of Keijzer's (2021) position, and hence as a revisionary, weakly target-driven proposal, offering a target domain for cognitive science. The link between Sims and Kiverstein's account and that of Keijzer is confirmed by Kiverstein in his role as a reviewer on this paper. Sims and Kiverstein elaborate the nature of Keijzer's 'cobolism' by offering a formal, free-energy theoretic account of allostasis (as minimization of expected free energy), where allostasis is among the most fundamental and most important forms of cobolism (see also their footnote 9). The view of Corcoran et al. (2020) is not so obviously tied to any of the approaches discussed above.
I do not think that it is charitable to interpret their proposal as targetless, largely for reasons I discuss in Sect. Targetless characterizations of cognition. Additionally, I am not sure how one might justify their proposed mark of the cognitive on such a view. The best option I can think of is that one might think that cognitive science will stop at the first major discontinuity (in ways of dealing with environmental complexity) that one reaches as one moves away from what they see as the paradigm cognitive system, humans. According to Corcoran and colleagues, this is the discontinuity between systems with hierarchical architectures, and systems with architectures that support counterfactuals. Absent a reason that cognitive science ought to, or is likely to, stop at this discontinuity, though, such a proposal would be unmotivated.

Taking their proposal as target-driven, I think that it is clearly revisionary. Their definition of cognition is too restrictive to align with any intuitive notion of mind or mentality: disengaged counterfactual cognition is a small part of our 'mental' lives, and describes the activity of very few of our 'mental' capacities. Its closest link to an intuitive notion of mind is to the idea of 'having a mind'. Relatedly, they are particularly interested in demarcating which systems are cognitive (see especially the first paragraph of p. 32, and the appeal to Godfrey-Smith therein). Even here, the intuitive notion of having a mind does not line up precisely with their technical notion of being a cognitive system, since they are willing to deny cognitive status to systems capable of 'learning, memory, and decision-making' (p. 31; this is critiqued by Sims and Kiverstein, p. 25). Even so, one might think (along the lines of Keijzer) that having a mind is not a useful scientific notion. One might, on such a view, see Corcoran et al.'s proposal as identifying the scientifically interesting category of systems closest to the 'folk' notion of having a mind. It is not clear to me whether their proposal is weakly or strongly target-driven, but I will dismiss both kinds of approach in Sect. Against prescribing a target domain.

In this section, I have tried to find some common ground from which to assess the two proposed marks of the cognitive. In Sect. Proposals from the papers, I argued that neither Sims and Kiverstein, nor Corcoran and colleagues, explicitly offer a satisfactory account of the point of the concept of cognition, and therefore of the stakes of the debate. In Sect. Cognition and cognitive science, I argued that the core point of the concept of cognition is demarcating the domain of cognitive science. In this section, Sect. Counterfactuals and allostasis as target domains, I considered two further background issues (the direction of fit between the domain of cognitive science and the concept of cognition, and the relationship between the concept of cognition and the concept of mind), in order to better flesh out the nature of the two proposed marks of the cognitive. I suggest that both are best understood as revisionary target-driven proposals, trying in an empirically and theoretically informed manner to find relatively stable target domains for cognitive science, severing the link between cognition and the intuition-governed folk notion of mind.

It is worth noting that the distinction between targetless, weakly target-driven, and strongly target-driven proposals crosscuts the question of whether there is a mark of the cognitive.
Strongly target-driven proposals identify a mark of the cognitive that characterizes both the target domain of cognitive science and the ultimate domain of cognitive science. One way to look at weakly target-driven proposals and targetless proposals is as denying that there is a mark of the cognitive, because they deny that any characterization should play both roles. A more liberal understanding of the 'mark of the cognitive' might identify the mark of the cognitive with whatever characterization fulfils just one of these roles. One could then construe the characterizations of provisional target domains as provisional marks of the cognitive. Alternately, one could construe the mark of the cognitive as being whatever cognitive-scientific properties demarcate the ultimate domain of cognitive science. For a supporter of targetless proposals, this is the only kind of 'mark' that might exist. Here, there is room for disagreement among proponents of targetless proposals, and among proponents of weakly target-driven proposals: the positions as I have characterized them do not obviously have any entailments regarding the existence of a mark of the cognitive in this sense. They do, however, entail that if there is a mark of the cognitive, it cannot be known to us presently, since we cannot know the ultimate boundaries of the domain of cognitive science without first answering all the empirical and practical questions that appropriately inform the placement of this boundary. The mark of the cognitive, in this sense, can only follow along behind the practice of cognitive science; it cannot take the lead.

What's the point of characterizing cognition?

If the above is correct, then the two proposed marks of the cognitive represent two diametrically opposed revisionary target-driven proposals, each couched in free-energy theoretic terms. Sims and Kiverstein (2021) follow Keijzer (2021) in suggesting a broadening of the target domain compared with the folk notion of mind, while Corcoran et al. (2020) suggest a narrowing of the target domain. Each settles on a theoretically interesting target domain that ties in interesting ways into the life sciences more generally, and especially evolutionary theory.

I may be wrong in this. However, it does not matter to my argument. I prefer to see these proposals as target-driven, suggesting target domains for cognitive science, because, if this is their aim, then they have many features that are virtuous in such proposals. However, as I will argue below, this is an illicit aim (Sect. Against prescribing a target domain). Given this, many of the features of these proposals are serious vices in my view (Sect. Targetless characterizations of cognition). It does not matter if I am wrong about the intended direction of fit because even if I am, the proposals have features that are undesirable for targetless proposals. In Sect. Philosophical prescriptions in cognitive science, I clarify that my opposition to target-driven characterizations of cognition is not allied to an opposition towards philosophical prescriptions for cognitive science, before concluding.

Against prescribing a target domain

One possible role for characterizations of cognition (of which I see 'marks' of cognition as a special case) is to specify the target domain that cognitive scientists ought to study; that is, a characterization of cognition may specify the content of a target-driven proposal about the concept of cognition.
Such characterizations might reasonably be expected to be clear and precise, and to pick out a category of reasonable scientific and broader theoretical interest. If their goal is to find a suitable, principled target domain that might be assigned to cognitive science, then I believe that the papers by Sims and Kiverstein and Corcoran and colleagues do about as good a job as possible at this task. Each identifies an interesting category of interrelated phenomena that are closely related to the paradigm cases in cognitive science's remit. However, I do not believe that this task ought to be performed-I do not believe in prescribing cognitive science a target domain, provisional or not, and so I do not believe that characterizations ought to be used to play this role. I will first dismiss Ramsey's (2017) argument for prescribing cognitive science a target domain, before offering two brief arguments against doing so. Ramsey's argument is especially significant because it is the basis for Keijzer's claim that 'to get started, a target domain must be chosen' (2021, p. 147; see also p. 139).

The argument Ramsey (2017) offers for holding that cognitive science and cognition should be understood in terms of a given domain of target phenomena and capacities in need of explanation is that this is 'the standard way sciences are defined' (p. 4207). He offers the example of geology, which he sees as studying '[roughly] the formation of mountains and rocks and minerals and so on.' Interestingly, he also mentions chemistry, claiming that it deals with a very different, albeit overlapping, set of phenomena to geology. He does not specify the subject-matter of chemistry. I think he would have a great deal of trouble if he were to try to do so in similar terms. He would, I think, have a similar amount of trouble trying to specify the subject-matter of physics. The problem, compellingly identified by Hempel (1969) in a rather different context, is that the correct, final domain for physics, and its current domain, come significantly apart. The history of physics is littered with disputes about what physical phenomena there are and what phenomena are physical, as well as discoveries of new physical phenomena, and radical changes in our conception of the domain of physics (see Chomsky 2002; Wilson 2006). The same is true of chemistry, especially given its interactions and boundary disputes with physics (Chomsky 2002). Indeed, a major milestone in the maturation of physics was the abandonment of a target-driven view of its domain as the 'material', understood as comprising mechanisms that operated on principles of motion and contact (one might think that cognitive science is undergoing a similar development). Saliently, psychology has not operated by taking a target domain according to many historians of psychology, instead progressing in a disorderly manner as techniques, interests, and practical goals develop (Danziger 1990, 1997; Leahey 2018; Rose 1985; Smith 1988). Even more worrying for Ramsey's account, it does not appear that even geology functions with a set target domain. As geology progressed over time, it accrued techniques in service of answering certain questions (particularly the origin of the Earth), and its domain apparently shifted when other pressing questions came along which these techniques could help with (for example, how to find valuable minerals, and later oil).
Hemeda (2019, p. 2) characterizes geology as 'the study of the character and origin of the Earth, its surface features and internal structure' but highlights as advantageous that this characterization has allowed geology the flexibility more recently to consider 'the atmosphere, biosphere and hydrosphere' as (partly) geological phenomena (see also Sect. Targetless characterizations of cognition). Additionally, according to one popular understanding of the history of geology, the Moon and its craters became securely 'geological' phenomena when it was discovered that they were amenable to geological, in particular stratigraphic, analysis (Hemeda 2019). The point here is that even if some sciences are defined with respect to a target domain, this is far from standard practice, and for many mature sciences is simply not the case (see also Allen 2017). Ramsey's argument from standard practice therefore fails.

There are two further reasons not to believe that cognitive science proceeds by targeting a set domain of phenomena. The first, highlighted by Newen (2017) and Miller (2003), is that core 'cognitive' phenomena like human memory, planning, and perception are also studied by other sciences, such as molecular biology, economics, sociology, and the medical sciences. It is not merely that there is a small overlap between the (uncontroversial) domain of cognitive science and the domains of other sciences, as between geology and chemistry. Instead, the domain of cognitive science is almost completely shared with other disciplines, which are distinguished from cognitive science primarily-contra Ramsey-by their approach to that domain. The second reason is that the domain of cognitive science has in fact been hugely unstable, and has expanded through discoveries of salient similarity between phenomena that were at the time uncontroversially part of the domain of the discipline, and those that were not uncontroversially part of its domain (including amenability to similar models and methods, and relevance to some of the practical goals of cognitive science). Consciousness, emotion, affect, allostasis, and the contemporary notion of stress were not uncontroversially part of the domain of cognitive science at its inception. In fact, they were discussed barely if at all. Even so, emotion and consciousness became an uncontroversial part of its domain as the science progressed, the range of models expanded, and these phenomena and their similarities to core cognitive phenomena became better understood (e.g., Akagi 2018; Boden 2006; Clark 2013; Damásio 1994; Hetmański 2018). Affect, allostasis, and stress, although still not entirely uncontroversially part of the domain of cognitive science, are widely discussed within cognitive science, and frequently modelled by cognitive scientists. One needs to offer a compelling argument that it is somehow harmful for cognitive science to proceed this way, if one believes that this way of proceeding has been or has become a mistake-as perhaps Ramsey (2017) and some of those offering highly conservative definitions of cognition (e.g., Adams and Aizawa 2001) do.

I have a third, weaker, argument against characterizing cognition by specifying a prescribed target domain: I agree with Keijzer (2021) that mind is an inappropriate target domain for cognitive science, but I see no way of settling the dispute between revisionary target-driven proposals without undercutting the motivations for offering a target-driven proposal in the first place.
So far as I can see, both Sims and Kiverstein (2021) and Corcoran et al. (2020) describe categories of phenomena that could support orderly, interesting sciences. Sims and Kiverstein (2021) argue that Corcoran et al.'s (2020) proposal is 'not unprincipled, [but] nevertheless unwarranted, and certainly not implied by the FEP' (2021, p. 24); I see no reason that Corcoran and colleagues could not say exactly the same of Sims and Kiverstein's proposal. Both go to great lengths to show in a principled way that their proposals are tied to a scientifically and theoretically interesting FEP-theoretic category, but this is not enough to draw a conclusion about what cognitive science ought to study-a question about which the FEP has no direct implications. The only way that I can imagine the dispute being settled is by considering more directly what it is useful, feasible, and interesting for cognitive science to study given its models, methods, goals, and pretheoretic aims: exactly the sort of concerns that drive targetless accounts of cognition and cognitive science. It is here, if anywhere, that I believe that the FEP has the most direct implications for the concept of cognition and the direction of cognitive science. If nothing else, the FEP provides formal tools that make it feasible for cognitive science to study a broader range of phenomena, because it uses models and tools that are not too alien to cast phenomena like allostasis and homeostasis as interestingly similar to paradigm cases of cognition. However, this is only one consideration among many for determining what it is presently a good idea for cognitive scientists to study.

Targetless characterizations of cognition

If I am right, and we ought not to be looking for a target domain that can reasonably be prescribed to cognitive science, then this removes one significant possible role for a characterization of cognition. This does not, however, mean that there is no interesting role for a characterization of cognition on the targetless view (see also Akagi 2018; Allen 2017). One possible role for a targetless characterization of cognition, which I raise mainly to dismiss, is to put forward one's best guess about the final subject-matter of cognitive science. The problem with this proposal is that it is, I hope, clear that if targetless proposals are correct and cognitive science leads the way on setting its domain, no-one is in a remotely good position to make such a guess about its ideal, eventual endpoint at the current time. Characterizations of cognition can be useful without being target-driven and without guesswork. For example, Allen (2017) suggests that characterizations of cognition should play such roles as 'orienting newcomers to phenomena of potential interest', for which they need be neither precise nor exceptionless-he goes through the example of the characterization of cognition as 'adaptive information processing', a characterization as imprecise as 'cognition', and arguably with exceptions, such as the maladaptive elements of human psychology. Such a characterization helps to highlight the general range of things that cognitive scientists are interested in, and also to highlight why they are interested in those things. The imprecision of this characterization actually helps it to play its job. For example, 'adaptive information processing' is imprecise enough that it can be stretched to cover new kinds of case, especially by taking liberal views of 'adaptive' or 'information processing'.
This affords more possibilities for creative work that highlights hitherto-overlooked similarities between uncontroversially cognitive capacities and other capacities not (yet) considered cognitive. There are other, more general reasons that characterizations of cognition benefit from imprecision. In very general terms, cognitive science is interdisciplinary and expansive, and because of this, at risk of disintegration and dissolution if its subsidiary disciplines cease to interact appropriately (as acknowledged by Allen 2017). In light of this, working characterizations of key concepts might also serve to facilitate intertheoretical integration, communication, including communication of different theoretical perspectives, and other 'bridging' roles that form productive links between disciplines in order to resist disintegration. Importantly, many of these roles are in fact better played by imprecise concepts (Haueis 2021; Neto 2020). The reason for this is that imprecision gives space for different researchers and disciplines to conceive of their subject-matter in significantly different ways, while still seeing each other as studying 'the same thing' (and therefore worth talking to). However, to play these roles-conveying the general idea of what cognitive scientists are interested in and why to newcomers, and helping unify the discipline-it is clearly possible for a characterization to be too imprecise. If a characterization is too imprecise, it will not be informative, and it may either fail to clearly apply to paradigm cases of cognition, or be so broad as to be stretched to cover cases that are clearly not cases of cognition. This will not serve to orient newcomers, nor help to integrate the discipline. Ideally, then, what we want is a characterization of cognition with just the right amount of imprecision.

Akagi (2018) offers a proposal for how to characterize cognition (albeit, not a characterization of cognition) that can help to solve this problem. Akagi agrees with Allen that characterizations of cognition are of limited use to working cognitive scientists. Instead, Akagi thinks that the main benefits of characterizing cognition are epistemological benefits for others, including philosophers and the public. In particular, Akagi thinks that a characterization of cognition should make explicit the current implicit consensus among cognitive scientists about their domain. This is, of course, difficult in the face of wildly different views of which systems, capacities, and phenomena are cognitive. To preempt this worry, Akagi suggests that characterizations of cognition should be 'ecumenical'-that is, they should capture the dispute, rather than try to gloss over it and take a side. The problem with any 'partisan' proposal that takes a side, in Akagi's view, is that it represents as uncontroversial and established what is in fact highly controversial and not-yet-established. Instead, Akagi claims, a characterization of cognition should apply exactly as clearly and uncontroversially to any given case as that case is, in fact, a clear and uncontroversial instance of cognition-an ecumenical characterization should apply entirely uncontroversially to a paradigm case of cognition, and highly controversially to a highly controversial case of cognition. It should, in this way, reflect the current state of the art by capturing the nature of the disputes.
It should, I think, be obvious at this stage that the characterizations of cognition offered by Sims and Kiverstein, and Corcoran and colleagues, do not stack up well against the desiderata on targetless characterizations of cognition. Between humans and E. coli, most living systems are highly controversial as instances of cognition, and therefore ought to be part of the penumbra of an imprecise characterization of 'cognition'. Both proposals significantly reduce the penumbra, and decide one way or the other on these controversial cases (they might, in this sense, be understood as precisifying proposals; Fine 1975). They offer proposals that are partisan, and unduly precise, if they are understood as targetless characterizations of cognition-although as I have already stated, I think they are better interpreted as target-driven characterizations, and as failing because cognitive science is in no need of such characterizations.

Philosophical prescriptions in cognitive science

In this closing section, I wish to consider two interrelated objections. The first is that a mark of the cognitive is required for settling disputes that are strictly internal to cognitive science, and held among cognitive scientists. The second is that my position is wrongly in tension with or opposed to philosophers offering prescriptions or guidance to cognitive science. Let us begin with the idea that a mark of the cognitive is required for settling genuine and legitimate disputes within cognitive science. The idea is that there are many disputes internal to cognitive science over whether phenomena are cognitive, such as the question of the boundaries of cognitive systems, and of the potential cognitive status of simple living creatures. Generally, the way such arguments proceed is by showing that present methodological concerns and empirical findings currently underdetermine the placement of some boundary between the cognitive and the noncognitive (see especially Varga 2018). They then appeal to a characterization of cognition. The characterizations offered are generally justified by an appeal to philosophical analyses and intuitions (Adams and Aizawa 2010; Aizawa and Adams 2005), or to the potential explanatory role and other theoretical benefits of the category/construct/property identified by the proposed characterization (Corcoran et al. 2020; Sims and Kiverstein 2021). My proposal is revisionary with respect to the current practice of cognitive science in pretty much only one way: I think that this process is wrongheaded, and unable to legitimately settle the disputes. Any appearance of settling the disputes is entirely spurious. Allen (2017) and Akagi (2018) focus on criticising the more 'philosophical' proposals and approaches. Appealing to the explanatory and theoretical benefits also fails because cognition is a subject-matter term that, like psychological, chemical, biological, geological, and physical, we should not expect to have any great explanatory role-it is a mistake, owed to the general overemphasis on explanatory terms in historical views and philosophies of science, to try to treat every legitimate scientific concept as playing such an explanatory role (Spencer 2016). To put the issue informally, demarcating a subject-matter is a big and important enough job that we should not overload the concept with further roles that will inevitably place competing demands on it.
Of course, it is by placing further demands on the concept, whether these are reached by philosophical analysis or scientific-explanatory work, that we get the constraints required to settle the motivating disputes. This is why trying to impose further constraints is tempting: it makes us able to generate something that looks like an answer. However, these further constraints are not actually relevant constraints on the concept of cognition as it is used to demarcate the subject-matter of cognitive science. One reason I am highly suspicious of this process of seeking and imposing further constraints on the concept in order to settle the dispute is that both sides of any given dispute are generally equally able to justify their position, because there is no principled basis in these disputes regarding where to find these constraints. This is why some authors feel entitled to find their further constraints in traditional philosophical analyses, others in evolutionary theory, and others in undesirable cultural views of plants. Nothing in the process precludes a post hoc grab-bag of principles picked to justify one's already-chosen answer to the dispute in question. The solution is patience, and a tolerance for uncertainty. Many of these 'disputes' represent a divergence between research programmes with competing commitments and interests. Surely, the thought seems to go, only one of them can be right, and we should try to work out which. The problem is that we do not know ahead of time which is right-or even that only one is, since the appearance of competition may turn out to be spurious. We cannot generally determine which is the correct research programme ahead of time, and have to pursue those competing research programmes to settle (and normally also recast) the disputes between them (e.g., Chang 2004, 2012, 2017). These disputes therefore look underdetermined by the practicalities of cognitive science and our current empirical and theoretical knowledge because they are in fact underdetermined. Adding arbitrary constraints that let us generate precise 'answers' to these disputes is not settling these disputes but obscuring their existence and their nature.

Of course, some research programmes are unmotivated, illicitly motivated, obviously hopeless, irrelevant to the goals of cognitive science, or deeply impractical. I am not against cognitive scientists or philosophers pointing this out, and I think that often philosophers are the best-placed researchers to do so (Schliesser 2019). Offering a 'mark of the cognitive' is not a good way of offering such guidance: this is, in many ways, the most important claim in this paper. Identifying a mark of the cognitive requires prescribing a target domain and/or making a guess about the ultimate future of cognitive science; when arguing about what cognitive science should study, totalizing top-down prescriptions and Oracle-style guesses about the ultimate future of cognitive science are probably irrelevant distractions, and definitely needlessly more complicated than the question of what we ought presently to study. My basic position on many discussions of the mark of the cognitive is that they are attempts to offer legitimate guidance to cognitive science, but framed in an unhelpful and incorrect manner. Consider, for example, one of the more recent disputes between Clark (2010) and Adams and Aizawa (2010).
There, Adams and Aizawa argue that cognitive science should limit itself to what's within the skin, on pain of having a subject-matter so broad that the discipline falls apart. Many of their past arguments have been framed in terms of core, and according to them essential, features (in particular, underived intentionality) of what they claim are the true target phenomena of cognitive science. The issue they raise in 2010, however, is a more practical one-it is an attempt to warn cognitive scientists away from making what they see as a mistake that might eventually undermine their discipline's very existence. This latter, practical worry is 'laundered' through a dubious philosophical analysis, thus obscuring the practical point by burying it under a needlessly complicated and contentious theoretical edifice. This more 'practical' understanding of the debate over the boundaries of cognition also suggests a more practical interpretation of Clark and Chalmers' (1998) original argument: that there is no principled reason for cognitive science not to expand its domain beyond the skin, and several potential benefits if it does so. Likewise, Sims and Kiverstein's proposal can perhaps, in a certain light, be understood as claiming that there is no principled reason from the FEP for cognitive science to limit the living systems that it studies to only those capable of explicit counterfactual reasoning.

Separating prescriptions and characterizations

I have argued that characterizations of cognition, understood as the subject-matter of cognitive science, ought not to aim to specify what cognitive science should study-they ought not to try to specify a target domain for cognitive science, including by trying to settle as-yet-unsettled disputes in cognitive science. While Corcoran et al. (2020) and Sims and Kiverstein (2021) offer principled arguments for interesting potential target domains for cognitive science, cognitive science does not need a target domain. Instead, the proper domain of cognitive science will be gradually revealed by the progress of cognitive science. This does not mean that there is no role for characterizations of cognition: they can play high-level roles in intertheoretic integration, highlighting phenomena of interest, and summarizing the state of the art. For each of these roles, however, they are well-served by being imprecise and nonpartisan: features that the proposals of Corcoran and colleagues and Sims and Kiverstein lack. Importantly, an insistence on targetless characterizations of cognition is not allied to a blanket ban on offering prescriptions to cognitive science. Instead, it suggests that prescriptions should be more fine-grained, more practical, and often more short-term. Prescriptions for cognitive science are simply not best expressed as characterizations of cognition.
2022-12-19T16:02:48.273Z
2022-12-17T00:00:00.000
{ "year": 2022, "sha1": "6887be192cf73a10fbb41af545d1730f98003ed7", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10539-022-09889-4.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "786fe0db642f4341fb00c8a828616331e14d829c", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [] }
57373826
pes2o/s2orc
v3-fos-license
Interest Point Detection based on Adaptive Ternary Coding

In this paper, an adaptive pixel ternary coding mechanism is proposed and a contrast-invariant and noise-resistant interest point detector is developed on the basis of this mechanism. Every pixel in a local region is adaptively encoded into one of three statuses: bright, uncertain and dark. The blob significance of the local region is measured by the spatial distribution of the bright and dark pixels. Interest points are extracted from this blob significance measurement. By labeling the ternary statuses of bright, uncertain, and dark, the proposed detector shows more robustness to image noise and quantization errors. Moreover, the adaptive strategy for the ternary coding, which relies on two thresholds that automatically converge to the median of the local region, enables this coding to be insensitive to the image local contrast. As a result, the proposed detector is invariant to illumination changes. State-of-the-art results are achieved on the standard datasets, and also in the face recognition application.

I. INTRODUCTION

A well-designed interest point detector is supposed to effectively represent images across variations of scale and viewpoint, cluttered background and occlusion [1], [2]. For years, interest point detectors have been extensively studied and widely used in many applications [3], [4], [5], [6], [7]. Nevertheless, an open question remains about extracting stable points under illumination variations. The Hessian-Laplace/Affine [8], Harris-Laplace/Affine [8], SIFT [9] and SURF [10] detectors are built upon the derivatives of the Gaussian filter. Either the first or the second derivative of the Gaussian filter is used to compute the strength of the image local contrast. As the Gaussian filter responds proportionally to the image local contrast, these detectors perform poorly in detecting low-contrast structures, even if these structures are stable under different variations and significant in computer vision applications. Moreover, these detectors are susceptible to abrupt structures and image noise. To mitigate the influence caused by image noise and nearby image structures, a rank-ordered Laplacian of Gaussian filter is proposed in [11]. However, such a detector still partially relies on the image local contrast. To address the problems caused by illumination changes in particular, image segmentation has been utilized in designing interest point detectors. For example, the MSER [12], [13], PCBR [14] and BPLR [15] detectors use watershed-like segmentation algorithms to extract image structures. However, these detectors' performance is unsatisfactory under image blurring, in which the boundaries of image structures are unclear [3]. Self-dissimilarity and self-similarity of image patches are used in the SUSAN [16], FAST [17] and self-similar [18] detectors to alleviate the problems caused by lighting variation. In particular, the SUSAN and FAST detectors use the number of pixels that are dissimilar from that in a region center to detect corners. The weakness of these two detectors is that they are not scale-invariant and are inefficient in detecting blob-like structures. Although local pixel variance is adopted in [18] to estimate the self-similarity, the robustness of this detector is uncertain when there are strong abrupt changes within the image patch.
Considering the above-mentioned limitations of existing detectors, this paper aims to develop a contrast-invariant and noise-resistant interest point detector. Inspired by the recent work on the Iterative Truncated Mean (ITM) algorithms [19], [20], [21], [22], an adaptive ternary coding (ATC) is proposed to adaptively encode the pixels into bright, dark and uncertain statuses. The ternary status of each pixel in a local region is determined by dynamic thresholds that are automatically computed by the ITM algorithm. Interest points are extracted from the blob significance map that is measured by the number of bright and dark pixels. As expected, the proposed ATC shows robustness to illumination variations and is effective in dealing with cluttered structures.

A. Problem Formulation

Blobs, as shown in Fig. 1(b), are the image local structures with the majority of the bright (or dark) pixels concentrating in the center while the majority of the opposite intensity resides in the peripheral region. Such property of the blob structure is preservable under various variations. Moreover, blob-like structures are widely spread over a pictorial image. These properties make the blob-like structure suitable for anchoring the local descriptor [23], [9] under various image conditions. Hence, many works have been proposed to extract blob-like structures from images [9], [12], [18], [24]. However, the linear-filter-based detectors, such as SIFT and SURF, are sensitive to illumination changes. In contrast, the relative bright-dark order of pixels in a local region is more stable than the pixel intensity value under illumination changes. In view of this, we propose to detect interest points using the bright/dark labels of pixels. An issue that needs to be addressed is how to differentiate and label the pixels as bright or dark ones. One way is to dichotomize the pixels into bright and dark ones by a certain threshold, which could be set by the mean or median value of the local region. Take the image patch (shown in Fig. 1(b), as a zoom-in from Fig. 1(a)) as an example: the bright and dark pixels dichotomized by the median value are identified in Fig. 1(c). The median is more robust to outliers and abrupt variations than the mean. However, the median-based threshold is sensitive to quantization error because of its inefficiency in suppressing this type of noise. This may lead to unreliable labelling. To solve this problem, we propose to introduce a fuzzy label for the pixels that are not clear enough to be labelled into either the bright or the dark set. This results in our proposed adaptive ternary coding algorithm.

B. Adaptive Ternary Coding Algorithm

Instead of using one threshold to binarize the pixels into bright or dark labels, a pixel intensity margin spanned by two thresholds is proposed to ternarize the pixels, as

$$\theta(I, \lambda_l, \lambda_h) = \begin{cases} 1, & I > \lambda_h \\ 0, & \lambda_l \le I \le \lambda_h \\ -1, & I < \lambda_l \end{cases} \qquad (1)$$

where I is the pixel intensity value, λ_l and λ_h are the lower and upper bounds for the pixel ternarization, and the labels 1, 0 and −1 denote bright, uncertain and dark pixels, respectively. Pixel intensities that are close to the median value in a local region are labeled as uncertain to reduce their sensitivity to noise. Properly choosing the two thresholds is essential in the ternarization. The two thresholds should be invariant to illumination changes, and should be located on both sides of the median value to ensure the correctness of pixel labeling. Let the half-width of the margin spanned by λ_l and λ_h be τ_λ = (λ_h − λ_l)/2, and the mean of λ_l and λ_h be µ_λ = (λ_h + λ_l)/2. Choosing λ_l and λ_h is equivalent to choosing µ_λ and τ_λ.
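As a concrete illustration, the ternary rule of Eq. (1) takes only a few lines of code. The following is a minimal sketch in Python, not the authors' implementation; the function and variable names are ours:

    import numpy as np

    def ternarize(patch, lam_l, lam_h):
        """Encode each pixel as bright (+1), uncertain (0) or dark (-1), per Eq. (1)."""
        patch = np.asarray(patch)
        labels = np.zeros(patch.shape, dtype=np.int8)
        labels[patch > lam_h] = 1    # bright: intensity above the upper bound
        labels[patch < lam_l] = -1   # dark: intensity below the lower bound
        return labels                # intensities within [lam_l, lam_h] stay uncertain (0)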
One solution for the ternary coding is setting µ_λ equal to the median of the local region and τ_λ equal to some fixed threshold. However, this has two limitations: 1) computing the median is time-consuming, and 2) a fixed threshold cannot adapt to contrast changes. Compared to the median, the mean µ of the pixel intensities in a local region is easier to compute. By setting τ_λ equal to the Mean Absolute Deviation (MAD) τ of the pixel intensities from the mean µ, the two thresholds λ_l = µ − τ and λ_h = µ + τ are located on both sides of the median [19] and are invariant to illumination changes. Moreover, by iteratively truncating the extreme samples with the ITM algorithm proposed in [19], [20], the mean of the truncated data starts from the mean and approaches the median of the input data. Meanwhile, the MAD of the truncated data converges to zero [19], [20]. As a result, the two boundaries λ_l and λ_h computed by the ITM algorithm automatically converge to the median while keeping the median within the margin spanned by λ_l and λ_h. Therefore, this margin (as shown in Fig. 1(d)) separates the pixels into bright and dark ones while tolerating noise and quantization errors. Given these advantages of the ITM filter, we propose an adaptive ternary coding algorithm and a blob significance measure based on the ITM algorithm, which are presented as follows.

Let S_1 and S_2 be the central region and the corresponding peripheral ring of a filter mask centered at (0, 0). For the blob detection, both S_1 and S_2 are chosen to be circular, and the radius of the outside ring is √2 times that of the inner one, to make the areas of the two regions the same. Two pixel sets centered at x are defined as

$$I_i(x) = \{\, I(x - m) \mid m \in S_i \,\}, \quad i = 1, 2,$$

where x is the region center and I(x − m) is the pixel gray value at the location x − m. In order to ensure that the two pixel sets I_1(x) and I_2(x) have the same effect on estimating the thresholds for pixel labeling, the weighted ITM algorithm [20] is adopted to give them an effectively equal number of pixels: the pixel numbers n_1 and n_2 in the two sets I_1(x) and I_2(x) are used to weight the pixels in I_2(x) and I_1(x), respectively. The proposed adaptive pixel ternary coding is shown in Algorithm 1.

Algorithm 1: Adaptive Pixel Ternary Coding for the Proposed Detector
Input: I_1(x), I_2(x), n = n_1 + n_2, k = 0;
Output: Blob significance B(x, k);
1 do
2   Compute the weighted mean µ_w = (n_2 Ī_1(x) + n_1 Ī_2(x))/n;
3   Compute the weighted dynamic threshold τ_w (the weighted MAD about µ_w), and set λ_l = µ_w − τ_w, λ_h = µ_w + τ_w;
4   Truncate the samples of I_1(x) and I_2(x) lying outside [λ_l, λ_h] to the nearest bound;
5   Compute the blob significance B(x, k) by (2), and set k ← k + 1;
6 until the stopping criterion described below is satisfied.

The lower and upper bounds λ_l and λ_h in Algorithm 1 are used to ternarize the pixels into bright, uncertain or dark ones by (1), as shown in Fig. 1(d). A bright pixel is one that is larger than the higher threshold. A dark pixel is one that is smaller than the lower threshold. Blob structures have the attribute that the majority of bright (or dark) pixels are concentrated in the inner region while the majority of the opposite ones lie in the surrounding region. As a result, we measure the blob significance by the distribution of the bright and dark pixels. First, the dominance of bright/dark pixels in each of S_1 and S_2 is measured by the difference of the numbers of bright and dark pixels in the corresponding region. The bright and dark pixels are respectively labeled as 1 and −1 by (1), and the uncertain pixels are labeled as 0.
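The core of Algorithm 1 can be sketched as follows. The truncation step clips the extreme samples to [µ_w − τ_w, µ_w + τ_w], following the ITM rule of [19], [20]; this is an illustrative reading of the algorithm with assumed names, not the authors' code:

    import numpy as np

    def weighted_itm_step(vals, weights):
        """One weighted ITM iteration: weighted mean, weighted MAD, truncation."""
        wsum = weights.sum()
        mu = (weights * vals).sum() / wsum                 # weighted mean mu_w
        tau = (weights * np.abs(vals - mu)).sum() / wsum   # weighted MAD tau_w
        return mu, tau, np.clip(vals, mu - tau, mu + tau)  # truncate extreme samples

    def atc_bounds(I1, I2, max_iters=20):
        """Run the weighted ITM iteration on the joint inner/outer pixel sets and
        return the per-iteration ternarization bounds (lam_l, lam_h)."""
        n1, n2 = len(I1), len(I2)
        vals = np.concatenate([I1, I2]).astype(float)
        # Weight inner pixels by n2 and outer pixels by n1, as in the paper,
        # so that the two regions contribute equally to the threshold estimation.
        weights = np.concatenate([np.full(n1, n2), np.full(n2, n1)]).astype(float)
        bounds = []
        for _ in range(max_iters):   # the paper's stopping criteria are simplified here
            mu, tau, vals = weighted_itm_step(vals, weights)
            bounds.append((mu - tau, mu + tau))
        return bounds

With the bounds in hand, the ternarize function sketched earlier labels the pixels at each iteration.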
Therefore, the normalized dominances of the bright/dark pixels in S_1 and S_2 are $\frac{1}{n_1}\Theta(I_1(x), \lambda_l(k), \lambda_h(k))$ and $\frac{1}{n_2}\Theta(I_2(x), \lambda_l(k), \lambda_h(k))$, respectively, where Θ sums the ternary labels of a pixel set, and λ_l(k) and λ_h(k) are the lower and upper bounds in the kth iteration. Second, these two parts are linearly combined as the blob significance in the kth iteration:

$$B(x, k) = \frac{1}{n_1}\Theta(I_1(x), \lambda_l(k), \lambda_h(k)) - \frac{1}{n_2}\Theta(I_2(x), \lambda_l(k), \lambda_h(k)). \qquad (2)$$

From Algorithm 1 it is seen that the margin between the lower and upper bounds equals 2τ_w. It monotonically decreases to zero as the number of iterations increases [20]. In the first few iterations, the margin is large, as only a few extreme samples have been truncated by the ITM algorithm. By increasing the number of iterations, both the lower and higher thresholds converge to the median value of the local region. As a result, the margin between these two thresholds shrinks, and the number of pixels categorized into the intermediate group decreases. The blob significance B(x, k) (shown in Fig. 1(e)) is a function of the number of iterations k. The value of B(x, k) at the iteration maximizing |B(x, k)| is selected as the blob significance map for interest point detection, defined as

$$B(x) = B(x, k^*), \quad k^* = \arg\max_k |B(x, k)|. \qquad (3)$$

However, exhaustively searching for the global peaks over all iterations is time-consuming. The following stopping criteria are used so that the global maximum value is achieved in most cases within a reasonable number of iterations. Let I(x) = I_1(x) ∪ I_2(x), let the corresponding weight set be w = {n_2, ..., n_2 (n_1 times), n_1, ..., n_1 (n_2 times)}, and let I_h(x) and I_l(x) be the two subsets of I(x) separated by the weighted mean. Let w_h and w_l denote the summation of the weights of I_h(x) and I_l(x), respectively. One stopping criterion [20], which enables the truncated mean to be close to the weighted median, is a balance condition on w_h and w_l, denoted C_1. In some cases, after C_1 is met, the amplitude of the blob significance B(x, k) still increases because the number of pixels with uncertain status is still large; therefore, an additional constraint, denoted C_2, is applied. The third condition, C_3, limits the maximum number of iterations to a value chosen from experiment. The truncating procedure of Algorithm 1 is terminated once these conditions are satisfied.

From (2) we find that the blob significance value B(x) is within the range [−2, 2]. For a bright region, B(x) > 0, and the maximum value of its blob significance is 2. Similarly, a local region is dark if B(x) < 0, and the minimum value of its blob significance is −2.

C. The Proposed ATC Detector

1) Ridge and Edge Suppression: Interest points are extracted by detecting the local peaks of the blob significance map (3). In order to suppress the unreliable points detected on ridges and edges, a ratio r, which measures how much a peak value differs from the blob significance in its surrounding regions, is used. Small r means that the peak value is quite similar to that in its surrounding regions. We remove such candidates if r < 0.05, which is chosen empirically.

2) Algorithm for ATC Detector: Detecting interest points at multiple scales is essential in many vision applications where the same objects can appear at different sizes. By changing the size of the local image patches S_1 and S_2, the ATC detector can identify local structures of various scales. Similar to that done in [25], we implement the multi-scale ATC detector by detecting the points at each scale. The procedures of the proposed ATC detector are summarized as follows (a sketch is given after this list):
1) Generate the blob significance map at multiple scales by Algorithm 1.
2) Detect the local peaks of the blob significance in the spatial dimensions.
3) Remove the peaks on ridges and edges by the ratio criterion above. The remaining peaks are the interest points to be detected.
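Putting the pieces together, here is a sketch of the blob significance of Eqs. (2)-(3), reusing the ternarize and atc_bounds helpers above. The fixed iteration budget stands in for the stopping criteria C_1-C_3, and, as before, all names are ours:

    import numpy as np

    def blob_significance(I1, I2, max_iters=20):
        """B(x): the value of B(x, k) at the iteration k maximizing |B(x, k)|."""
        n1, n2 = len(I1), len(I2)
        I1, I2 = np.asarray(I1, float), np.asarray(I2, float)
        best = 0.0
        for lam_l, lam_h in atc_bounds(I1, I2, max_iters):
            dom1 = ternarize(I1, lam_l, lam_h).sum() / n1  # dominance in inner region
            dom2 = ternarize(I2, lam_l, lam_h).sum() / n2  # dominance in outer ring
            b = dom1 - dom2                                # B(x, k), within [-2, 2]
            if abs(b) > abs(best):
                best = b
        return best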
A. Repeatability

Two detected regions are regarded as repeated if their overlap is above 60%, as suggested in [26]. For an image pair {Img1, Img2}, the repeatability score is defined as $p_r/\max\{p_1, p_2\}$, where p_r is the number of repeated points, and p_1 and p_2 are the numbers of points detected in the common area and scale of Img1 and Img2, respectively. We use the repeatability to evaluate the detectors under different variations. The three datasets 'wall', 'boat' and 'leuven' from the Oxford database in [26] and the 'desktop' and 'corridor' datasets from [27] with complex illumination changes are used for testing. Similar to [18], half-sampled images are used for evaluation. For the ATC detector, interest points are extracted over 5 octaves by half-sampling the previous octave. In each octave, local extrema are detected at 3 scales: {σ_n}_{n=1,2,3} = {4, 5, 6}. The ATC detector is compared with five detectors consisting of the SIFT [9], Harris-affine (HR-A) [8], Hessian-affine (HS-A) [8], MSER [12] and ROLG [11] detectors. For each dataset, the detector parameters are adjusted so that roughly the same number of interest points (shown in Table I) are detected on the first image for all detectors.

TABLE I. Numbers of interest points detected on the first image of each dataset, one column per detector (the detector-to-column correspondence is not preserved in the extracted text):
  wall      1508  1460  1520  1568  1593  1514
  boat      1546  1501  1549  1429  1524  1501
  leuven    1527  1426  1476  1501  1648  1488
  desktop   1539  1539   868  1526  1698  1451
  corridor  1526  1564  1540  1544  1583  1578

The number of interest points detected by the HR-A detector on the first image of the 'desktop' set is smaller than for the others, although its contrast threshold is already set to zero, due to the dark illumination of this image. The repeatability results are shown in Fig. 2.

B. Application to Face Recognition

To demonstrate the implications of the proposed ATC detector, we evaluate it in the face recognition application [28], [29], [30]. Specifically, the ATC detector is compared with the SIFT [9], HR-A [8], HS-A [8], MSER [12] and ROLG [11] detectors. As the default settings produce too few interest points for face recognition for all detectors, the thresholds that are used to remove the low-response interest points are set to zero for all detectors in the present experiment. For the MSER detector, the minimum size of its output region is set to 1/4 of the default setting to ensure it is applicable to all of the testing databases. All the detected interest points are described by the SIFT descriptor. The matching algorithm for face recognition, which consists of interest point matching and geometric verification with a Hough transform, is described in [9]. Four standard face recognition databases, including AR [31], GT [32], ORL [33] and FERET [34], are used to evaluate these detectors. The database settings are shown in Table II.

TABLE II. Face database settings:
  Database  Image size  Subjects  Gallery  Test
  AR        60×85       75        7        7
  GT        60×80       50        8        7
  ORL       50×57       40        5        5
  FERET     60×80       1194      1        1

The face images in these databases have variations in illumination, expression and pose. The recognition rate, which is the percentage of correctly identified test images from the rank-1 best-matched gallery, is used to measure the performance of the interest point detectors. Table III (recognition rates on the four databases; its numerical entries are not preserved in the extracted text) shows that the proposed detector achieves the highest recognition rate over the four databases. It suggests that the interest points detected by the proposed ATC detector are more robust and discriminative compared to the others.
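The repeatability score itself is a one-line computation once the repeated-region count is known; a minimal sketch (the 60% overlap test is assumed to be computed elsewhere):

    def repeatability(p_r, p_1, p_2):
        """Repeatability score p_r / max(p_1, p_2), where p_r counts region pairs
        whose overlap exceeds 60% [26], and p_1, p_2 are the numbers of points
        detected in the common area and scale of the two images."""
        return p_r / max(p_1, p_2)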
IV. CONCLUSIONS

In this paper, an interest point detector is designed based on the adaptive ternary coding (ATC) algorithm, which is inspired by the ITM algorithm and categorizes the pixels into the bright, dark and uncertain statuses. As the blob significance is measured by counting the numbers of bright and dark pixels, the detection result is invariant to illumination changes. Evaluations on the Oxford dataset [26] and the complex illumination dataset in [27] show that the ATC detector outperforms the other five detectors in terms of repeatability under the variations caused by scale, viewpoint and illumination changes. The superior performance of the proposed detector is also verified in the application of face recognition.
2018-12-31T20:00:00.000Z
2018-12-31T00:00:00.000
{ "year": 2018, "sha1": "bbc538f47a1677ea63c4cfd41ea79b23b079c29c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "24d62eff4e7361586de40b34883cdf5d16ff40e9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119519596
pes2o/s2orc
v3-fos-license
Joule-Thomson expansion of charged Gauss-Bonnet black holes in AdS space

The Joule-Thomson expansion process is studied for charged Gauss-Bonnet black holes in AdS space. Firstly, in five-dimensional space-time, the isenthalpic curve in the $T-P$ graph is obtained and the cooling-heating region is determined. Secondly, the explicit expression of the Joule-Thomson coefficient is obtained from the basic formulas of enthalpy and temperature. Our methods can also be applied to the van der Waals system as well as other black hole systems. The inversion curve $\tilde{T}(\tilde{P})$, which separates the cooling region from the heating region, is obtained and investigated. Thirdly, an interesting dependence of the inversion curves on the charge $(Q)$ and the Gauss-Bonnet parameter $(\alpha)$ is revealed. In the $\tilde{T}-\tilde{P}$ graph, the cooling region decreases with charge, but increases with the Gauss-Bonnet parameter. Fourthly, by applying our methods, the Joule-Thomson expansion process for the $\alpha=0$ case in four dimensions is studied, where the Gauss-Bonnet AdS black hole degenerates into the RN-AdS black hole. The inversion curves for van der Waals systems consist of two parts: one has positive slope, while the other has negative slope. However, for black hole systems, the slopes of the inversion curves are always positive, which seems to be a universal feature.

I. INTRODUCTION

Thermodynamics of black holes has been an interesting and challenging topic since the discovery of black hole entropy [1], the four laws of black hole thermodynamics [2], and the Hawking radiation [3] in the 1970s. It reveals fundamental connections among classical thermodynamics, general relativity, and quantum mechanics. In particular, with the development of the AdS/CFT duality [4][5][6], this connection has been deepened and much attention has been attracted to AdS black holes. In AdS space, there exists a Hawking-Page phase transition between the stable large black hole and thermal gas [7], which is explained as the confinement/deconfinement phase transition of a gauge field [8]. Considering AdS black holes that are electrically charged, rich phase structures were found by Chamblin et al. [9,10]. It was discovered that the phase transition behavior of a charged AdS black hole is reminiscent of the liquid-gas phase transition in a van der Waals system [11]. In the extended phase space, where the cosmological constant is identified as pressure [12], further investigation of the P − V critical behavior of a charged AdS black hole supports the analogy between black holes and the van der Waals liquid-gas system. It is found that the black hole and the van der Waals system share not only the same P − V diagram, but also the same critical exponents [13]. This analogy has been generalized to different AdS black holes, such as rotating black holes, higher-dimensional black holes, Gauss-Bonnet black holes, f(R) black holes, black holes with scalar hair, etc. Apart from the phase transition and critical phenomena, the analogy between black holes and the van der Waals system has recently been creatively generalized to the well-known Joule-Thomson expansion process [57]. In the Joule-Thomson expansion of classical thermodynamics, gas at a high pressure passes through a porous plug to a section with a low pressure, and during the process the enthalpy is unchanged. An interesting phenomenon during the Joule-Thomson expansion process is that the T − P graph is divided into two parts: one is the cooling region, the other is the heating region.
There is an inversion curve separating the cooling and heating regions. For charged AdS black holes [57] and Kerr-AdS black holes [58], the isenthalpic expansion process and the inversion curve were investigated. These works were then generalized to quintessence charged AdS black holes [59], holographic superfluids [60] and charged AdS black holes in f(R) gravity [61]. The results show that the inversion curves $\tilde{T}(\tilde{P})$ for all the above black hole systems have only positive slope. For the van der Waals system, by contrast, the inversion curves have both positive and negative slopes, which together form a closed curve ending on the pressure axis. We are curious about the missing negative slopes of the inversion curves, and eager to check whether quantum gravity effects will cure this problem or whether this feature of black hole systems is simply universal. So we will focus on Gauss-Bonnet Einstein-Maxwell gravity. Considering the effects of the Gauss-Bonnet term is interesting and important, since whatever quantum gravity may be, there will be higher-order corrections to the pure Einstein action, and the Gauss-Bonnet combination is a well-motivated candidate correction. What is more, these terms represent part of the 1/N correction to the large-N limit of the holographically dual SU(N)-like gauge field theory [62]. As a result, the investigation is interesting in its own right.

This paper is organized as follows. In Sec. II, we briefly review the D-dimensional charged Gauss-Bonnet black holes in AdS space. In Sec. III, we first investigate the Joule-Thomson expansion for the Gauss-Bonnet parameter α > 0 and D = 5 dimensional black holes: the isenthalpic curve is studied, two methods are introduced to derive an explicit Joule-Thomson coefficient, and the effects of the charge and the Gauss-Bonnet parameter on the inversion curves are studied. Then we investigate the Joule-Thomson expansion for the Gauss-Bonnet parameter α = 0 and D = 4 dimensional black holes by our new methods. Sec. IV is devoted to the conclusion and discussion.

II. GAUSS-BONNET BLACK HOLES IN ADS SPACE

Consider the action of the D-dimensional Einstein-Maxwell theory with a cosmological constant Λ and a Gauss-Bonnet term, where the Gauss-Bonnet parameter α_GB has dimensions of [length]² and the cosmological constant is $\Lambda = -\frac{(D-1)(D-2)}{2l^2}$. When α_GB = 0, the action degenerates into the Einstein-Maxwell theory in AdS space. When α_GB ≠ 0, we will work in D ≥ 5, since in D = 4 the Gauss-Bonnet term is topological and does not contribute to the field equations. The action admits a static black hole solution with maximal symmetry,

$$ds^2 = -Y(r)\,dt^2 + \frac{dr^2}{Y(r)} + r^2\, d\Omega^2_{D-2},$$

where dΩ²_{D−2} is the metric on a round (D−2)-sphere with volume ω_{D−2}, and the metric function Y(r) is given in Eq. (3). Notice that, since the m = q = 0 case defines the vacuum solution, for a given value of l, α cannot be arbitrary [63], but must be constrained by 0 ≤ 4α/l² ≤ 1, which should be carefully considered in the following investigation. When α = 0, using L'Hospital's rule, Eq. (3) reduces to the metric function of the familiar charged AdS black hole; m and q are related to the ADM mass M and the electric charge Q. The horizon radius r_+ of the black hole is the largest root of Y(r_+) = 0, which gives us an equation for the black hole mass M, Eq. (6), where the AdS radius l is replaced by the pressure $P = \frac{(D-1)(D-2)}{16\pi l^2}$; in this extended phase space, the black hole mass is treated as the enthalpy instead of the internal energy. The temperature is obtained from the first derivative of Y(r) at the horizon, $T = Y'(r_+)/(4\pi)$, which yields Eq. (7). In the following, we will only use Eq. (6) and Eq. (7) to find the isenthalpic curves and determine the inversion temperatures between the cooling and heating regions for the black hole system in the T − P plane.
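For reference, the extended-phase-space identifications and the thermodynamic relations behind the inversion-curve analysis are the standard textbook ones, quoted here in generic form; the paper's explicit expressions follow from inserting Eq. (6) and Eq. (7):

$$P=-\frac{\Lambda}{8\pi}=\frac{(D-1)(D-2)}{16\pi l^{2}},\qquad H\equiv M,$$
$$\mu=\left(\frac{\partial T}{\partial P}\right)_{H}=\frac{1}{C_{P}}\left[T\left(\frac{\partial V}{\partial T}\right)_{P}-V\right],$$

so that µ > 0 corresponds to cooling and µ < 0 to heating, and setting µ = 0 gives the inversion temperature $T_{i}=V\left(\partial T/\partial V\right)_{P}$, whose locus in the T − P plane is the inversion curve $\tilde{T}(\tilde{P})$.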
III. JOULE-THOMSON EXPANSION

The Joule-Thomson expansion for a black hole system is an isenthalpic process with fixed enthalpy H, which is identified as the black hole mass, H = M, in the extended phase space. Similar to the Joule-Thomson process with fixed particle number for van der Waals gases, we should consider the canonical ensemble with fixed charge Q. The Gauss-Bonnet parameter α will also be treated as a constant. Several equivalent methods can be used to derive the Joule-Thomson coefficient $\mu = (\partial T/\partial P)_H$; choosing any one of them and setting µ = 0, together with Eq. (6) and Eq. (7), determines the inversion curve.

When α = 0 and D = 4, the Gauss-Bonnet AdS black hole degenerates into the normal AdS black hole. Its Joule-Thomson expansion process was investigated in Ref. [57]. In this section, we use our methods to reinvestigate this Joule-Thomson expansion and double-check these methods. The starting point is still Eq. (6) and Eq. (7), now specialized to α = 0 and D = 4. The Joule-Thomson coefficient is then obtained by either method, and the two results are consistent with each other. Setting µ = 0, the only positive and real root for r_+ is obtained, from which the inversion curve $\tilde{T}(\tilde{P})$ follows; it is exactly the same as Eq. (44) in Ref. [57]. As in the α > 0, D = 5 case, the slopes of the inversion curves $\tilde{T}(\tilde{P})$ are always positive.

IV. CONCLUSION AND DISCUSSION

In this paper, we studied the Joule-Thomson expansion for charged Gauss-Bonnet black holes in AdS space. In the extended phase space, the cosmological constant is identified as pressure and the black hole mass as enthalpy. Thus we considered the expansion process with constant mass. Firstly, the Joule-Thomson expansion process (the isenthalpic curve) was depicted in the T − P graph, and the cooling-heating region was determined. Secondly, the explicit expression of the Joule-Thomson coefficient was obtained from the basic formulas of enthalpy and temperature. Thirdly, the dependence of the inversion curves on the charge and the Gauss-Bonnet parameter was investigated. An interesting result is depicted in Fig. 3, where the slope of the inversion curves increases with charge, but decreases with the Gauss-Bonnet parameter. As a result, the effects of the charge and the Gauss-Bonnet parameter are different: in the $\tilde{T}-\tilde{P}$ plane, the charge decreases the cooling region, while the Gauss-Bonnet parameter increases it. Finally, we checked that the Gauss-Bonnet AdS black hole degenerates into the normal AdS black hole when α = 0. By applying our methods, the Joule-Thomson expansion process was reinvestigated in four-dimensional space-time. An analytical expression for the inversion curves was obtained, which is exactly the same as that in Ref. [57]. The slopes of the inversion curves are also always positive.

In the end, we would like to point out that the inversion curves of the Joule-Thomson expansion for the van der Waals gas system consist of two parts, which together form a closed curve ending on the pressure axis: one part is a lower branch with positive slope, the other is an upper branch with negative slope. For the details, one can refer to Ref. [57], or any textbook on thermodynamics and statistical physics. But as far as we know, the inversion curves of the Joule-Thomson expansion for black hole systems, such as charged AdS black holes [57], Kerr-AdS black holes [58], quintessence charged AdS black holes [59], charged AdS black holes in f(R) gravity [61], as well as our charged AdS black holes in Gauss-Bonnet gravity, have only positive slope. This seems to be a universal feature for black hole systems. The underlying physics behind the missing negative slope, i.e., the difference between the Joule-Thomson expansion of black hole systems and of the van der Waals system, deserves to be disclosed in future research.
2018-10-21T14:00:29.000Z
2018-05-14T00:00:00.000
{ "year": 2018, "sha1": "7ad4e97a3d7d8bf7a9e50eede409a2d88e80bab4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.05817", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7ad4e97a3d7d8bf7a9e50eede409a2d88e80bab4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250463374
pes2o/s2orc
v3-fos-license
China’s Vaccine Diplomacy and Its Implications for Global Health Governance The COVID-19 pandemic has wreaked havoc on global economy and human communities. Promoting the accessibility and affordability of vaccine via diplomacy is the key to mitigating the pandemic crisis. China has been accused of seeking geopolitical objectives by launching vaccine diplomacy. The definition of vaccine diplomacy is neutral by nature. China’s vaccine diplomacy is based on its holistic approach to national security and the importance China attaches to the “Belt and Road” Initiative. With a whole-of-government approach on both the bilateral and multilateral levels and marketization of vaccines, China’s vaccine diplomacy has immense implications for global health governance, in that it helps to narrow the global immunization vaccination gap and to promote human-right-based approach to global health governance. However, the sustainability of China’s vaccine diplomacy is questionable because of the Sino-American geopolitical competition and doubts over the efficacy of China’s vaccines. The escalation of power rivalry between China and the U.S. and the concerns over the efficacy of China’s vaccines forebode the gloomy future of China’s vaccine diplomacy. The COVID-19 pandemic has wreaked havoc on the world economy and global health security. To some degree, China has successfully developed vaccines to tackle the pandemic. However, many countries in the world, particularly the least-developed countries, lack the biotechnology to develop vaccines. No country will be safe until all countries are safe in terms of pandemic control. Therefore, it is both morally and realistically imperative for China to promote accessibility, affordability and availability of vaccines for the international community. China temporarily put the pandemic under control domestically and offered vaccines both within and beyond its borders. However, China has been accused of conducting vaccine diplomacy to expand its influence and achieve geopolitical objectives. This paper aims to explore the concept of China's vaccine diplomacy in the context of COVID-19 and to identify China's motivations to launch vaccine diplomacy and its implications for global health governance. The challenges of China's vaccine diplomacy ahead are also examined in the paper. Vaccine Diplomacy Vaccine diplomacy is not something new. It is as old as the vaccine itself [1]. Vaccine diplomacy has been launched to promote international health and international relations. Hotez defines vaccine diplomacy as "any aspect of global health diplomacy that relies on the use or delivery of vaccines" [2] (p. 43). Shakeel et al. describe vaccine diplomacy as the branch of global health diplomacy "that promotes the use and delivery of vaccines to achieve larger global health goals and shared foreign policy objectives" [3] (p. 82). Vaccine Healthcare 2022, 10, 1276 2 of 10 diplomacy contributes to global health security. The U.S.-Russia cooperation on vaccines to eradicate smallpox worldwide in the 1970s is a remarkable example of successful vaccine diplomacy [4] (p. 1301). Vaccine diplomacy involves research, development, production and exchange of vaccine products in health diplomacy. It refers to both the means and ends in international cooperation on vaccines. It is neutral by nature, as it is regarded as an end to promote the availability, accessibility and affordability of vaccines via diplomacy. 
Meanwhile, it could be used as a means to achieve diplomatic and political objectives. For example, the Central Intelligence Agency (CIA) of the U.S. once used vaccination programs for intelligence purposes [5] (p. 413). As another example, the U.S. government used vaccines as a means to win goodwill from the Indians in the western region and from other countries in the 19th century [6].

Doubts over China's COVID-19 Vaccine Diplomacy

China has launched unprecedented vaccine diplomacy since vaccines against COVID-19 became available in April 2020. However, China's vaccine diplomacy has incurred widespread criticism. Eckart Woertz and Roie Yellinek claim that vaccine diplomacy has entered China's political dictionary, and that China uses this means to coerce recipient countries into conducting business [7]. Some argue that China uses vaccines to increase its influence and presence in certain areas to confront and compete with the U.S. [8]. Additionally, China has been accused of manipulating vaccine narratives in its competition with the United States [9]. The Financial Times warned the Western countries of the increasing influence of China through its vaccines [10]. China was also criticized on the grounds that its vaccine diplomacy undermined regional prosperity [11]. The American Congress even introduced the Curbing China's Vaccine Diplomacy Act in response to China's vaccine diplomacy [12]. The doubts cast over China's vaccine diplomacy make it worthwhile to investigate China's motivations for prioritizing vaccines in its diplomatic agenda.

China's Motivations for Launching Vaccine Diplomacy

China has actively engaged in vaccine diplomacy since the outbreak of the COVID-19 pandemic. China's vaccine diplomacy derives from its holistic approach to national security and its commitment to building the Health Silk Road.

China's Pursuit of a Holistic Approach to National Security

Vaccine diplomacy is a means for China to achieve its national security. China adopts a holistic approach to national security, in which biosecurity has been integrated with its overall national security strategy [13]. In June 2020, Xi Jinping stressed the importance of building a strong public health system to provide adequate support for people's health in such fields as the reform of the disease control and prevention system, the improvement of the legal system related to health, and the promotion of international cooperation [14]. The Biosecurity Law of the People's Republic of China was released by China's National People's Congress in October 2020. This law "is formulated so as to maintain national security, prevent and respond to biosecurity risks, safeguard people's lives and health, protect biological resources and the ecological environment, promote the healthy development of biotechnology, promote the construction of a community with a shared future for mankind" [15] (pp. 2-3). It specifies that biosecurity is an important part of national security, and that China will promote international cooperation in biosecurity [15] (pp. 3-4). The law also states that China will mitigate the biosecurity threats posed by emerging infectious diseases [15] (p. 21). Vaccine diplomacy is an indispensable part of international cooperation in biosecurity. China's unprecedented formulation of the Biosecurity Law speaks volumes about China's efforts to address biosecurity threats as part of its overall national security.
In addition, China's speech acts on the COVID-19 pandemic and the extraordinary measures China adopted to address the pandemic indicate that health security has been incorporated into China's holistic national security framework. No country is safe from COVID-19 until every country is safe. The biosecurity interdependence between China and other countries motivated China to implement vaccine diplomacy for its national security. China's Commitment to Building a Health Silk Road (HSR) for Cooperation The concept of the HSR was proposed by China's National Health Commission in 2015. At the HSR Construction Symposium in 2016 in Uzbekistan, Xi Jinping stressed the importance of working jointly with other countries to build the HSR [16]. China committed itself to building the Health Silk Road by promoting cooperation in the public health system among countries along the Belt and Road routes [17] (p. 3). China even signed a Memorandum of Understanding on Health Cooperation within the framework of the Belt and Road Initiative with the World Health Organization (WHO) in 2017 [18]. The HSR is set to further advance China's role in global health governance. The COVID-19 pandemic has presented China with a valuable opportunity to build the HSR by practicing vaccine diplomacy in its engagement in global health governance. Having cast vaccines as international public goods, China has taken steps to provide vaccines or localize the production of vaccines in the participant countries of the HSR initiative as a means of bridging the immunization gap. For example, China launched the Initiative for Belt and Road Partnership on COVID-19 Vaccines Cooperation with 28 countries, including Kazakhstan, Thailand and Colombia, and called for international cooperation on vaccines [19]. China has directly provided vaccines to four geographical regions. Of these four regions, the Asia Pacific, the preferred region for HSR construction, has received the largest number of Chinese vaccines. Not surprisingly, 9 of the 10 biggest recipients of China's donated vaccine doses are participant countries of the HSR initiative. China's Whole-of-Government Approach to Vaccine Diplomacy China adopts a whole-of-government approach to vaccine diplomacy. Effective vaccine diplomacy entails close coordination among the various ministries of the Chinese government. Three ministries play a key role in China's vaccine diplomacy: the China International Development Cooperation Agency (CIDCA), the Ministry of Commerce (MOC) and the Ministry of Foreign Affairs (MFA). In general, CIDCA is responsible for formulating foreign aid guidelines and policies, while MOC and MFA are in charge of implementing vaccine diplomacy. The former is responsible for the specific implementation of foreign aid, negotiation with aid recipients and the handling of specific affairs related to foreign aid projects, while MFA's agencies abroad, such as embassies and consulates, take responsibility for coordinating and managing foreign aid in the host country [20]. China's vaccine diplomacy also involves many other ministries and agencies, owing to the urgency and complexity of vaccine production and distribution, such as the Ministry of Industry and Information Technology (MIIT), the National Health Commission (NHC), the Ministry of Transport (MOT), the Ministry of Finance (MOF), the General Administration of Customs (GAC) and the National Medical Products Administration (NMPA) [21]. The coordination among so many ministries exemplifies China's whole-of-government approach to vaccine diplomacy.
China's Vaccine Diplomacy on Both Bilateral and Multilateral Levels China has launched its vaccine diplomacy on both bilateral and multilateral levels. China's Sinopharm and Sinovac are the main providers of vaccines abroad. The immunization inequality is widening between the developed and developing countries. As of August 2021, around 60% of the population of higher-income countries had received at least one dose of a coronavirus vaccine. By contrast, only 1% of the population in lower-income economies had received at least one dose of a vaccine at the same time point [22] (p. 1). China committed itself to narrowing the immunization gap. At the Global Health Summit, China's President Xi Jinping reiterated the urgency of finding solutions to issues concerning the production capacity and distribution of vaccines in order to make vaccines more accessible and affordable in developing countries. He called on the international community to uphold fairness and equity to close the immunization gap [23]. China pledged to offer 1 billion additional doses of vaccines to Africa, among which 600 million doses would be provided as donations and 400 million doses through joint production by Chinese companies and relevant African countries [24]. With regard to ASEAN countries, China promised to donate 150 million doses and to provide USD 5 million for the COVID-19 ASEAN Response Fund [25]. Apart from bilateral cooperation in donations and sales of vaccines, China's vaccine diplomacy extends to bilateral joint research and development of vaccines. For example, China and Cuba jointly developed a vaccine called Pan-Corona; the research was conducted in China and led by specialists from Cuba [26]. China has cooperated with Serbia to produce a China-developed COVID-19 vaccine [27]. At the beginning of 2022, China's Sinovac and the Egyptian government agreed to accelerate the transfer of COVID-19 vaccine production technology and the building of a warehouse with a capacity for 150 million vaccine doses [28]. China also collaborated with Pakistan to produce the Chinese CanSinoBio COVID-19 vaccine [29]. China's efforts on the bilateral level to produce vaccines contributed to the accessibility and affordability of vaccines in the developing countries. China adheres to multilateralism in its foreign policy, and its vaccine diplomacy has also been implemented on the multilateral level. At the International Forum on COVID-19 Vaccine Cooperation hosted by China on 6 August 2021, Xi Jinping announced that "we (China) stand ready to work with international organizations to advance vaccine cooperation to protect the international community for a shared future" [30]. China's multilateralism in vaccine diplomacy finds expression in its contribution to COVAX, a multilateral international COVID-19 vaccine initiative led by the World Health Organization. COVAX is the main mechanism for China's vaccine diplomacy at the global level. China has provided over 180 million doses of vaccines to 49 countries through COVAX [31]. China's multilateralism in vaccine diplomacy is also reflected in its collaboration with Gavi, a multilateral mechanism dedicated to improving vaccine accessibility in lower-income countries. At the same International Forum on COVID-19 Vaccine Cooperation, China pledged USD 100 million to the Gavi COVAX Advance Market Commitment to finance equitable access in 92 lower-income countries [32].
The announcement is China's largest voluntary pledge to an international organization to date. On 12 July 2021, China's Sinopharm and Sinovac signed agreements with Gavi to provide 550 million doses of vaccines [33]. China's contribution to the aforementioned multilateral mechanisms tremendously alleviated the global vaccine inequity. China's Vaccine Diplomacy through Marketization in Developing Countries China has tried to expand the market for its vaccines through diplomacy in developing countries. China based its vaccine diplomacy on its comparative advantages in vaccine R&D, manufacturing and delivery, and it has achieved relatively significant success [34]. Chinese COVID-19 vaccines have claimed the largest market share in developing countries. A close examination of China's commitment to COVAX and Gavi helps to shed light on the point that China's vaccine diplomacy is based on marketization rather than donation. On 12 July 2021, China signed agreements with Gavi, the Vaccine Alliance, to sell 550 million doses of vaccines to Gavi [33]. The overwhelming majority of Chinese vaccines have been provided via sales instead of donations. China has provided most of its vaccines abroad as commercial supplies. China claimed that it would donate an additional 10 million doses of vaccines to COVAX [35]. Apparently, that number pales in comparison to China's sales to Gavi in the agreements. Therefore, it is not surprising that China is not among the top donors of vaccines (Figure 1). Overall, China's donations of doses have been a small portion of China's portfolio (see Table 1). China's vaccine diplomacy through marketization of vaccines can also be observed from the great commercial interests its vaccine companies gained from overseas markets. For example, the sales of Sinovac in 2021 increased to USD 19.4 billion from USD 510.6 million in the prior year. Half of its revenues in 2021 were generated from overseas markets [36].
In addition, China launched vaccine diplomacy to promote extensive partnerships in sales and manufacturing in developing countries to scale up the manufacture of Chinese vaccines overseas. As of 20 April 2022, China had partnered with over 20 developing countries, with annual production capacity amounting to one billion doses, which has significantly enhanced the marketization of China's vaccines in developing countries [37]. The Implications of China's Vaccine Diplomacy for Global Health Governance China's vaccine diplomacy has significant implications for global health governance. Given China's strong capacity in vaccine production and distribution, its vaccine diplomacy has a major impact on the landscape of global health governance. China's Vaccine Diplomacy Narrowed the Global Immunization Gap The immunization gap between the developed countries and middle- and low-income countries poses a formidable challenge to global health governance. China regards vaccine diplomacy as an important instrument for narrowing the global immunization gap. At the virtual session of the 2022 World Economic Forum (WEF), Xi Jinping stressed the importance of fully leveraging vaccines as a powerful weapon to close the global immunization gap [38]. China's vaccine diplomacy has significantly contributed to global COVID-19 vaccine equity. According to Wang Yi, China's Foreign Minister, one in every two doses of vaccine administered globally was "made in China", and China has carried out joint production with 20 countries, with an annual production capacity of one billion doses [39]. China has also repeatedly called on all countries to uphold the primary attribute of vaccines as global public goods, to ensure an equitable distribution of vaccines and to speed up vaccination to close the gap in immunization [40]. China's efforts to promote the accessibility and availability of vaccines helped to narrow the immunization divide. China contributed to global health governance via vaccine diplomacy, as it promoted the accessibility, availability and affordability of vaccines worldwide. China's Vaccine Diplomacy Promoted the Right-to-Health Approach to Global Health Governance The right to health was first articulated as a human right in the Constitution of the WHO. China has attached great importance to the right to health in its domestic vaccine administration. The rights to subsistence and development have been regarded as primary and fundamental human rights in China [41]. During the COVID-19 pandemic crisis, the right to subsistence manifested itself in the right to health. China adopts a human-rights-based approach in its vaccine diplomacy. China's right-to-health approach in vaccine diplomacy is an extension of its domestic vaccine policy. China regards the right to health as a basic human right. In 2017, China released a white paper titled "Development of China's Public Health as an Essential Element of Human Rights". The paper emphasized that "Health is a precondition for the survival of humanity and the development of human society"; "The right to health is a basic human right rich in connotations"; "It is the guarantee for a life with dignity. Everyone is entitled to the highest standard of health, equally available and accessible" [42]. The right-to-health approach has been highlighted in China's high-level discourses on vaccine policy. The equitable distribution of vaccines is the key to living up to the right to health.
Therefore, China has promised to make Chinese-made vaccines a global public good and to ensure that vaccines are affordable for developing and least-developed countries [43]. "COVID-19 vaccine development and deployment in China, when available, will be made a global public good. This will be China's contribution to vaccine accessibility and affordability in developing countries," China's President Xi Jinping declared at the 73rd World Health Assembly [44]. At the opening ceremony of the Boao Forum for Asia Annual Conference 2021, Xi Jinping reiterated that China would honor its commitment to make vaccines a global public good [45]. China's right-to-health approach in vaccine diplomacy has not only projected China as a responsible stakeholder in global health governance but also renewed global attention to extraterritorial human rights obligations in global health. Challenges for China's Vaccine Diplomacy Ahead Undoubtedly, China's vaccine diplomacy has promoted China's international image and helped to narrow the vaccination gap between developed and developing countries. However, it faces geopolitical and technological challenges ahead. Geopolitical Rivalry between China and the U.S. In an age of escalating great power rivalry, the geopolitical competition between China and the United States has played out in their vaccine diplomacy. China's vaccine diplomacy has targeted the developing countries, including those in Africa and Latin America. This was regarded as a threat to the dominance of the United States in those areas. The United States accused China of using coercion in making vaccines available to governments in need [46]. The United States tried to blunt China's vaccine diplomacy in Latin America by boosting vaccine donations in the region. China's vaccine diplomacy faced a real test when Biden promised that "America will become the arsenal of vaccines as we were the arsenal of democracy during World War Two" [47]. The European Commission also expressed its concerns over China's vaccine diplomacy. For example, European Commission President Ursula von der Leyen expressed skepticism over why China has exported its vaccines around the globe while neglecting its own population [48]. China launched vaccine diplomacy to enhance its bilateral relations and boost its influence in eastern Europe, Latin America and the African countries. China's vaccine diplomacy fits within its agenda of branding itself as a global health leader. In response, the U.S. initiated vaccine diplomacy to shape the international environment to its benefit and tried to claim global leadership in the fight against the COVID-19 pandemic. The Biden administration views the HSR as a clear geopolitical challenge to the United States. To counter China's ambition to expand both its market share and its international influence via vaccine diplomacy, the U.S. partnered with Australia, India and Japan through the Quadrilateral Security Dialogue in March 2021 to finance, manufacture and distribute at least one billion doses of COVID-19 vaccines by the end of 2022 [49]. The Biden administration hosted the Global COVID-19 Summit in 2021 and 2022. China bluntly refused to attend the summit out of concern that its attendance might consolidate U.S. global leadership in vaccine diplomacy [50]. Given the geopolitical rivalry between China and the United States, China's vaccine diplomacy will continue to be challenged by the United States' own far-reaching vaccine diplomacy.
Concerns over the Efficacy Rate of China's Vaccines China's success in vaccine diplomacy hinges on the efficacy of its vaccines. Biotechnologically speaking, China has made great strides in vaccine research and development in response to COVID-19, which finds full expression in the fact that the WHO has listed the Sinopharm and Sinovac-CoronaVac COVID-19 vaccines for emergency use. However, the efficacy of China's vaccines has been called into question. On 10 April 2021, the director of the Chinese Center for Disease Control and Prevention allegedly said that Chinese vaccines "don't have very high protection rates" [51]. According to Gavi, the Vaccine Alliance, the efficacy of China's Sinopharm and Sinovac vaccines in protecting against all symptomatic disease after the second dose is 65-86% and 36-62%, respectively, which is significantly lower than that of Moderna (90-97%) and Pfizer-BioNTech (90-97%) [52]. Some countries have also cast doubts over China's vaccines. For example, Bahraini officials announced that they would offer Pfizer-BioNTech doses to certain high-risk individuals who had already received two Sinopharm jabs [53]. In September 2021, Brazil suspended its use of 12 million shots of China's Sinovac COVID-19 vaccine [54]. In October 2021, Thailand ceased using the Sinovac COVID-19 vaccine after its supplies were exhausted [55]. The widespread suspicions over the efficacy of China's vaccines tend to hinder China's vaccine diplomacy. The Omicron outbreak in China has indirectly demonstrated the low efficacy of China's vaccines. Since the outbreak of Omicron in China, many cities, including Shanghai, have been locked down. The nationwide lockdowns have caused enormous damage to China's economy. However, China seems to have given up on mass vaccination as a viable tool against the pandemic. Mass vaccination has seldom been mentioned by the Chinese government in the fight against the pandemic, and it seems to have been excluded from the toolkit of local governments since the outbreak of Omicron in Shanghai. China's zero-COVID policy of large-scale lockdowns instead of mass vaccination indicates that China is not confident in the efficacy of its domestically produced COVID-19 vaccines. Due to widespread concerns over the relatively low efficacy rate of China's vaccines, China's exports of vaccines have plunged dramatically. For example, China's top three vaccine producers, Sinopharm, Sinovac Biotech and CanSino Biologics, exported a total of 6.78 million doses in April 2022, a 97% decrease from the peak in September 2021, according to UNICEF [56]. Because of these concerns, a growing number of countries, including those in southeast Asia, are shifting away from Chinese vaccines [57]. It is hard for China to remain the main supplier of vaccines. The concerns over the efficacy of China's vaccines make the future of China's vaccine diplomacy gloomy. To improve the efficacy of its COVID-19 vaccines and mitigate these concerns, China has actively invested in vaccine R&D against Omicron. As mRNA vaccines have been more widely used around the world, China has made great efforts to develop its own mRNA vaccines. Sinopharm and Sinovac vaccines targeting the Omicron variant have been approved to enter clinical trials [58]. According to Lei Zhenglong, deputy director of the Bureau of Disease Prevention and Control of China's National Health Commission, China has arranged several R&D tasks on mRNA vaccines.
Some, with faster timelines, are conducting phase III clinical trials abroad, and some are in the process of review and approval [59]. China's efforts to develop mRNA vaccines indicate that China expects to mitigate the concerns over the efficacy of its vaccines. Conclusions The COVID-19 pandemic has presented China with a window of opportunity to uplift its international image and influence through vaccine diplomacy. China uses vaccine diplomacy to contribute to its nation branding and to present itself in a successful and responsible role against the pandemic [1,60]. China has also been motivated to launch vaccine diplomacy by its comprehensive approach to national security and its commitment to building a Health Silk Road for cooperation. China's centralized political system makes it easy to adopt a whole-of-government approach to vaccine diplomacy at both the bilateral and multilateral levels. China's vaccine diplomacy has had a significant impact on the global geopolitical landscape, as China has attempted to turn its health crisis into a geopolitical opportunity. As noted by Yanzhong Huang, "where Beijing's inoculations go, its influence will follow" [61]. Admittedly, China's vaccine diplomacy has, to some extent, helped to close the global immunization gap and promote the right-to-health approach to global health governance. However, the sustainability of China's vaccine diplomacy is questionable. The escalation of the power rivalry between China and the U.S. and the international concerns over the efficacy of China's vaccines due to biotechnological barriers forebode a gloomy future for China's vaccine diplomacy.
2022-07-13T16:40:45.034Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "156a0f09e18e3efc5e3f94c70474d64668a49911", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9032/10/7/1276/pdf?version=1657613556", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7001ceddefbc6ae3acba248f3c6302b0bfb78408", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
22984601
pes2o/s2orc
v3-fos-license
Symptoms Analysis of Obsessive–Compulsive Disorder in Adolescents and Adults in a Teaching Hospital Introduction: Obsessive-compulsive disorder has a broadly diverse clinical expression that reflects heterogeneity. Several studies have identified consistent symptom dimensions of obsessive-compulsive disorder. The purpose of this study was to conduct an exploratory symptoms analysis of obsessive-compulsive symptoms in adolescents and adults with obsessive-compulsive disorder. Methods: This was a cross-sectional study conducted in the Department of Psychiatry, National Medical College. This study examined the lifetime occurrence of obsessive-compulsive symptoms included in the 13 symptom categories of the Yale–Brown Obsessive Compulsive Scale. Symptoms analysis was performed on 60 patients with obsessive-compulsive disorder. Eight categories of obsessions and six categories of compulsions from the Yale–Brown Obsessive Compulsive Scale were included in the analyses. The SPSS software package (version 16) was used to analyze the data, which are shown in the tables. Results: Of 60 adolescents and adults, females and males were in the ratio of 1.2:1. Contamination was the most commonly occurring obsession, followed by aggressive obsession. The most commonly occurring compulsion was checking, followed by washing. Only a minority of patients (13.33%) presented predominantly with obsessions, whereas 18.33% of patients presented predominantly with compulsions. Certain obsessions and compulsions co-occur to form clusters. Conclusions: In adolescents and adults, obsessive-compulsive disorder is a multidimensional disorder. Symptom dimensions are predominantly congruent with those described in similar studies of adults with obsessive-compulsive disorder. INTRODUCTION Obsessive-compulsive disorder (OCD) is characterized by the presence of distressing intrusive thoughts, impulses, fears or images (obsessions), and/or repetitive behaviors or mental rituals (compulsions). 1 Obsessions have four essential features: they are recurrent and persistent thoughts, impulses or images that are experienced as intrusive and cause great anxiety; they are not simply excessive worries about real-life issues; the affected individual attempts to ignore, suppress, or neutralize them with some other thought or action; and the affected individual recognizes that these thoughts are a product of his or her mind. 2 When an obsession causes anxiety, one is compelled to perform a repetitive action to relieve the anxiety in part, which is called a compulsion, with the following essential features: compulsions are a form of behaviour that usually follows an obsession; they aim at either preventing or neutralizing the distress or fear arising out of the obsession; the behaviour is not realistic and is either irrational or excessive; and the behaviour is performed with a sense of subjective urge or impulse that may diminish the anxiety associated with the obsession. 3 This study was designed to explore and analyze the symptoms of OCD. It also tried to reflect the relationship between certain obsessions and compulsions. Apart from this, relations of symptoms within and between obsessions and compulsions were explored.
METHODS This descriptive cross-sectional study was conducted in the Department of Psychiatry, National Medical College, after approval from the Institutional Review Committee (IRC). The subjects in this study consisted of 60 adolescents and adults with OCD, evaluated independently and consecutively at the outpatient department of National Medical College, Birgunj. Inclusion criteria were a diagnosis of OCD according to the DSM-IV-TR criteria, 4 and age of 18 years or older. Subjects were excluded if they had ever received a diagnosis of another mental disorder or an organic condition. Following a detailed description of the study to all subjects, written informed consent was obtained from each subject. All subjects were 18 years of age and above, so they were assessed with the Yale-Brown Obsessive Compulsive Scale (Y-BOCS), 5,6 a commonly used measure of OCD symptom severity. The instruments are clinician-rated, semi-structured interviews used in rating the presence and severity of obsessions and compulsions. They are divided into two major sections, a symptom checklist and a symptom severity scale. The symptom checklists are also very similar and assess both current and past symptoms. The Y-BOCS checklists have more than 60 items, which are organized into two miscellaneous categories and 13 other categories according to their thematic content (including contamination, aggression, sexual, hoarding, somatic, symmetry, and religious obsessions; and washing, checking, repeating, counting, ordering and hoarding compulsions). In this study we focused only on the symptom checklists. Gender distribution was compared between males and females using the chi-square test. Eight categories of obsessions and six categories of compulsions from the Y-BOCS were included in the analyses. Somatic symptoms were excluded from the categories because the somatic items may be related to hypochondriasis, a frequently comorbid illness with OCD. 7 Moreover, it was also difficult to distinguish between those two sets of symptoms. The SPSS software package (Version 16, SPSS Inc., Chicago, USA) was used to analyze the data. Descriptive statistics were used to obtain the desired results. RESULTS A total of 60 patients were evaluated using the Yale-Brown Obsessive Compulsive Scale checklist (Y-BOCS). Of the 60 patients who constituted the study sample, 33 (55.0%) were female and 27 (45.0%) were male, a ratio of 1.2:1. There was no significant difference in the gender distribution of subjects (chi-square = 0.6; p > 0.05). The age range was between 18 and 56 years. The mean (±SD) and median age at the time of the study were 27.9 ± 8.9 and 27.5 years, respectively. The study findings suggest that the most commonly occurring obsessive symptoms were contamination followed by aggressive obsessions (Table 1), in turn followed by religious, pathological doubt, symmetry, sexual and hoarding obsessions. The other obsessions or miscellaneous category, which includes superstitious fears and intrusive meaningless thoughts, images, sounds, words or music, accounted for 10% of the patients. The most commonly occurring compulsions were checking followed by hand washing. DISCUSSIONS Although the standard classification systems, i.e., the diagnostic manuals DSM-IV, 4 and ICD-10, 8 regard OCD as a unitary nosologic entity, data from clinical, genetic, neuroimaging, and treatment response studies demonstrate that this severe and potentially disabling condition is a heterogeneous disorder. 9,10
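As a side note, the reported gender statistic is easy to reproduce; a minimal sketch (Python with SciPy, assuming the observed 33/27 split is tested against an equal 30/30 expectation, which is how the quoted value of 0.6 arises) is:

```python
# Minimal sketch: chi-square goodness-of-fit test of the observed gender
# split (33 females, 27 males) against an equal 30/30 expected split.
from scipy.stats import chisquare

stat, p = chisquare([33, 27])   # expected counts default to uniform
print(f"chi-square = {stat:.1f}, p = {p:.2f}")
# -> chi-square = 0.6, p = 0.44, i.e. no significant difference (p > 0.05)
```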
Numerous studies have used different strategies to identify more homogeneous subgroups of this illness, using both categorical and dimensional approaches to address this heterogeneity. The categorical strategies include subdividing the illness according to the age at symptom onset, 11,12 the presence of comorbid disorders, 13,14 the morbid risk among family members, 15 and differences in treatment outcome. 16 On the other hand, dimensional strategies have been proposed to address symptom heterogeneity. Several adult factor-analytic studies have identified dimensions from symptom categories of the Yale-Brown Obsessive Compulsive Scale (Y-BOCS) that reflect the heterogeneous nature of OCD. 10 Our study explored the heterogeneous nature of OCD, as reflected in other research as well. This study revealed contamination obsession as the most commonly occurring obsession in adolescents and adults. [18][19] Such obsessions in our study mostly consisted of concern with dirt or germs, insects and animals, and disgust with bodily secretions. Other contamination obsessions, such as excessive concern with household cleaners or solvents, concern about getting ill (e.g., AIDS) and somatic obsessions, were seen rarely. This variation in the content of contamination obsessions might be due to socio-cultural variation, since cleaners or solvents are not readily available in every household, and most people are illiterate and might not know about disease consequences. The second most commonly occurring obsession was aggressive obsession, which occurred in almost half of the patients. The most common contents were violent or horrific images, fear of harming others and fear of being responsible for something terrible happening (e.g., fire). Foa and Kozak, 18 in their study, found a similar pattern, but the rate of occurrence was 24%, almost half of our finding. In contrast, the study by Rasmussen and Eisen 17 revealed it as the fifth most commonly occurring symptom. A study in a similar context in Pakistan found contamination, pathological doubt and need for symmetry as the major commonly occurring obsessions, rather than aggressive obsession. 20 Approximately one fourth of respondents had religious obsessions. This finding was in contrast with other studies done in Western cultures. 9,17,18 Past studies have revealed that cultural background can affect the content of obsessions or compulsions. 21 Moreover, our study sample included adolescents, who are not as religious as adults and the elderly. Religious obsession might be more common in those ethnic or age groups in whom religion has a less prominent role, as reflected in the study of Vishne, Misgav and Bunzel. 22 The fourth most commonly occurring obsession was pathological doubt. Patients with doubts typically worry that something terrible may happen because they have not completed an act thoroughly or completely. There is evidence of similar findings in some studies, 9,17,20 while others have contradictory findings in terms of occurrence.
3,18 The reason for this mixed finding may be cultural variation, geographical variation or some unknown factor; more research should be done to explain these variations. The fifth most commonly occurring obsession in the study was symmetry, which accounted for 18.33% of obsessions. Need for symmetry is a drive to order or arrange things perfectly or to perform certain behaviours symmetrically or in a balanced way. Some of these patients also had slowness, taking hours to perform acts such as grooming, dressing, and brushing teeth. These findings were consistent with studies in similar settings in India and Pakistan. 23,24 Sexual obsession was also found in approximately one seventh of respondents. Sexual obsessions were selectively more prevalent in adolescents compared with adults and children. 25 It is possible that sexual obsessions were underrepresented in this sample because subjects kept them secret out of embarrassment and the possible guilt associated with revealing them. Another cause may be the cultural barrier against exposing this type of secret. Other obsessions, which include superstitious fears and intrusive meaningless thoughts, images, sounds, words or music, accounted for 10% of obsessions. As most researchers excluded this category from factor-analysis studies due to its miscellaneous contents, 7,9,10,26 inclusion of this category in research could reveal an important factor. Since the occurrence of these other obsessions in our study was more common than hoarding, their inclusion in all OCD studies among adolescents and adults is recommended to explore further heterogeneity. The last component is hoarding, which accounted for 8.33% of obsessions, in which collection of useless items such as newspapers was the commonest content. Research results vary in terms of the frequency and percentage of occurrence of hoarding, especially in the Western context. 17,18,24 The most commonly occurring compulsions were checking followed by cleaning, and counting and repeating. These study findings were consistent with other research findings. 17,18,19 Excessive cleaning was mostly associated with contamination obsession. The presence of checking compulsions in this study was somewhat surprising, since checking co-occurs with different obsessive and compulsive categories. However, it has recently been suggested that the checking category could be a heterogeneous entity, i.e., some symptoms could be associated with one dimension and others with other dimensions. 26 Further, checking was found to be associated with contamination and cleaning symptoms, 27 with aggressive, sexual, and religious obsessions and miscellaneous symptoms, or with symmetry, ordering and repeating symptoms. 28 Thus, the integration of the checking category could depend on clinical characteristics that are not directly related to the theme of the associated obsessions and/or compulsions. Other commonly occurring contents of cleaning compulsions, such as excessive hand washing, bathing, tooth brushing and grooming, were consistent with previous research. Similarly, counting and repeating compulsions, ordering compulsions and hoarding compulsions were consistent with previous research; however, the order of occurrence in terms of percentages was different. 3,4,17,18,24 Different recent studies have indicated that certain obsessions and compulsions tend to co-occur to form four to five main dimensions across ages from childhood through adulthood.
28,29,30 Our study finding was also consistent with those recent research findings, and several correlations were identified within and between obsessions and compulsions. Although we did not perform a detailed factor analysis, the study revealed certain obsessions and compulsions that co-occur to form the following clusters: 1) contamination obsessions and cleaning compulsions; 2) aggressive and pathological-doubt obsessions and checking compulsions; 3) sexual obsessions and religious obsessions; 4) symmetry obsessions and ordering, counting and repeating compulsions; 5) hoarding obsessions and hoarding compulsions. This study has several methodological limitations. The sample size was small compared with other OCD studies. The study cohort included only outpatients, so it is difficult to generalize the results, as selection bias might have been a limiting factor. Different outcomes may be expected in a community setting. CONCLUSIONS Obsessive-compulsive disorder is a heterogeneous condition. The most commonly occurring obsessions are contamination followed by aggressive obsessions. The most commonly occurring compulsions are checking followed by washing. There were correlations within and between compulsions and obsessions, and they tend to occur together.
2018-04-03T01:52:13.340Z
2014-06-30T00:00:00.000
{ "year": 2014, "sha1": "a67b326ea92d0ae1c09ce113761301e04dced7df", "oa_license": "CCBY", "oa_url": "https://doi.org/10.31729/jnma.2730", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a67b326ea92d0ae1c09ce113761301e04dced7df", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
17706550
pes2o/s2orc
v3-fos-license
Multiplicity fluctuations in the string clustering approach We present our results on multiplicity fluctuations in the framework of the string clustering approach. We compare our results, with and without cluster formation, with CERN SPS NA49 data. We find a non-monotonic behaviour of these fluctuations as a function of the collision centrality, which has the same origin as the observed fluctuations of transverse momentum: the correlations between the produced particles due to cluster formation. Non-statistical event-by-event fluctuations have been proposed as a possible signature of the QCD phase transition. In a thermodynamical picture of the strongly interacting system formed in heavy-ion collisions, the fluctuations of the mean transverse momentum or mean multiplicity are related to fundamental properties of the system, such as the specific heat, and so may reveal information about the QCD phase boundary. In particular, we have proposed [26] an explanation for those fluctuations based on the creation of string clusters. In our approach, we find an increase of the mean p_T fluctuations at mid centralities, followed by a decrease at large centrality. Moreover, we obtain a similar behaviour at SPS and RHIC energies. In the framework of string clustering such a behaviour is naturally explained. As the centrality increases, the strings overlap in the transverse plane, forming clusters. These clusters decay into particles with mean transverse momentum and multiplicities that depend on the number of strings that form each cluster. The event-by-event fluctuations of the mean p_T and the mean multiplicity then correspond to fluctuations of the transverse momentum and multiplicity of those clusters, and behave essentially as the number of clusters formed by different numbers of strings. If the number of different clusters (different in this context means that the clusters are made of different numbers of strings) grows, that will lead to an increase of fluctuations. And in fact this number grows with centrality up to a maximum; for higher centralities, the number of different clusters decreases. On the other hand, in a jet production scenario [8], the mean p_T fluctuations are attributed to jet production in peripheral events, combined with jet suppression at larger centralities. A possible way to discriminate between the two approaches could be the study of fluctuations at SPS energies, where jet production cannot play a fundamental role. Recently, the NA49 Collaboration has presented its data on multiplicity fluctuations as a function of centrality [30,31] at SPS energies. In the experimental analysis, the variance of the multiplicity distribution, Var(N) = <N^2> - <N>^2, scaled to the mean value of the multiplicity, <N>, has been used. A non-monotonic centrality (system-size) dependence of the scaled variance was found. In fact, its behaviour is similar to the one obtained for the Φ(p_T) measure [32] used by the NA49 Collaboration to quantify the p_T fluctuations, suggesting that they are related to each other [33,34]. The Φ measure is independent of the distribution of the number of particle sources if the sources are identical and independent from each other. This implies that Φ is independent of the impact parameter if the nucleus-nucleus collision is a simple superposition of nucleon-nucleon interactions.
Our aim in this note is to calculate the event-by-event multiplicity fluctuations applying the same mechanism, the clustering of colour strings, that we used previously [26] for the study of the p_T fluctuations. Let us recall the main features of our model. In each nuclear collision, colour strings are stretched between partons from the projectile and the target; these decay into new strings by sea quark-antiquark production and finally hadronize to produce the observed particles. For the decay of the strings we apply the Schwinger mechanism of fragmentation [35], where the decay is controlled by the string tension, which depends on the colour charge and colour field of the string. As the density of strings increases, the strings begin to overlap, forming clusters [36]. We assume that a cluster of n strings occupying an area S_n behaves as a single colour source with a higher colour field, generated by a higher colour charge Q_n. This charge corresponds to the vectorial sum of the colour charges Q_1 of the individual strings, and the resulting colour field covers the area S_n of the cluster. As Q_n^2 = (sum_{i=1}^n Q_{1i})^2 and the individual string colours may be arbitrarily oriented, the average of Q_{1i}·Q_{1j} is zero, so Q_n^2 = n Q_1^2 if the strings fully overlap. Since the strings may overlap only partially, we introduce a dependence on the area of the cluster and obtain Q_n = sqrt(n S_n/S_1) Q_1 [37]. Applying the Schwinger mechanism to the fragmentation of the cluster, one obtains a relation between the mean multiplicity <µ>_n and the average transverse momentum <p_T>_n of the particles produced by a cluster of n strings that covers an area S_n:

<µ>_n = sqrt(n S_n/S_1) <µ>_1 ,   <p_T>_n = (n S_1/S_n)^{1/4} <p_T>_1 ,   (1)

where <µ>_1 and <p_T>_1 correspond to the mean multiplicity and the mean transverse momentum of the particles produced by one individual string. In order to obtain the mean p_T and the mean multiplicity of the collision at a given centrality, one needs to sum over all formed clusters and to average over all events:

<µ> = < Σ_j <µ>_{n_j} > ,   (2)

<p_T> = < Σ_j <µ>_{n_j} <p_T>_{n_j} > / < Σ_j <µ>_{n_j} > .   (3)

The sum over j runs over all individual clusters j, each one formed by n_j strings and occupying an area S_{n_j}. The quantities n_j and S_{n_j} are obtained for each event, using a Monte Carlo code [38,39] based on the quark-gluon string model. Each string is generated at an identified impact parameter in the transverse plane. Knowing the transverse area of each string, we identify all the clusters formed in each event, the number of strings n_j that form each cluster j, and the area S_{n_j} occupied by each cluster. Note that for two different clusters j and k formed by the same number of strings, n_j = n_k, the areas S_{n_j} and S_{n_k} can differ. Because of this, we perform the sum over all individual clusters. So we use a Monte Carlo code for the cluster formation, in order to compute the number of strings that enter each cluster and the area of the cluster. On the other hand, we do not use a Monte Carlo code for the decay of the cluster, since we apply the analytical expressions (eqs. (1)) for the transverse momentum <p_T>_{n_j} and the multiplicity <µ>_{n_j} of each individual cluster. In order to obtain the scaled variance we calculate <µ^2>:

<µ^2> = < Σ_j ( <µ>_{n_j} + <µ>_{n_j}^2 ) + Σ_{j≠k} <µ>_{n_j} <µ>_{n_k} > ,   (4)

where we have supposed that the multiplicity of each cluster follows a Poissonian of mean value <µ>_{n_j}, and we have applied the property of a Poissonian:

<µ^2>_{n_j} = <µ>_{n_j} + <µ>_{n_j}^2 .   (5)

Finally, our formula for the scaled variance obeys:

Var(µ)/<µ> = 1 + [ < (Σ_j <µ>_{n_j})^2 > − < Σ_j <µ>_{n_j} >^2 ] / < Σ_j <µ>_{n_j} > ,   (6)

where the mean value in the r.h.s. corresponds to an average over all events.
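For a rough numerical illustration of eqs. (1) and (6), the following Python sketch (our own toy, not the quark-gluon string Monte Carlo of Refs. [38,39]) generates events with a Poissonian number of strings, groups them into clusters with an ad hoc random rule, and assigns each cluster a Poissonian multiplicity of mean sqrt(n)<µ>_1, i.e. eq. (1) under the simplifying full-overlap assumption S_n = S_1; the clustering rule and all parameter values are illustrative choices, not those of the paper.

```python
# Toy Monte Carlo for the scaled variance Var(mu)/<mu> in a clustered-string
# picture. Assumptions (not from the paper): full overlap S_n = S_1, random
# cluster sizes between 1 and 5, mu1 = 0.33, mean of 50 strings per event.
import numpy as np

rng = np.random.default_rng(0)
mu1 = 0.33          # mean multiplicity of one string (per unit rapidity)
mean_strings = 50   # mean number of strings at the chosen centrality

def toy_event():
    n_s = rng.poisson(mean_strings)          # strings in this event
    sizes = []
    while n_s > 0:                           # crude stand-in clustering rule
        n = int(min(n_s, rng.integers(1, 6)))
        sizes.append(n)
        n_s -= n
    # each cluster of n strings emits a Poissonian multiplicity, eq. (1)
    return sum(rng.poisson(np.sqrt(n) * mu1) for n in sizes)

mult = np.array([toy_event() for _ in range(200_000)])
print("scaled variance Var(mu)/<mu> =", mult.var() / mult.mean())
```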
The behaviour of this quantity is as follows. In the limit of low density (isolated strings that do not interact), Σ_j <µ>_{n_j} = N_s <µ>_1, where N_s corresponds to the number of strings. Considering that, for a fixed number of participants, the number of strings behaves as a Poissonian distribution, we obtain <N_s^2> − <N_s>^2 = <N_s>, so

Var(µ)/<µ> = 1 + <µ>_1 .   (7)

In the large-density regime (all the strings fuse into a single cluster that occupies the whole interaction area) we have:

Var(µ)/<µ> = 1 + <µ>_1 sqrt(S_A/S_1) [ <N_s> − <sqrt(N_s)>^2 ] / <sqrt(N_s)> ,   (8)

where S_A is the nuclear overlap area. The second term on the r.h.s. of this equation tends to zero, and the scaled variance becomes equal to one. Our results for the scaled variance of negative particles, Var(n−)/<n−>, compared to experimental data [30,31], are presented in Fig. 1. Note that in order to obtain these results we need to fix the value of the parameter <µ>_1. It is defined as <µ>_1 = <µ>_0 ∆y, where <µ>_0 is the number of particles produced per unit rapidity by one individual string and ∆y corresponds to the rapidity interval considered. We do not introduce any dependence of <µ>_0 on the energy or the centrality of the collision. The value of <µ>_0 has been previously fixed from a comparison of the model to SPS and RHIC data [37,40] on multiplicities. In the first Ref. of [37], the total multiplicity per unit rapidity produced by one string was taken as <µ>_0^tot ≃ 1. If we assume that 1/3 of the created particles are negative, that leads to a negative-particle multiplicity per unit rapidity for each individual string of <µ>_0^neg = 0.33. The rapidity interval considered, in order to compare with the NA49 experimental data, is 4.0 < y < 5.5. The data are obtained in a restricted p_T range, 0.005 < p_T < 1.5 GeV/c, while our results take into account all possible transverse momenta. Nevertheless, the experimental acceptance covers the small-p_T region, which gives the largest contribution at SPS energies. Because of this, we obtain good agreement for the centrality dependence of <p_T> (see Table 1 of Ref. [26] for more details). In Figs. 2 and 3 we present separately our results for the variance V(n−) and the mean multiplicity <n−> of negatively charged particles. We have included our results without cluster formation. One can observe that, when clustering is included, we find perfect agreement with the experimental data for the mean multiplicity. Concerning the variance and the scaled variance, the agreement is less good, but still one can see that the clustering works in the right direction: it produces a decrease of the variance in the central region, where the density of strings increases and so the clustering has a bigger effect. Without clustering, instead, the scaled variance tends to a monotonic behaviour with centrality. Note that, if no clustering is taken into account, our result for the variance is qualitatively similar to the HIJING simulation. From eqs. (4) to (8) one can also deduce what the behaviour of the scaled variance will be if both positively and negatively charged particles are taken into account: there will be an increase of the scaled variance in the fragmentation region (low number of participants and low density of strings) according to eq. (7), due to the increase of <µ>_1, which now becomes proportional to 2/3 of <µ>_0. In the most central region our result for the scaled variance essentially does not change, since the dependence on <µ>_1 is much smaller in this region, according to eq. (8).
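As a sanity check, the low-density result of eq. (7) can be verified numerically in the same toy setting as the sketch above: switching clustering off makes each string an independent Poissonian source, and the scaled variance should then approach 1 + <µ>_1 (here <µ>_1 = 0.33, an assumed value as before).

```python
# Numerical check of the low-density limit, eq. (7): with no clustering, a
# Poissonian number of strings each emitting a Poissonian multiplicity of
# mean mu1 gives Var(mu)/<mu> -> 1 + mu1.
import numpy as np

rng = np.random.default_rng(1)
mu1, mean_strings, n_events = 0.33, 50, 200_000
totals = np.array([rng.poisson(mu1, size=rng.poisson(mean_strings)).sum()
                   for _ in range(n_events)])
print(totals.var() / totals.mean(), "expected:", 1 + mu1)
```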
In our approach, the scaled variance for positive particles is equal to that for negative particles, since both depend on <µ>_1 in the same way. This is in agreement with experimental data [5]. In Fig. 4 we present our prediction for the scaled variance at RHIC energies. The behaviour is similar to the one obtained at SPS energies. This is in accordance with our results for the mean p_T fluctuations. Note that now <µ>_1 is smaller than in the SPS case, since we take ∆y = 0.7, in accordance with the experimental acceptance of the PHENIX experiment. This in principle implies smaller correlations. On the other hand, at RHIC energies we have a higher value of the mean number of strings at fixed N_part. Both effects tend to compensate each other, especially in the small and mid centrality region, where <µ>_1 plays a fundamental role according to eq. (7). In the large centrality region we can observe that the effect of clustering leads to a scaled variance very close to one. In conclusion, we have found a non-monotonic dependence of the multiplicity fluctuations on the number of participants. The centrality behaviour of these fluctuations is very similar to the one previously found for the mean p_T fluctuations. In our approach, the physical mechanism responsible for multiplicity and mean p_T fluctuations is the same [26]: the formation of clusters of strings, which introduces correlations between the produced particles. On the other hand, the mean p_T fluctuations have also been attributed [8] to jet production in peripheral events, combined with jet suppression in more central events. However, this hard-scattering interpretation, based on jet production and jet suppression, cannot be applied at SPS energies, so it explains neither the non-monotonic behaviour of the mean p_T fluctuations nor the relation between mean p_T and multiplicity fluctuations at SPS energies. Other possible mechanisms, extensively discussed in [33,34], are a combination of strong and electromagnetic interactions, dipole-dipole interactions and non-extensive thermodynamics. Still, it is not clear whether these fluctuations have a kinematic or dynamic origin, but clustering of colour sources remains a good possibility, since: • It can reproduce the qualitative behaviour of the event-by-event fluctuations with centrality. • In this approach, mean p_T fluctuations and multiplicity fluctuations are naturally related. • It applies at both SPS and RHIC energies. Acknowledgments It is a pleasure to thank N. Armesto for interesting discussions and helpful suggestions.
2014-10-01T00:00:00.000Z
2005-05-23T00:00:00.000
{ "year": 2005, "sha1": "947d09179dbb2218d31d8f4bcd7ce3a575e45104", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0505197", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8296fb76a2006c7823190b2e043481bc6e6ec299", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235427174
pes2o/s2orc
v3-fos-license
Silkworm model for Bacillus anthracis infection and virulence determination ABSTRACT Bacillus anthracis is an obligate pathogen and a causative agent of anthrax. Its major virulence factors are plasmid-encoded; however, recent studies have revealed chromosome-encoded virulence factors, indicating that the current understanding of its virulence mechanism is incomplete and needs further investigation. In this study, we established a silkworm (Bombyx mori) infection model of B. anthracis. We showed that silkworms were killed by B. anthracis Sterne and cured of the infection when administered antibiotics. We quantitatively determined the lethal dose of the bacteria that kills 50% of larvae and the effective doses of antibiotics that cure 50% of infected larvae. Furthermore, we demonstrated that B. anthracis mutants with disruption in virulence genes such as pagA, lef, and atxA had attenuated silkworm-killing ability and reduced colonization in silkworm hemolymph. The silkworm infection model established in this study can be utilized in large-scale infection experiments to identify novel virulence determinants and develop novel therapeutic options against B. anthracis infections. Introduction Bacillus anthracis is a spore-forming Gram-positive bacterium that infects both animals and humans. Most animals ingest B. anthracis spores while grazing and develop an infection, and humans occasionally acquire infection from infected animals or animal products. The spore-forming ability allows the bacteria to exist for decades in a dormant state and resist harsh environments [1]. Despite immunization efforts, B. anthracis is still a potential threat due to sporadic anthrax outbreaks [2][3][4][5][6][7][8][9] and its use in bioterrorism [10]. Consequently, the World Health Organization and the Centers for Disease Control and Prevention have listed B. anthracis among the top bioterrorism agents. The pathogenicity of B. anthracis has been attributed mainly to the toxins and the capsule encoded in the plasmids pXO1 and pXO2, respectively [11,12]. Emerging evidence has uncovered the involvement of chromosomal genes in virulence [13][14][15][16][17][18], implying that other virulence factors of B. anthracis are yet to be identified. As pathogenesis is an outcome of host-pathogen interaction, a suitable animal host is desired to understand virulence mechanisms and design novel therapeutic approaches. A model that can be used in large numbers and carries fewer ethical concerns would be of particular importance during the initial phases of research. Due to the economic, technical, and ethical concerns associated with the use of vertebrate animals, scientists are turning toward invertebrate animal models such as Caenorhabditis elegans [19][20][21], Drosophila melanogaster [22][23][24], Galleria mellonella [25][26][27] and Bombyx mori (silkworm) [28][29][30] for large-scale screenings. Accurate dose administration in C. elegans and D. melanogaster is difficult due to their small size. Due to the faster locomotion and smaller size of G. mellonella larvae (~2 cm), injection experiments are relatively difficult, and there is an increased risk of needle injury to the personnel, necessitating the use of additional restraint and personal protective equipment [31,32]. Moreover, the ability of adult wax moths to fly increases the chance of their escape from the laboratory. Silkworm larvae have been used as a desirable animal model with many physical and biological advantages [33].
They are small enough for easy handling yet large enough (~5 cm) to perform experiments involving organ isolation and injection of desired quantities. Besides, due to their slow locomotion, harmless nature, and the inability of adult silk moths to fly, they have lower biohazard potential. Conserved basic biological features shared with mammals make silkworms appropriate as animal models of human diseases [34][35][36]. Using silkworms as an infection model, novel antimicrobial agents and novel genes with roles in bacterial virulence have been identified [28,29,37-40]. In this study, we established a silkworm model of B. anthracis infection. We demonstrated that B. anthracis kills silkworms and that the infection can be treated by clinically used antibiotics. Using a fluorescent protein-expressing strain, we revealed how B. anthracis establishes infection inside the host. We further showed that mutants with disruption in the genes encoding known virulence factors had decreased virulence in silkworms. Bacterial strains and culture conditions The bacterial strains used in this study are shown in Table 1. B. anthracis 34F2 with pRP1099, a plasmid possessing the gene for the AmCyan1 protein, was constructed by conjugation [41]. Strains were grown in Brain-Heart Infusion (BHI) medium (Difco, USA) for routine culture at 37°C. Kanamycin (20 µg/ml) was supplemented for BYF10124. For liquid cultures, strains were grown at 37°C with shaking at 155 rpm. Silkworm rearing Silkworm eggs were purchased from Ehime-Sanshu Co., Ltd. (Ehime, Japan), disinfected, and reared at 27°C. The worms were fed an antibiotic-containing artificial diet, Silkmate 2S (Nihon Nosan Corp., Japan), until the fifth-instar stage as previously described [38]. After the larvae reached the fifth instar, they were fed an antibiotic-free artificial diet (Sysmex, Japan) and used for infection experiments the following day. Silkworm infection experiments For infection experiments, fifth-instar day-2 silkworm larvae were used. Bacterial strains were revived from glycerol stock by streaking them on BHI agar plates and incubating overnight at 37°C. A single colony from the overnight growth was inoculated and cultured overnight in 5 ml BHI medium with shaking at 155 rpm. The culture was diluted 100-fold in BHI medium and grown until the OD600 reached 0.5. The cells were diluted with physiological saline solution (0.9% NaCl), where applicable, and 50 µl of bacterial suspension was injected into the hemolymph of each larva using a 1-ml syringe equipped with a 27-gauge needle (Terumo Medical Corporation, Japan). The infected worms were incubated at 27°C, and their survival was recorded. Although silkworms survive at 37°C, they are more sensitive toward infection at this temperature [42,43]. Therefore, for highly pathogenic microorganisms, we routinely incubate silkworm larvae at 27°C post-infection. Silkworms were considered dead if they did not move when poked with forceps. Antimicrobial susceptibility test Antibiotics were obtained from either Fujifilm Wako, Japan, or Sigma Aldrich, Japan. The antimicrobial susceptibility test was performed by broth micro-dilution assay according to the Clinical and Laboratory Standards Institute (CLSI) protocol, as explained previously [38]. The plate was incubated at 37°C for 20 h, and the minimum inhibitory concentration (MIC) of each antibiotic was determined as the minimum concentration that inhibited the growth of bacteria.
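Reading a MIC off such a micro-dilution series is a simple thresholding step; a minimal sketch (with made-up well readings and an assumed OD600 growth threshold, neither taken from the study) looks like this:

```python
# Minimal sketch: MIC determination from a two-fold broth micro-dilution
# series. The MIC is the lowest concentration whose well shows no visible
# growth after incubation. Readings and threshold are illustrative only.
concentrations = [64, 32, 16, 8, 4, 2, 1, 0.5]            # µg/ml
od600 = [0.04, 0.05, 0.04, 0.06, 0.35, 0.61, 0.78, 0.80]  # mock readings

GROWTH_THRESHOLD = 0.1  # OD600 above this is scored as visible growth

inhibited = [c for c, od in zip(concentrations, od600) if od < GROWTH_THRESHOLD]
mic = min(inhibited) if inhibited else None
print(f"MIC = {mic} µg/ml")   # -> MIC = 8 µg/ml for these mock readings
```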
Treatment of infection by antibiotics in silkworms To evaluate the therapeutic activities of clinically used antibiotics in infected silkworms, exponentially growing bacteria (~5 × 10² CFU) were injected into the hemolymph of each larva. Different concentrations of antibiotics were prepared in saline and injected into the larval hemolymph within 30 min of infection. For the survival assay, 1 mg/kg each of doxycycline and ampicillin was injected into the larvae. To determine the effective doses that cure 50% of larvae (ED50), various concentrations of doxycycline and ampicillin were prepared in saline and injected into the hemolymph of infected larvae (n = 3 for each dose). The survival of larvae was recorded, and the ED50 was calculated from the survival at 16 h post-infection by logistic regression analysis using the logit link function. To determine the microbial burden, larvae infected with B. anthracis (6 × 10² CFU/larva) were injected with doxycycline and ampicillin (1 mg/kg; n = 10) within 30 min of infection; hemolymph was collected 6 h and 9 h post-injection, diluted with saline, and spread on Luria-Bertani agar plates, followed by overnight incubation at 37°C. Colonies were counted the next day. Fluorescence imaging B. anthracis BYF10124 was injected into silkworm hemolymph, and the silkworms were kept at 27°C. At 3 h and 6 h post-infection, hemolymph from the infected silkworms was collected, placed on a glass slide, and covered with a coverslip. Fluorescence images of the samples were collected using an inverted Zeiss LSM 780 confocal microscope equipped with an EM-CCD camera (Zeiss Research Microscopy Solutions, Germany) under a 40× objective lens. To determine the effect of the antibiotic, 1 mg/kg ampicillin was injected into the infected larvae, and hemolymph was observed under the microscope. Assessment of virulence in silkworms The virulence of the bacterial strains in silkworms was tested by injecting exponentially growing B. anthracis 34F2 and mutants with disruptions in virulence genes (~5 × 10² CFU) into the hemolymph of each larva. For survival, larvae were observed at different time intervals post-infection. To determine the microbial burden, hemolymph of the infected larvae was collected at 3 h and 6 h post-infection and diluted in saline, and colony-forming units were determined. For LD50 determination, exponentially growing cells were serially diluted and injected into the larval hemolymph (n = 3/group). The survival of larvae was recorded, and the LD50 was calculated from the survival at 16 h post-infection by logistic regression analysis using the logit link function. Silkworms are killed by Bacillus anthracis infection To establish a B. anthracis silkworm infection model, we injected different cell numbers of B. anthracis Sterne strain 34F2 into silkworm larval hemolymph and observed survival. We found that B. anthracis killed the silkworms in a dose-dependent manner (Figure 1(a)). We determined the lethal dose that killed 50% of the worms (LD50) at 16 h post-infection to be 8.3 × 10² colony-forming units (CFU) per larva (Figure 1(b)). At 19 h post-infection, when all the silkworms infected with 8.1 × 10² CFU of B. anthracis had died (Figure 1(c)), saline-injected silkworms were still alive (Figure 1(d)), indicating that the fatality was brought about by B. anthracis infection.
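The LD50 and ED50 estimates above come from logistic regression with a logit link, as stated in the methods. The sketch below fits such a curve on log10-transformed dose; the dose-survival values are illustrative placeholders, not the study's raw data, and scipy's curve_fit is used here rather than whatever statistical package the authors actually employed.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative dose-survival data: injected dose (CFU/larva) and the
# fraction of larvae dead at 16 h post-infection.
dose = np.array([1e1, 1e2, 5e2, 1e3, 5e3, 1e4])
dead_fraction = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])

def logistic(log_dose, log_ld50, slope):
    """Two-parameter logistic (logit link) on log10-transformed dose."""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

(log_ld50, slope), _ = curve_fit(logistic, np.log10(dose), dead_fraction,
                                 p0=[3.0, 1.0])
print(f"Estimated LD50: {10 ** log_ld50:.1f} CFU/larva")
# An ED50 is estimated the same way, with antibiotic dose (mg/kg) as the
# predictor and the fraction of larvae cured as the response.
```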
Death of the larvae was accompanied by a change in skin color, first to pale and then to black due to melanization, which served as a secondary indicator of death in addition to the lack of motion upon prodding. To further confirm that the observed killing of silkworms was due to B. anthracis infection, we heat-killed the bacteria by autoclaving and injected them into the silkworms. We found that injection of heat-killed bacteria equivalent to 1.5 × 10⁶ CFU did not kill the worms, while 2.6 × 10³ CFU of live bacteria killed the worms within 16 h post-infection (Figure 2(a)). We further constructed a fluorescent protein AmCyan1-expressing B. anthracis strain, whose silkworm-killing ability was similar to that of the wild type (Figure 2(b)). We confirmed fluorescent protein expression by observing the cells under a fluorescence microscope (Figure 2(c)). We then infected silkworms with the bacteria, recovered hemolymph at 3 h and 6 h post-infection, and confirmed the fluorescence of the bacteria in the hemolymph. While most of the bacteria were engulfed by hemocytes at 3 h post-infection (Figure 2(d)), increased bacterial growth outside the hemocytes was observed at 6 h post-infection (Figure 2(e)), indicating the progression of bacterial proliferation within the host and the establishment of infection by the bacteria. Infection is cured by antibiotic treatment After confirming that B. anthracis establishes infection in silkworms and kills them, we evaluated the therapeutic effectiveness of clinically used antibiotics against B. anthracis infection. First, we determined the in vitro antimicrobial susceptibility of B. anthracis toward a range of antibiotics. Consistent with reported studies [44-46], we found that it was susceptible to most of the antibiotics tested and resistant to bacitracin and fosfomycin (Table 2). Next, we selected two antibiotics that are commonly used for the treatment of anthrax, doxycycline and ampicillin, and injected them into the hemolymph of silkworms infected with B. anthracis. We found that both antibiotics cured the silkworms and prevented their death (Figure 3(a)). We further determined the dose-response of doxycycline and ampicillin and calculated the effective doses that cure 50% of the worms (ED50) at 16 h post-infection to be 0.05 mg/kg and 0.02 mg/kg, respectively (Figure 3(b-c)). We next determined the bacterial burden in the silkworm hemolymph at different intervals post-infection and found that the number of viable cells decreased with time in the antibiotic-treated groups (Figure 3(d)). In addition, we checked fluorescence after administering ampicillin to BYF10124-infected larvae. We found that at 3 h post-infection, most of the hemocytes were colonized with bacteria (Figure 4(a)), whereas upon ampicillin treatment only a few hemocytes were colonized with bacteria and the overall presence of bacteria was decreased (Figure 4(b)). At 6 h post-infection, bacteria started proliferating outside the hemocytes (Figure 4(c)), while the ampicillin-treated group had fewer bacteria engulfed in hemocytes and no bacterial growth outside the hemocytes (Figure 4(d)). Silkworm as a model to assess virulence of B. anthracis With the silkworm infection model of B. anthracis established as shown above, we next used silkworms to evaluate the virulence of B. anthracis mutants. The toxin-related genes are known to have roles in B. anthracis virulence [47,48]. To test whether these toxins also act on silkworms, we used mutants with disruptions in pagA, lef, and atxA [49].
Located within a pathogenicity island on pXO1 [11], the pagA, lef, and atxA genes code for the protective antigen, the lethal factor, and the global virulence regulator AtxA, respectively. AtxA is reported to directly and indirectly regulate the transcription of several genes, including the pagA and lef genes [49,50]. We found that these mutants were less virulent in silkworms, as they took a longer time to kill the larvae (Figure 5(a)) and had attenuated colonizing ability (Figure 5(b)). To demonstrate the use of silkworms in the quantitative determination of virulence, we compared the LD50 values of the wild type and the mutants. We found that the LD50 values of the mutants were higher, suggesting reduced virulence (Figure 5(c)). Taken together, it was evident that the disruption of virulence-related genes decreases the virulence of B. anthracis in silkworms and that the silkworm infection model can be used for the quantitative evaluation of B. anthracis virulence. Discussion In this study, we established a silkworm model of B. anthracis infection and evaluated the therapeutic effects of clinically used antibiotics in silkworms infected with B. anthracis. Moreover, we generated a B. anthracis strain expressing AmCyan1, which was useful in evaluating the proliferation of bacteria inside the host over time. While silkworm infection models of human pathogens have been reported [30,34,51-53], this is the first report of a silkworm infection model of B. anthracis. We found that B. anthracis Sterne killed silkworms with an LD50 of 8.3 × 10² CFU, which is comparable with mouse models, where the LD50 ranged from 1.6 × 10² to 1.1 × 10³ CFU [54,55]. Live bacteria were required for silkworm killing, which was evident from the fact that heat-killed bacteria (10⁶ CFU) could not kill the larvae. B. anthracis established infection within silkworms, as bacterial proliferation increased over time, observed both from the increased CFU in larval hemolymph and from the increased numbers of fluorescent bacteria seen under the microscope in hemolymph harvested at various intervals post-infection. When we administered clinically used antibiotics to B. anthracis-infected silkworms in this study, we observed therapeutic activity, as the antibiotics prolonged the survival of infected silkworms. CFU recovered from the hemolymph of treated silkworms showed faster clearance of bacteria in the ampicillin-treated group than in the doxycycline-treated group. As ampicillin is a bactericidal antibiotic, bacteria are killed directly in addition to being cleared by host immunity, which may have led to faster overall clearance; doxycycline, being a bacteriostatic antibiotic, inhibited bacterial growth, so overall clearance may have depended upon host immunity and taken a longer time. We further demonstrated, using fluorescent protein-expressing B. anthracis, that antibiotic treatment reduces the bacterial burden in the hemolymph. The therapeutic effects of known antibiotics in the silkworm infection model imply that the therapeutic effectiveness of unknown compounds can be evaluated using this system, selecting for compounds with therapeutic activity and appropriate pharmacokinetics at an early stage of screening [37,38,40]. An additional advantage of using silkworms is that a small quantity of a compound is enough to evaluate therapeutic effectiveness. We found that B. anthracis kills silkworms, and the lack of toxin genes makes B.
anthracis less virulent to the worms when injected into the hemolymph. Among other invertebrates, B. anthracis can infect G. mellonella [25], but not C. elegans [56]. Furthermore, the ability of blood-feeding insects to act as vectors of B. anthracis [57] suggests differences in invertebrate responses to B. anthracis administered orally versus into the hemolymph. Besides, invertebrates lack the anthrax toxin receptor [58] required for B. anthracis infection in mammals [59]. The detailed investigation of the mechanism of anthrax toxin-mediated toxicity in invertebrates will be a subject of future studies. Although silkworms do not have acquired immunity, innate immunity in silkworms is partly conserved with mammals, and several signaling cascades, such as the mitogen-activated protein kinase (MAPK) pathways, are activated in silkworms by bacterial components, resulting in antimicrobial peptide production [60]. Thus, silkworms could differentiate the virulence of the mutant deficient in the lethal factor of B. anthracis, which acts via the MAPK pathway in mammals [61]. Among invertebrates, it has been shown that lethal factor cleaves a D. melanogaster MAPK kinase [58]; however, its effect on silkworm MAPK kinases remains unknown. Given that innate immunity is the first line of defense in all organisms [62], silkworms can be used to determine virulence factors that trigger innate immunity rather than acquired immunity. Accordingly, evaluation of virulence factors of pathogenic microorganisms has been successfully performed using the silkworm model [28,30,63-65]. The findings of this study, showing attenuated virulence of B. anthracis strains with disruptions in known virulence genes, suggest that silkworms can be used to evaluate the roles of unknown genes in the virulence of B. anthracis. Furthermore, since the Sterne strain lacks pXO2 and is less virulent to higher animals [66], the silkworm model of B. anthracis Sterne infection has the additional advantage of allowing the identification of virulence factors encoded by genes either on the chromosome or on pXO1 that might be masked in a highly virulent strain containing both pXO1 and pXO2. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
Investigating the initial effect of COVID-19 on the functioning of outpatient diagnostic imaging facilities Introduction As a result of the COVID-19 pandemic, outpatient diagnostic imaging (DI) facilities experienced decreased operations and even unprecedented closures. The purpose of this study was to examine the impact of COVID-19 on the practices of DI clinics, and investigate the reasons for the change in their operations during the initial period of the pandemic starting in mid-March 2020. Materials and methods A questionnaire was created and distributed to the managers of eighteen outpatient DI clinics in London, Hamilton, and Halton, Ontario, Canada. The managers indicated whether their clinics had closed or decreased operations, the reasons for closure, and the types of imaging examinations conducted in the initial period of the COVID-19 pandemic. Results Fifty percent of the DI clinics surveyed (9/18) closed as a result of COVID-19, and those that remained open had decreased hours of operation. The clinics that closed indicated decreased referrals as the primary reason for closure, followed by staff shortage, concerns for safety, and suspension of elective imaging. Chest radiography and obstetric ultrasound were the most commonly conducted examinations. Clinics that were in close geographical proximity were able to redistribute imaging examinations amongst themselves. All DI clinics had suspended BMD examinations and elective breast screening, and some transitioned to booked appointments only. Conclusion Many DI clinics needed to close or decrease operations as a result of COVID-19, a phenomenon that is unprecedented in radiological practice. The results of this study can assist outpatient DI clinics in preparing for subsequent waves of COVID-19, future pandemics, and other periods of crisis.
Contributors: All authors contributed to the conception or design of the work, and the acquisition, analysis, or interpretation of the data. All authors were involved in drafting and commenting on the paper and have approved the final version. Funding: This study did not receive any funding from agencies in the public, commercial, or not-for-profit sectors. Competing interests: All authors declare no financial relationships with any organizations that might have an interest in the submitted work in the previous three years, and no other relationships or activities that could appear to have influenced the submitted work. Ethical approval: The study was reviewed by the university research ethics board in May 2020 with oversight waived for this project. Introduction The COVID-19 pandemic, which began in mid-March 2020, impacted the operations of many healthcare institutions. Decreased patient volumes were reported across ambulatory practices [1], many non-urgent consultations were rescheduled [2-5], and numerous physician practices transitioned to conducting online telemedicine consultations to prevent the spread of COVID-19 [6]. Patient imaging was also impacted, as there was a significant decrease in imaging volumes due to factors such as governmental quarantine orders, rescheduling of elective imaging, and patient hesitancy in visiting healthcare settings due to fear of exposure to COVID-19 [3-5,7]. Naiditch et al. examined the effect of the pandemic on various imaging modalities and found that the greatest decline in imaging volume was for mammography examinations (a 94% decrease) and the smallest was for radiography imaging (22%) [3]. Outpatient diagnostic imaging (DI) settings were particularly affected by the pandemic in comparison to other imaging locations, experiencing as much as an approximately 88% decrease in imaging volumes relative to 2019 [3]. As examined in this study, many outpatient DI clinics decreased operations or closed down entirely as a result of the pandemic. While previous literature examines the decreased patient volumes and operations of hospitals during prior disease outbreaks, such as SARS-CoV-1 [8-16], there is no recorded instance of DI clinics closing during prior outbreaks, making this a potentially unprecedented phenomenon. Thus, the objective of this study was to investigate the initial impact of the COVID-19 pandemic, starting in mid-March 2020, on the functioning of outpatient DI clinics by examining the reasons for the change in their operations, including closures, and to gain insight into their practice during the initial period of the outbreak. Materials and Methods Eighteen public outpatient DI clinics in the metropolitan areas of London, Hamilton, and the Halton region in Ontario, Canada were examined.
Five of the imaging clinics surveyed were in London, and 13 were in the Hamilton and Halton areas. DI clinics in the London area were associated with one imaging company (these clinics are henceforth referred to as Group A), while clinics in the Halton and Hamilton areas were associated with another company (these clinics are henceforth referred to as Group B). The surveyed DI clinics performed radiography, ultrasound (US), mammography, and Bone Mineral Density (BMD) imaging examinations, which is standard practice for public outpatient DI clinics in Canada. The study was reviewed by the university research ethics board in May 2020 with oversight waived for this project. A questionnaire was created and sent to DI clinic managers in the London, Halton, and Hamilton areas in May 2020 (Fig. 1). The respondents were required to indicate:

- Whether a DI clinic was currently open in May 2020
- Whether the clinic had decreased its hours of operation or shut down entirely since mid-March 2020
- If a clinic had closed, which factors influenced the decision to close it (staff shortage, decreased number of referrals, concerns regarding a safe working environment, PPE shortage, or other reasons)
- Which imaging modality was being used most frequently for cases in the clinic from mid-March to the end of April (X-ray, US, or BMD)
- The most common case being imaged in the clinic from mid-March to the end of April

The results of the questionnaires were analyzed to examine the relations between the clinics that had closed or remained open and their operational hours, reasons for closure, and the imaging investigations conducted. Additionally, the locations of the clinics were identified and analyzed. Results The results of the completed questionnaires for the 18 DI clinics are summarized in Table 1. In Group A, the one DI clinic that had closed indicated that the closure was due to a decreased number of referrals and the suspension of elective breast screening (OBSP in Ontario) and BMD examinations. The indicated causes of closure for all 8 DI clinics that closed in Group B were decreased referrals, concerns regarding a safe working environment, and staff shortage; issues with childcare were indicated as a contributing factor to the staff shortage. Thus, all 9 DI clinics in Groups A and B that had closed indicated a decreased number of referrals as a reason for closure. The clinic manager for Group A indicated that the 4 DI clinics which remained operational in Group A had redistributed their workload. One clinic suspended radiographic imaging and performed only ultrasound (US) examinations, with obstetric examinations being the most common. The other three DI clinics in Group A performed primarily radiographic examinations and minimal US examinations, with chest radiography being the most common examination in these clinics. All 5 clinics which remained operational in Group B performed both radiographic and US examinations. US was the most frequently used imaging modality in these clinics, with obstetric US being the most common examination. Clinics in Group B also suspended elective breast screening and BMD examinations.
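A minimal sketch of how the questionnaire responses on closure reasons could be tallied, assuming each closed clinic's responses are stored as a list of reasons; the three example records are illustrative, not the study's raw questionnaire data.

```python
from collections import Counter

# Illustrative records: reasons for closure reported by each closed clinic.
closure_reasons = [
    ["decreased referrals", "suspension of elective imaging"],
    ["decreased referrals", "staff shortage", "concerns for safety"],
    ["decreased referrals", "staff shortage", "concerns for safety"],
]

tally = Counter(reason for clinic in closure_reasons for reason in clinic)
for reason, n in tally.most_common():
    print(f"{reason}: {n}/{len(closure_reasons)} closed clinics")
```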
Discussion While there are reports of reduced patient volumes for radiological imaging at hospitals during the SARS-CoV-1 outbreak [17,18], there is limited literature on the effect of prior widespread diseases on outpatient DI facilities. During the COVID-19 pandemic, decreases in patient imaging volumes and the rescheduling of elective imaging in outpatient settings were reported [3,5,7]; however, at the time of the planning and execution of this study, there was no available literature detailing the unprecedented closure of outpatient DI clinics as a result of COVID-19. After our manuscript was submitted for publication and was in the process of acceptance, Lee et al. mentioned the closure of DI facilities and the redistribution of workflow between outpatient clinics and hospitals as a result of COVID-19 [19]. The initial effect of COVID-19 on the functioning of outpatient DI clinics was assessed in detail in our present study. The COVID-19 pandemic was declared in mid-March 2020, and half of the imaging clinics surveyed in this study subsequently ceased operations. All 9 clinics that closed indicated a decreased number of referrals as a reason for closure. This correlates with reports of decreased patient volumes for imaging examinations during the COVID-19 pandemic [2-5,7], as imaging is not the standard screening or diagnostic tool for COVID-19 [20] and many elective imaging examinations had been postponed [2-5]. Eight of the clinics that had closed also indicated staff shortage as a reason for closure, citing issues with childcare as a contributing factor to the shortage. This was likely due to the closure of schools and child-care centres as a result of the provincial Ontario shutdown, forcing parents to take time off work and stay at home to care for their children. The same eight clinics additionally indicated concerns for safety as a reason for closure. While the exact concerns were not specified, it can be hypothesized that initially limited experience dealing with potential COVID-19 patients was among the contributing factors, as many institutions (healthcare and otherwise) were required to rapidly change their methods of operation with little preparation as a result of the pandemic. Interestingly, none of the clinics surveyed indicated a shortage of PPE as a reason for closure, despite the fact that many medical institutions were experiencing severe disruptions in PPE supply at the time [21]. It may be that closing some DI clinics allowed the managing companies to redistribute PPE resources to the clinics that remained operational. Additionally, the closure of some clinics possibly allowed for the concentration of the remaining available staff resources in the clinics that had remained open. All the imaging clinics that remained open had decreased hours of operation. Open clinics in Group A all transitioned to booked appointments and cancelled walk-ins. The reduced clinic hours, combined with the increased time required for safety precautions such as disinfection between patient encounters [20,22], suggest that even clinics which had remained operational faced decreased referrals as compared to the pre-pandemic period. In the clinics which had remained operational, the most common examinations were chest radiography and obstetric ultrasound. The prevalence of chest radiography studies correlates with reports that radiography examinations experienced the smallest decrease in patient imaging volume during COVID-19 [3], and this may be due to several reasons. The first is that chest radiography is one of the most commonly conducted examinations in regular DI clinic practice [23], and it is possible that this remained the case during the pandemic.
The second reason may be that many patients and referring physicians were concerned about COVID-19-related findings and wished to investigate them. The prevalence of obstetric US cases can likely be explained by the fact that, for the patients and referring physicians, the importance of tracking the course of the pregnancy and its outcome outweighed the risk of the patient contracting COVID-19. In Group A, most US examinations (primarily obstetric US) were conducted in one location, while all the other locations focused on conducting X-ray examinations. Upon examination of the distances between the clinics, it may be suggested that the close geographical proximity between the DI clinics allowed them to effectively redistribute referred cases (in Group A, the distance between most clinics was approximately 5-9 km). The study is limited in that it only assessed clinics in a limited geographical area, and it is possible that investigating the operations of DI clinics over a greater area (i.e., the whole province of Ontario) would have provided different statistical results. The study also did not investigate the exact dates when DI clinics reopened following the start of the pandemic. Investigating the aforementioned aspects would have been beyond the scope of the study, which was intended to specifically assess the initial impact of the pandemic on the general everyday functioning of DI clinics and the possible reasons for their closure. Finally, the study relied on self-reported data from clinic managers, and the results may have been affected by the managers' ability to recall information; however, this is unlikely, as the data on the operations of the clinics were collected at a time very close to the period being investigated (within weeks). Conclusion The COVID-19 pandemic in March 2020 had an unprecedented impact on outpatient DI clinics. Multiple DI clinics examined in the London, Halton, and Hamilton areas of Ontario, Canada had closed as a result of COVID-19, citing decreased referrals as the primary cause, followed by staff shortage, concerns for safety, and suspension of elective imaging. All the clinics that remained open had decreased hours of operation, and some transitioned solely to booked appointments. Some of the clinics that remained open were able to redistribute their workload amongst themselves; this was likely assisted by their close geographic proximity to each other. Chest radiography and obstetric US constituted the most frequently imaged cases in the DI clinics. Ultimately, the results of this study provide a greater understanding of the impact of the COVID-19 pandemic on diagnostic imaging practices and may assist outpatient DI clinics in preparing for potential subsequent waves of COVID-19, future pandemics, and other periods of crisis.
Athletes With Musculoskeletal Injuries Identified at the NFL Scouting Combine and Prediction of Outcomes in the NFL: A Systematic Review Background: Prior to the annual National Football League (NFL) Draft, the top college football prospects are evaluated by medical personnel from each team at the NFL Scouting Combine. On the basis of these evaluations, each athlete is assigned an orthopaedic grade from the medical staff of each club, which aims to predict the impact of an athlete's injury history on his ability to participate in the NFL. Purpose: (1) To identify clinical predictors of signs, symptoms, and subsequent professional participation associated with football-related injuries identified at the NFL Combine and (2) to assess the methodological quality of the evidence currently published. Study Design: Systematic review; Level of evidence, 3. Methods: A systematic review was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. We reviewed all studies that examined musculoskeletal injuries identified among athletes at the NFL Combine and associated outcomes. Data on signs, symptoms, and subsequent NFL participation were collected, and the methodological quality of the studies was assessed. Results: Overall, 32 studies, including 30 injury-specific studies, met the inclusion criteria. Twenty studies analyzed data collected at the NFL Combine from 2009 and later. When compared with matched controls, athletes with a history of a cervical or lumbar spine injury, rotator cuff repair, superior labrum anterior-posterior repair, anterior cruciate ligament reconstruction, full-thickness chondral lesions of the knee, or Lisfranc injury played in significantly fewer games early in their NFL careers. Additionally, athletes with a history of a cervical or lumbar spine injury, rotator cuff repair, or navicular injury had decreased career lengths versus controls. Defensive players and linemen were found to have decreased participation in the NFL for several injuries, including prior meniscectomy, anterior cruciate ligament reconstruction, and shoulder instability. Career length follow-up, measures of athletic participation, and matching criteria were highly variable among studies. Conclusion: For medical professionals caring for professional football athletes, this information can help guide orthopaedic grading of prospects at the NFL Combine and counseling of athletes on the potential impact of prior injuries on their professional careers. For future studies, improvements in study methodology will provide greater insight into the efficacy of current treatments and areas that require further understanding. data have been published [1,4-6]. In recent years, Provencher and colleagues have published several studies with NFL Combine data collected from 2009 to 2015, analyzing the association between specific prior injuries and outcomes in the NFL (draft position, games played, and games started during the first 2 NFL seasons) [1,8,9,22,25,28,31,32,38]. These studies enable team management, scouts, coaches, physicians, and athletic trainers to better understand the impact of a given injury on a player's participation in the NFL. More important, even beyond the NFL, such information may (1) help athletes and medical professionals better understand the ability to return to sport at a high level, (2) guide treatment options, and (3) set appropriate expectations for both parties [6].
The purpose of this systematic review was to critically evaluate the available literature on clinical predictors of outcomes relevant to musculoskeletal injuries reported or diagnosed at the NFL Scouting Combine. Specifically, we sought to (1) identify clinical predictors of signs, symptoms, and subsequent professional participation associated with football-related injuries identified at the NFL Combine and (2) assess the methodological quality of the currently published evidence. METHODS A systematic review was conducted in July 2018 according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [20]. PubMed (Medline), Embase, and the Cochrane Library were searched with the terms "National Football League," "NFL," "combine," "injury," and "surgery." The search was limited to English-language articles. Titles and abstracts from these searches were independently reviewed by 2 authors (D.W., S.A.T.), and full-text articles meeting the inclusion criteria were then obtained and reviewed. Additionally, the references of all included full-text articles were scanned for further eligible studies. Inclusion criteria were established before data collection. Studies were included if they reported on musculoskeletal injuries identified among athletes at the NFL Combine and their association with clinical signs and symptoms and/or future participation in the NFL, including games played and career length. Studies were excluded if they (1) were case reports of 1 or only a few participants; (2) were epidemiologic studies that reported just the prevalence of specific injuries at the NFL Combine and did not evaluate for associations between injury and clinical signs, symptoms, or outcomes in the NFL; or (3) examined football-related injuries occurring after the NFL Combine. After elimination of duplicate articles among databases and screening of abstracts for relevance, 32 studies were analyzed (Figure 1). Thirty studies examined a cohort of athletes with a specific diagnosis or injury. All studies were retrospective, with the exception of 2 studies that prospectively collected data at the NFL Combine from a single year [2,12]. Two authors (D.W., M.A.) extracted the data, which were then reviewed by the coauthors. Any disagreements in data were resolved by consensus or by arbitration of a third author (L.J.W.). The tabulated data included the injury or surgery, combine years studied, number of injuries and athletes, and level of evidence. Outcomes collected included draft status, game participation data, NFL career length, and clinical assessments derived from physical examination, imaging, and functional measures related to the specific injury. The level of evidence of the selected studies was determined according to the criteria established by the Oxford Centre for Evidence-Based Medicine [39]. As no randomized clinical trials were identified among the included studies, the MINORS criteria (methodological index for nonrandomized studies) were used to assess the methodological quality of the studies. The MINORS instrument comprises 8 criteria for assessing the methodological quality of noncomparative studies and 4 additional criteria for assessing the methodological quality of comparative studies. Each criterion is scored 0 (not reported), 1 (reported but inadequate), or 2 (reported and adequate), with the global ideal score being 16 for noncomparative studies and 24 for comparative studies.
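The scoring scheme just described translates directly into a small routine. The sketch below sums per-item ratings and reports the ideal global score; the ratings are illustrative, and the items are abstracted to counts rather than the official MINORS wording.

```python
# Sketch of MINORS scoring: 8 items for noncomparative studies plus 4
# additional items for comparative studies, each rated 0 (not reported),
# 1 (reported but inadequate), or 2 (reported and adequate).
NONCOMPARATIVE_ITEMS = 8
COMPARATIVE_EXTRA_ITEMS = 4

def minors_score(ratings, comparative):
    """Return (total score, ideal global score) for one study."""
    n_items = NONCOMPARATIVE_ITEMS + (COMPARATIVE_EXTRA_ITEMS if comparative else 0)
    if len(ratings) != n_items or any(r not in (0, 1, 2) for r in ratings):
        raise ValueError("invalid ratings")
    return sum(ratings), 2 * n_items

score, ideal = minors_score([2, 2, 1, 2, 0, 2, 1, 2, 2, 1, 2, 0], comparative=True)
print(f"MINORS score: {score}/{ideal}")  # -> MINORS score: 17/24
```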
Data by Injury Of the 32 studies, 30 were injury specific. There were 2 studies on cervical spine injuries [30]. Cervical and Lumbar Spine Athletes with a cervical spine diagnosis (including spondylosis, stenosis, sprain/strain, herniated disc, and spine spasms) were less likely to be drafted, played fewer games (Table 3), and had decreased NFL career lengths (Table 4) as compared with controls. Those with a history of multiple stinger episodes were noted on MRI to have a lower mean subaxial cervical space available for the cord, with 5.0 mm reported as the critical value. Of note, players with a cervical sagittal canal diameter <10 mm did not have any significant differences in games played or performance score compared with matched controls, and no neurological injury occurred during their careers [35]. Athletes with a history of a lumbar spine diagnosis (including degenerative spondylosis, herniated disc, spondylolysis, and strain) were less likely to be drafted and had a decreased number of years played, games played, and games started. Radiographic evidence of hyperconcavity of the lumbar vertebral end plates (disk space expansion) in linemen was not associated with a difference in outcomes.
Although the prevalence of camor combined-type femoroacetabular impingement and osteitis pubis was higher among symptomatic athletes, an increased alpha angle was the only independent predictor of athletic-related groin pain. Knee When compared with controls, athletes who had undergone anterior cruciate ligament (ACL) reconstruction were more likely to be picked lower in the draft, and they played and started fewer games in their first 2 NFL seasons. Chondral injuries of the knee were noted in 4.4% of athletes at the NFL Combine who had knee MRI because they reported prior injury or reported knee pain but had no known history of surgery; the patellofemoral joint was the most affected compartment. Athletes with chondral injuries, in the setting of no prior knee surgery or prior meniscectomy, played and started in fewer games versus controls. Specifically, subchondral bone edema and full-thickness chondral lesions were associated with fewer games played. Athletes with a history of medial collateral ligament injury or posterolateral corner knee injury did not have any significant differences in draft status, games played, or games started as opposed to respective controls. Foot A history of proximal fifth metatarsal fractures, including Jones fractures, was not associated with a difference in draft likelihood, games played, or games started, as compared with controls. In contrast, a history of Lisfranc or navicular injury was associated with worse draft position and fewer games played and started during the first 2 NFL seasons. In addition, a prior navicular injury was associated with significantly decreased probability of playing 2 years in the NFL. Data by Position Two level 3 studies specifically examined injuries identified at the NFL Combine and their impact on NFL participation by player position. 1,6 Based on the MINORS criteria, the mean score for methodological quality of these studies was 17 (range, 16-18) out of a possible 24 points. NFL participation data by athlete position are summarized in Table 5. Game participation appears to be affected by injuries most in offensive and defensive linemen and defensive backs. Of note, spondylolisthesis was not significantly associated with a reduced percentage of athletes playing in the league or a shorter career length at any position. DISCUSSION When compared with matched controls, athletes with a history of a cervical or lumbar spine injury, rotator cuff repair, SLAP repair, ACL reconstruction, full-thickness chondral lesions of the knee, or Lisfranc injury played in significantly fewer games early in their NFL careers. Additionally, athletes with a history of a cervical or lumbar 14 Athletic pubalgia repair a Bolded P values indicate statistically significant difference between groups (P < .05). NFL, National Football League; SLAP, superior labrum anterior-posterior. spine injury, rotator cuff repair, or navicular injury had decreased career length versus controls. The potential impact of these injuries seems to vary by player position as well, with defensive players and offensive and defensive linemen having decreased participation in the NFL for several injuries, including prior meniscectomy, ACL reconstruction, and shoulder instability ( Figure 2). Nevertheless, the available literature remains highly variable with regard to length of follow-up, matching criteria, measures of participation outcomes, and overall methodological quality. 
Using NFL Combine data collected by 1 team from 1987 to 2000, Brophy et al [5] examined the correlation between orthopaedic grade and career longevity in the NFL. Players with a high grade (no injury, minor injury, or successful surgical intervention) had a mean career of 42 games, as opposed to 34 games for players with a low grade (incomplete recovery and/or injury likely to recur) and 19 games for players with a failed grade. Thus, assigning orthopaedic grades to college football prospects based on their injury history has historically been a useful practice for predicting career longevity in the NFL. Of note, we found an increasing trend in the likelihood of playing in the NFL for players treated with ACL reconstruction or shoulder stabilization over the study period, likely reflecting the improved understanding of these injuries and advancements in surgical technique and rehabilitation. As a result, over time, fewer players received failed grades at the combine. Although recent NFL Combine studies have improved a medical professional's ability to predict the impact of a prior injury on a player's professional career, there is a dearth of studies examining athletes with a history of hand, elbow, long bone, and ankle injuries. Although hand and ankle injuries are among the most commonly identified injuries at the NFL Combine [1,4], this review found only 1 study on hand injuries and no studies on ankle injuries. Furthermore, while the lone hand study examined the clinical and radiographic outcomes of scaphoid fracture, it did not assess NFL participation metrics [26]. Moreover, future studies utilizing more rigorous methodology would allow medical professionals to provide more accurate predictions of a prior injury's impact on an athlete's NFL career. Currently available studies on injuries of the cervical spine or lumbar spine classify all spine diagnoses together in their analyses, resulting in heterogeneous cohorts. These aggregated diagnoses, which included stinger, spondylosis, stenosis, spondylolysis, and sprain/strain, are all unique pathologies that have different symptoms and prognoses. Although the studies by Schroeder et al [34,35] found that athletes with a cervical or lumbar spine diagnosis were less likely to be drafted and played in fewer games than controls, diagnoses of strain, scoliosis, and spasms were included in relatively fewer numbers when compared with the more severe diagnoses of spondylosis, spondylolysis, herniated disc, and stenosis. Future studies examining more focused cohorts of spine diagnoses are needed. Additionally, measurements of draft status, games played and started, snap percentage, and game performance metrics are influenced by a multitude of factors (eg, player position, team needs, opponent game plan, depth chart), which can ultimately confound the results. Many currently available studies do not account for these factors. For instance, with regard to player position, drafted quarterbacks often do not play in any games during the first few years of their professional career, owing to their position on the depth chart, whereas kickers often go undrafted but are signed by teams and play during their rookie years. Several studies utilizing a matched control group did not match by player position [14,17,29,30,32,37]. Some players are made inactive on game day despite being healthy and participating in practice. Therefore, measurement of games played or games started may not accurately represent the degree of professional athletic participation.
Metrics such as athlete exposures, which account for practice participation, or days on the "physically unable to perform"/injured reserve lists would better characterize athletic participation. Finally, missed time caused by reinjury to the previously injured anatomic area is more likely to be indicative of the impact of a specific prior injury on participation in the NFL. Other limitations of this qualitative review are related to the level and availability of the evidence reviewed. The majority of the studies reviewed were retrospective and used injury data that were self-reported or derived from scouting, introducing recall bias. Instead of using the NFL Injury Surveillance System, some studies used publicly accessible websites to collect participation and performance data, whose accuracy and completeness cannot be verified. The majority of studies that measured participation or performance analyzed data within only the first 1 or 2 NFL years after the combine [1,8,9,22,25,28,31,32,38]. Analysis of outcomes within the first 4 to 5 years, which is the length of the typical rookie contract, may be more valuable from an administrative perspective. The impact of injuries within an anatomic region may not be mutually exclusive to the same region; for instance, limited hip rotation and femoroacetabular impingement have been linked to risk of ACL injury [2,3]. Finally, there is inherent selection bias in the analyzed studies, since athletes who were invited to the combine likely had successful outcomes after their injuries. These studies did not include athletes who were not invited to the combine but still made it to the professional level. Therefore, these findings cannot necessarily be extrapolated to the average collegiate football athlete, nor can they necessarily be extrapolated to high school or younger athletes, owing to the higher demands that are placed on the musculoskeletal system in the NFL. CONCLUSION NFL prospects with a history of a cervical or lumbar spine injury, rotator cuff repair, SLAP repair, ACL reconstruction, full-thickness chondral lesions of the knee, or Lisfranc injury played in significantly fewer games early in their NFL careers. Game participation was also dependent on player position, with defensive players and offensive and defensive linemen having decreased participation for several injuries. For medical professionals caring for professional football athletes, this information can help guide orthopaedic grading of prospects at the NFL Combine and counseling of athletes on the potential impact of prior injuries on their professional careers. For future studies, improvements in study methodology (including longer career follow-up, more accurate measures of athletic participation, more robust and consistent matching criteria, separate investigation of specific spine diagnoses, and prospective designs) will provide greater insight into the efficacy of current treatments and areas that require further understanding.
Efficient synthesis of limonene production in Yarrowia lipolytica by combinatorial engineering strategies Background Limonene has a variety of applications in the foods, cosmetics, pharmaceuticals, biomaterials, and biofuels industries. In order to meet the growing demand for sustainable production of limonene at industrial scale, it is essential to find an alternative production system to traditional plant extraction. A promising and eco-friendly alternative is the use of microbes as cell factories for the synthesis of limonene. Results In this study, the oleaginous yeast Yarrowia lipolytica was engineered to produce d- and l-limonene. Four target genes, l- or d-LS (limonene synthase), HMG (HMG-CoA reductase), ERG20 (geranyl diphosphate synthase), and NDPS1 (neryl diphosphate synthase), were expressed individually or fused together to find the optimal combination for higher limonene production. The strain expressing HMGR and the fusion protein ERG20-LS was the best limonene producer and was therefore selected for further improvement. By increasing the expression of the target genes and optimizing the initial OD, 29.4 mg/L of l-limonene and 24.8 mg/L of d-limonene were obtained. We also studied whether peroxisomal compartmentalization of the synthesis pathway was beneficial for limonene production. The introduction of d-LS and ERG20 within the peroxisome improved limonene titers over cytosolic expression. Then, the entire MVA pathway was targeted to the peroxisome to improve precursor supply, which increased d-limonene production to 47.8 mg/L. Finally, through the optimization of fermentation conditions, the d-limonene production titer reached 69.3 mg/L. Conclusions In this work, Y. lipolytica was successfully engineered to produce limonene. Our results showed that higher production of limonene was achieved when the synthesis pathway was targeted to the peroxisome, which indicates that this organelle can favor the bioproduction of terpenes in yeasts. This study opens new avenues for the efficient synthesis of valuable monoterpenes in Y. lipolytica. Supplementary Information The online version contains supplementary material available at 10.1186/s13068-024-02535-z. Background Limonene is a well-known monoterpene composed of two isoprene (C5) units. Both optical forms of limonene are present in essential oils derived from various plant species. The main application of limonene has been as a flavor and fragrance ingredient in cosmetics and foods. The flavor characteristics can vary depending on the chirality and source of limonene [1,2]. As well as its traditional use as a flavor, limonene has diverse applications in pharmaceuticals as an anti-microbial and anti-cancer compound, and in the chemical and food industries as a resin and masticatory agent [2-4]. Furthermore, polymers derived from limonene are utilized in various industries as adhesives, sealants, metal coatings, and printing inks [2]. In addition, limonene serves as a precursor for valuable compounds such as perillyl alcohol, menthol, carveol, and α-terpineol, which have significant applications in the foods, cosmetics, pharmaceuticals, biomaterials, and biofuels industries [5]. Therefore, there is a growing demand for sustainable production of limonene at a large scale to meet these diverse industry needs.
Plants have traditionally been the primary source of limonene and other terpenes. However, plant-derived production faces several limitations, including low yield, dependency on seasonal and climatic conditions, high production costs (including downstream processing), and environmental pollution resulting from complex extraction processes [2,6]. Chemical synthesis of limonene also suffers from its own drawbacks, including high energy consumption and environmental damage [1,2,6]. Therefore, the microbial production of limonene by synthetic biology has emerged as a promising alternative in terms of sustainability and economic feasibility. Various strategies have been employed to produce limonene, including overexpressing heterologous or native genes in target pathways, increasing the copy number of limonene synthase genes, and improving tolerance to limonene [4,6,7]. However, despite these efforts, achieving high production of limonene still remains a significant challenge. The selection of a suitable microbial host plays a crucial role in bioproduction. Factors such as the presence of native precursor pathways, tolerance to intermediate and final compounds, and genetic amenability are important considerations in this selection process. Yarrowia lipolytica, a non-conventional yeast, possesses distinctive traits that make it a good host for industrial bioproduction [8-10]. Due to its safety, robustness, efficient genetic modification, and broad range of possible substrates, Y. lipolytica has strengths as a host microorganism for bioproduction [8,10]. In addition, high carbon flux toward acetyl coenzyme A (acetyl-CoA) and NADPH and a hydrophobic microenvironment make Y. lipolytica an organism of choice for terpene or lipid production [7,8,11,12]. In Y. lipolytica, the production of limonene has been achieved by introducing heterologous limonene synthases (LS) from diverse origins. Since the sole expression of LS was insufficient to reach the desired levels of production in many cases, further metabolic engineering strategies have been employed. These include overexpressing genes in the mevalonate (MVA) pathway to boost the limonene precursors and expressing the limonene synthase genes at high copy numbers. To improve acetyl-CoA supply and upregulate the MVA pathway, Arnesen and colleagues overexpressed several native or heterologous genes encoding ACL1 (ATP citrate lyase), ACS (acetyl-CoA synthetase from Salmonella enterica), HMG (3-hydroxy-3-methylglutaryl-CoA reductase), ERG12 (mevalonate kinase), IDI (isopentenyl diphosphate isomerase), and ERG20 (farnesyl diphosphate synthase, mutated), and lowered the expression of SQS (squalene synthase) in Y. lipolytica [11]. The expression of LS from Perilla frutescens in this platform strain resulted in the production of 35.9 mg/L of limonene. In another study, the carbon flux from isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP) was redirected towards neryl diphosphate (NPP) by introducing an NPP synthase (NDPS1 from Solanum lycopersicum) into Y.
Through a combination of strain engineering, involving the overexpression of d-LS, HMG1, and ERG12, and media optimization, including testing different carbon sources and using a dodecane overlay, the limonene production reached 23.56 mg/L. The same group further developed the strain by expressing two copies of d-LS and optimizing the fermentation conditions, resulting in an increase in limonene up to 165.3 mg/L [13]. In efforts to enhance the cost-effectiveness of limonene production, low-cost substrates have been utilized in combination with metabolic engineering strategies in Y. lipolytica. In these studies, limonene production levels of 20.57 mg/L and 91.24 mg/L were achieved from lignocellulosic hydrolysates and waste cooking oils, respectively [14,15].

In this study, we present two distinct strategies for enhancing limonene production in Y. lipolytica. The first strategy involves the fusion of limonene synthase and ERG20m to improve limonene production. The multicassette overexpression of MVA pathway genes coupled with the fusion enzyme was shown to be an effective strategy for increasing limonene production. The second strategy focuses on compartmentalizing limonene production within the peroxisome. This approach aims to minimize competition between native metabolic pathways and limonene synthesis. By implementing the MVA pathway in the peroxisome along with LS, a significant increase in limonene production was shown. In addition, we optimized the culture conditions for our best-performing strain to maximize production.

Selection of MVA genes to boost limonene production
In this study, we expressed two limonene synthases (the d-form from Citrus limon and the l-form from Mentha spicata) in Y. lipolytica after codon optimization. Previous studies have demonstrated that the production of monoterpenes often requires the expression of genes in the MVA pathway, as well as the heterologous limonene synthase gene, to provide the necessary precursor pools (Fig. 1a). Thus, we specifically targeted three genes, HMGR, ERG20, and NDPS1, to improve limonene synthesis. HMGRp (3-hydroxy-3-methylglutaryl-coenzyme A reductase) has been identified as a key enzyme for terpene synthesis [7,15]. In previous studies, truncation of the N-terminal domain of HMGRp led to increased terpene production by providing better soluble expression [16]. In our study, we utilized the truncated version of HMGRp (tHMGRp) to assess its impact on limonene production. ERG20p (geranyl diphosphate synthase) is a bifunctional enzyme responsible for two consecutive reactions that form GPP and FPP. Overexpression of ERG20 often results in only a moderate increase in monoterpene production because the GPP pool remains insufficient. Mutations in ERG20p (ERG20 F88W-N119W, ERG20m in this study) have been found to enhance GPP availability, resulting in higher levels of monoterpenes in S. cerevisiae and Y. lipolytica [11,12,17,18]. In addition, we considered employing an alternative pathway, known as the orthogonal pathway, to bypass the native sterol pathway and improve limonene synthesis. This approach involved the overexpression of NDPS1 from Solanum lycopersicum to synthesize neryl diphosphate (NPP), an alternative precursor [19]. The biosynthesis of monoterpenes derived from NPP has demonstrated an increase in monoterpene production in S. cerevisiae and Y. lipolytica [12,19].
Furthermore, enzyme fusion has been investigated as a strategy to facilitate the conversion of precursors into terpenes [1,6]. Thus, we expressed two fusion enzymes, ERG20m/(d)-LS and ERG20m/(l)-LS, in which ERG20m was fused to the N-terminus of limonene synthase with the linker 'GSGSGSGSGS', to evaluate their potential for improving limonene production.

In the case of l-limonene, we observed that limonene production was detected only in the strain harboring the fusion enzyme ERG20m/(l)-LS and tHMGRp, as shown in Fig. 2. However, combinations of enzymes such as tHMGR and ERG20m with (l)-LS, or tHMGR and NDPS1 with (l)-LS, did not result in limonene production. Regarding d-limonene, the overexpression of tHMGR, ERG20m, and (d)-LS led to a very low level of d-limonene. However, when NDPS1 was overexpressed instead of ERG20m, d-limonene production was not observed. Interestingly, the fusion of ERG20m and (d)-LS (S1172) exhibited a 14.8-fold increase in limonene production compared to individual overexpression (S1188). Further overexpression of NDPS1 (S1175) did not result in an increase in limonene production. The introduction of the orthogonal pathway by overexpressing NDPS1 generally had a negative effect on the production of both l- and d-limonene in this study.

Increase the expression level
To enhance the production of limonene, we conducted multiround integration of the best-producing cassette, tHMGR and the fusion enzyme of ERG20m and LS, as illustrated in Fig. 3a. Three expression cassettes of the selected genes with different selective markers were randomly integrated into the genome of Y. lipolytica. By increasing the number of transformation events with expression cassettes carrying different selective markers, we observed a significant increase in both forms of limonene (l- and d-), by 51.8- and 5.3-fold, respectively (Fig. 3b). To further investigate the potential for increased production and to demonstrate the effect more clearly, we cultivated the strains under conditions with a high glucose concentration (4%) and a high initial OD of 1.0. Under these conditions, multi-cassette integration led to a 15.2- and 16.0-fold increase in l- and d-limonene, respectively (Fig. 3b). The highest production reached 29.4 mg/L of l-limonene from the S2341 strain and 24.8 mg/L of d-limonene from the S2343 strain. These findings highlight the effectiveness of increasing expression levels in enhancing the production of both l- and d-limonene.

Compartmentalization of limonene synthesis
The peroxisome has been identified as a promising organelle for terpene production [20]. This is primarily attributed to its high abundance of acetyl-CoA derived from β-oxidation and its ability to sequester toxic molecules, thereby detoxifying the rest of the cell [21-23]. To localize the expression of two genes, d-LS and ERG20m, in the peroxisome of Y. lipolytica, a peroxisomal targeting sequence (PTS1, MGAGVTEDQFKSKL from ICL1) was added [24]. Individual expression of d-LS and ERG20m in the peroxisome (S2644) yielded the highest limonene production, at 3.4 mg/L, without changes in cell growth (Fig. 4a). This co-expression resulted in a 14.9-fold increase in limonene production compared to the single overexpression of d-LS (S2642). Notably, the expression of the fusion enzyme ERG20m/d-LS in the peroxisome (S2649) led to a 4.8-fold increase in limonene production compared to the single overexpression of d-LS (S2642), but the level of limonene was lower than that observed with the individual expression of ERG20m and d-LS (S2644).
The single-layer membrane of the peroxisome allows the passage of low-molecular-weight compounds, facilitating the utilization of intermediates from the cytosol for limonene production. The overexpression of key enzymes (d-LS and ERG20m) in the peroxisome enabled the production of limonene by utilizing intermediates from the cytosol. To test the availability of a high pool of intermediates in the MVA pathway, several combinations of genes in the MVA pathway were overexpressed in the cytosol along with d-LS and ERG20m in the peroxisome. The production of limonene increased from 0.5 mg/L to 3.0, 4.2, and 11.9 mg/L by overexpressing tHMG + ERG20, tHMG + ERG8 + ERG12, and tHMG + ERG8 + ERG20, respectively. Among the combinations tested in this study, the overexpression of tHMGR, ERG8, and ERG12 in the cytosol (S2948) resulted in the highest increase in limonene production, reaching a level 39.7 times higher than that without boosting of the cytosolic MVA pathway (S2644). The results suggest that the precursors of limonene can be translocated from the cytosol to the peroxisome in Y. lipolytica, as previously observed in S. cerevisiae [21].

To investigate whether a direct precursor supply in the peroxisome can lead to further improved limonene synthesis, the entire MVA pathway was overexpressed in the peroxisome using two different PTSs. PTS2 (GGGSSKL) was utilized to localize the entire MVA pathway in the peroxisome [22]. In addition, both PTS1 and PTS2 were employed for the expression of d-LS to determine which PTS is more effective for limonene production. The expression of the entire MVA pathway in the peroxisome resulted in a significant increase in limonene production compared to the co-expression of d-LS and ERG20m in the peroxisome (S2644). There was a 102.6-fold increase in limonene production with d-LS PTS1 and a 159.3-fold increase with d-LS PTS2 compared to the control. The highest limonene titer reached 47.8 mg/L in strain S3471, which is 8.1 times higher than the best-performing strain achieved by multiround integration in the cytosol (S2343) under this experimental condition.

Fed-batch fermentation
We evaluated the d-limonene production of the best-performing strain (S3471) harboring the peroxisomal pathway in fed-batch cultivation. The fed-batch fermentations were performed using YP medium with an initial glucose concentration of 100 g/L. Glucose was fed to maintain the level around 20 g/L. The fed-batch fermentation resulted in a continuous accumulation of d-limonene that was proportional to the biomass (Fig. 5). The highest titer of d-limonene, 69.3 mg/L, was achieved at 120 h of cultivation, which represents a d-limonene production of 1.81 mg/g DCW. These results demonstrate that peroxisomal engineering and fed-batch cultivation are promising strategies for limonene production in Y. lipolytica.
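As a quick consistency check on the fed-batch figures above, the short Python sketch below derives the biomass and volumetric productivity implied by the reported titer and specific titer. It assumes both reported numbers refer to the same 120 h time point, which the text does not state explicitly.

# Back-of-envelope check of the reported fed-batch numbers.
titer_mg_per_L = 69.3        # final d-limonene titer (from the text)
specific_mg_per_gDCW = 1.81  # d-limonene per gram dry cell weight (from the text)
time_h = 120.0               # cultivation time at the peak titer (from the text)

dcw_g_per_L = titer_mg_per_L / specific_mg_per_gDCW  # implied biomass, ~38.3 g DCW/L
productivity = titer_mg_per_L / time_h               # ~0.58 mg/L/h overall

print(f"Implied biomass: {dcw_g_per_L:.1f} g DCW/L")
print(f"Overall volumetric productivity: {productivity:.2f} mg/L/h")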
Discussion
Limonene boasts a substantial industrial exploitation value, finding applications in the fragrance, pharmaceutical, and food industries [1,25,26]. While traditionally sourced from plants, there is a growing interest in microbial production to circumvent the drawbacks associated with plant-derived extraction. However, the microbial synthesis of limonene has posed challenges, stemming primarily from the low expression of the heterologous enzyme, competition with native metabolic pathways, and the inherent toxicity of limonene to host cells [1]. Previous research endeavors into limonene biosynthesis have explored diverse strategies, including enhancing the MVA pathway, increasing acetyl-CoA availability, and mitigating the toxicity of limonene. Here, we explored the biosynthesis of limonene within the cytosol or the peroxisome of Y. lipolytica as a promising strategy to address these challenges.

For cytosolic production of limonene, we elevated the expression of key enzymes, namely LS, tHMGR, and ERG20m. The sole expression of LS did not yield detectable limonene, which is consistent with prior studies. However, limonene was detected upon expression of a fusion protein comprising ERG20m and LS, which was not the case with the separate expression of these two enzymes. It is noteworthy that protein fusion, a strategy often employed for enzymes catalyzing sequential reactions, serves to enhance substrate channeling, minimize the loss of intermediates, and thus improve overall enzyme activity. This approach has previously demonstrated success in the production of various terpenes, including farnesene, geraniol, and sabinene [16,18,27]. Particularly in the context of monoterpene synthesis, the limited availability of GPP, a pivotal precursor for monoterpene production, has been identified as a bottleneck due to the bifunctional enzyme ERG20p. This enzyme's proclivity for diverting GPP towards the formation of FPP rather than monoterpene compounds can lead to inefficiencies in monoterpene synthesis. The fusion of ERG20p and LSp, as applied in this study, offers a solution by promptly sequestering GPP and directing its conversion to limonene before it can be used in FPP synthesis. Furthermore, a significant enhancement of limonene production, by 5.3- and 51.8-fold, was achieved through the elevated expression of the target genes by multiple integration. This strategy represents a synergistic approach that combines the strengthening of the upstream pathway, fusion protein-mediated precursor supply, and enhanced expression of the key enzymes. These results align with previous studies that have improved the production of target compounds [14,22,28].
Utilizing organelle engineering to compartmentalize partial or complete biosynthetic pathways presents distinct advantages compared to rewiring cytoplasmic metabolic pathways. This approach offers a conducive physicochemical environment for target compound synthesis, ensuring an adequate supply of precursors or enzymes [22,29]. Peroxisomes, in particular, emerge as a valuable organelle for terpene production, owing to their rich reserves of acetyl-CoA, a critical precursor for terpene biosynthesis. In addition, separating monoterpene synthesis from GPP within the peroxisome, distinct from the native pathway consuming GPP in the cytosol, effectively minimizes competition [21]. In this study, we introduced LS into the peroxisome both with and without the inclusion of MVA pathway enzymes. The overexpression of d-LS and ERG20m within the peroxisome led to an enhancement in limonene production, yielding 3.4 mg/L. This represents a significant improvement over strains expressing the target genes in the cytosol (Fig. 2), underscoring the efficacy of peroxisome-based engineering for monoterpene synthesis. Additional overexpression of MVA pathway genes in the cytosol also increased the limonene titer, by 10- to 39.5-fold, a finding consistent with a previous study carried out in S. cerevisiae [21]. This suggests that intermediates from the MVA and terpene biosynthetic pathways can be effectively transported into the peroxisome in Y. lipolytica. To further enhance precursor transport, engineering strategies such as channeling proteins, including the peroxisomal ATP-binding cassette transporters (PXA1 and PXA2), can be considered to increase production further [30]. We observed a substantial increase in limonene production through the incorporation of the entire MVA pathway within the peroxisome, achieving 47.8 mg/L. This shows the potential of peroxisomes as a key organelle for monoterpene production in Y. lipolytica. Future strategies may include peroxisome engineering to increase peroxisome size and number [29] and the optimization of cofactor supply (ATP, NADPH) [22,26,30]. Monoterpenes, including limonene, often exhibit toxicity to cells by affecting membrane integrity, a phenomenon recognized as a significant impediment to achieving high-titer production [1,3]. In this study, we applied a two-phase cultivation approach, incorporating dodecane to mitigate the toxic effect of the produced limonene on the cells. However, even with the dodecane phase, we observed a reduction in biomass in the strain containing the MVA pathway within the peroxisome. This might be attributed to an internal metabolic imbalance and external environmental interference, potentially stemming from metabolic burden [25]. This result contrasts with a previous result in which growth was maintained after the implementation of the entire MVA pathway within the peroxisome for producing the sesquiterpene α-humulene in Y. lipolytica [22]. In another context, engineering peroxisomes as a production module for fatty alcohols in Ogataea polymorpha resulted in reduced growth [30]. However, the introduction of further engineering strategies aimed at reducing stress on peroxisome homeostasis and enhancing precursor and cofactor supply led to improved growth and production. Consequently, further peroxisome engineering holds promise for alleviating limonene toxicity and improving biomass and production.
To further enhance limonene production, it would be interesting to explore several strategies aimed at increasing acetyl-CoA levels, a tactic previously demonstrated to be effective for improving the production of farnesene and squalene [31-34]. Moreover, it is important to consider that monoterpene synthesis requires four molecules of NADPH, six molecules of ATP, and six molecules of acetyl-CoA. Therefore, engineering cofactor availability by modifying the pentose phosphate pathway or inhibiting NADPH-consuming pathways may contribute to more efficient and productive limonene biosynthesis, as shown in S. cerevisiae [25,26].
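To make the cofactor arithmetic in the paragraph above concrete, the following Python sketch converts the stated per-molecule requirements into per-gram demands and an upper-bound carbon yield. The carbon balance assumes the textbook MVA route, in which one CO2 is released per IPP/DMAPP formed; this is a generic assumption, not a measurement from this study.

# Cofactor and carbon accounting for limonene made via the MVA pathway,
# using the per-molecule requirements stated in the text.
MW_LIMONENE = 136.23  # g/mol for C10H16
NADPH_PER_MOL, ATP_PER_MOL, ACCOA_PER_MOL = 4, 6, 6

mmol_per_g = 1000.0 / MW_LIMONENE  # ~7.34 mmol limonene per gram
print(f"Per gram of limonene ({mmol_per_g:.2f} mmol):")
print(f"  NADPH:      {mmol_per_g * NADPH_PER_MOL:.1f} mmol")
print(f"  ATP:        {mmol_per_g * ATP_PER_MOL:.1f} mmol")
print(f"  acetyl-CoA: {mmol_per_g * ACCOA_PER_MOL:.1f} mmol")

# Carbon balance: 6 acetyl-CoA carry 12 carbons; limonene keeps 10 of them,
# since one CO2 is lost per IPP/DMAPP formed in the MVA pathway.
print(f"Maximum carbon yield from acetyl-CoA: {10 / 12:.0%}")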
Conclusions
Limonene is of great interest in the field of biotechnology due to its versatile applications. However, achieving microbial production of limonene at economically feasible titers remains a substantial challenge. Here, Y. lipolytica was engineered to produce limonene by metabolic engineering in either the cytosol or the peroxisome. By combining precursor supply enhancements with elevated gene expression, we accomplished the biosynthesis of d- and l-limonene, yielding 24.8 mg/L and 29.4 mg/L, respectively, in flask cultivation. Notably, the strategic incorporation of peroxisomal compartmentalization elevated d-limonene production, reaching 47.8 mg/L in flask cultivation. Through fed-batch fermentation, we achieved 69.3 mg/L of d-limonene. This study presents a pioneering approach of using peroxisomes as a platform for limonene production in Y. lipolytica and opens new avenues for the efficient synthesis of other monoterpenes in Y. lipolytica by harnessing the high potential of organelle compartmentalization strategies.

Strains, media, and culture conditions
The E. coli strains DH5α and TOP10 were used as the hosts in this study for cloning and plasmid construction. E. coli strains were grown at 37 °C in Luria-Bertani (LB) medium (containing 1% tryptone, 0.5% yeast extract, and 1% sodium chloride) or on LB agar plates. When necessary, appropriate antibiotics such as chloramphenicol, spectinomycin, ampicillin, or kanamycin were added at concentrations of 34 µg/mL, 50 µg/mL, 100 µg/mL, and 50 µg/mL, respectively. Y. lipolytica was routinely grown at 30 °C in YPD medium, which consists of 1% yeast extract, 2% peptone, and 2% glucose, or in yeast synthetic medium (YNBD), which includes 0.17% yeast nitrogen base without amino acids and ammonium sulfate, 0.5% ammonium chloride, 50 mM phosphate buffer (KH2PO4-Na2HPO4, pH 6.8), and 2% glucose. To prepare solid medium, 1.5% agar was added to the respective liquid medium. To complement auxotrophies, uracil, leucine, or tryptophan was added at a concentration of 0.1 g/L, as necessary. The strains and plasmids used in this study are listed in Table 1.

Construction of plasmids
Restriction enzymes were obtained from New England Biolabs (Ipswich, MA, USA). PCR amplifications were performed in a ProFlex PCR system (Applied Biosystems, Waltham, USA) with GoTaq DNA polymerase (Promega, Madison, USA) and Q5 High-Fidelity DNA Polymerase (New England Biolabs, Ipswich, USA). PCR fragments were purified with a Qiagen purification kit (Qiagen, Hilden, Germany). The plasmids used in this study were constructed by Golden Gate assembly, as described in Yuzbashev et al. [35]. In brief, each gene was cloned into an Lv0 plasmid using BsmBI. Lv1 plasmids containing the specific overhangs for the Lv2 plasmid were then constructed by assembling the Lv0 plasmids of promoter, gene, and terminator using BsaI. Finally, the Lv2 plasmid containing two or three transcription units was constructed using BsmBI. To verify the correct construction of the plasmids, colony PCR and digestion by restriction enzymes were carried out. The primers used for cloning and verification are listed in supplementary Table 2.

Construction of Y. lipolytica strains
To introduce gene expression cassettes into Y. lipolytica, the plasmids were first linearized using NotI and then transformed into competent cells using the lithium acetate/DTT method. The gene expression cassettes were randomly integrated into the genome of Y. lipolytica via the zeta sequences. Transformants were selected on YNBD media containing the appropriate amino acids for their specific genotype. Positive transformants were then confirmed by colony PCR with Phire Plant Direct PCR master mix (Thermo Fisher, Waltham, USA). The removal of the selection marker was carried out via the Cre-LoxP system.

Cultivation of Y. lipolytica for producing limonene at flask scale
Y. lipolytica seed cultures were cultivated overnight at 28 °C and 220 rpm in 50 mL culture tubes containing 5 mL of YNBD medium, supplemented with the appropriate amino acids if necessary. Pre-cultured cells were inoculated at an initial OD of 0.05 into 50 mL of YP medium, consisting of 10 g/L yeast extract and 20 g/L peptone with either glucose (40 g/L) or glycerol (20 g/L) as substrate, and cultivated at 28 °C and 220 rpm. An overlay of 20% (v/v) dodecane was added to each flask, and the flasks were covered with aluminum foil and sealed with parafilm to prevent evaporation. We used two biological replicates and report the average and standard deviation.

Cultivation of Y. lipolytica for producing limonene at bioreactor scale
The strain was initially cultivated in YNBD medium at 28 °C and 220 rpm overnight. Subsequently, the culture was inoculated into 2000 mL of YPD medium (10% glucose) in a 6.6 L Sartorius BIOSTAT bioreactor (Sartorius, Germany), incorporating a 20% (v/v) dodecane phase as an organic extractant. Fermentation conditions were maintained at 28 °C, with agitation speeds ranging from 300 to 900 rpm and an airflow of 2 L/min, while the pH was adjusted to 5.4 using 20% (w/v) KOH or 20% (w/v) H3PO4. A fed-batch strategy was implemented, maintaining glucose at around 20 g/L by feeding 70% (w/v) glucose.

Analysis (OD, limonene)
Cell growth was monitored by measuring the OD at 600 nm using either a Biowave II spectrophotometer (WPA, UK) or a 96-well TECAN Infinite 200 PRO plate reader (TECAN, CH).
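As a small arithmetic aside on the flask protocol above, the sketch below computes the seed-culture volume needed to reach the stated initial OD of 0.05 in 50 mL. The overnight-culture OD600 of 8.0 used here is a hypothetical value for illustration, not a measurement from this study.

# Inoculum volume for starting a 50 mL flask culture at OD600 = 0.05.
target_od = 0.05
culture_volume_ml = 50.0
seed_od = 8.0  # hypothetical OD600 of the overnight seed culture

inoculum_ml = target_od * culture_volume_ml / seed_od
print(f"Inoculate {inoculum_ml * 1000:.0f} uL of seed culture")  # ~313 uL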
Limonene was quantified using an Accela 1250 pump (Thermo Fisher Scientific, USA) connected to an Accucore C18 column (Thermo Fisher Scientific, USA), heated to 60 °C, and coupled with a TSQ Quantum Access MAX MS/MS mass spectrometer (Thermo Fisher Scientific, USA). The sample injection volume was 10 μL, with a mobile phase consisting of 85% (v/v) methanol and 12% (v/v) Milli-Q water at a flow rate of 500 mL/min. Milli-Q water was obtained through a Milli-Q Millipore filter system (Millipore Co., USA). In this study, APCI was used for sample ionization, the vaporizer temperature was set to 450 °C, and the scan width was set to 1000 m/z with a scan time of 0.2 s and an MS acquisition time of 10 min. Limonene in the dodecane phase was quantified by HPLC-MS/MS with a standard curve of limonene, with a linear response from 0.8 to 100 mg/L. Two biological replicates were used for each measurement, and the data presented are the calculated average and standard deviation.
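To illustrate how a standard curve like the one described above turns peak areas into concentrations, here is a minimal Python sketch. The peak-area values are hypothetical placeholders; only the 0.8-100 mg/L linear range is taken from the text.

import numpy as np

# Hypothetical calibration data: standards (mg/L) and their peak areas.
std_conc = np.array([0.8, 5.0, 10.0, 25.0, 50.0, 100.0])
std_area = np.array([1.6e4, 1.0e5, 2.0e5, 5.1e5, 1.0e6, 2.0e6])

slope, intercept = np.polyfit(std_area, std_conc, 1)  # conc as a function of area

def quantify(peak_area):
    """Return limonene concentration (mg/L) for a measured peak area."""
    conc = slope * peak_area + intercept
    if not 0.8 <= conc <= 100.0:
        raise ValueError("Outside the validated linear range; dilute or concentrate.")
    return conc

print(f"{quantify(6.0e5):.1f} mg/L")  # ~30 mg/L for this hypothetical area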
Fig. 1 Synthetic pathway of limonene with two different strategies. a Limonene production through investigation of the genes in the native mevalonate pathway or heterologous genes was carried out in the cytosol (in grey). b The strategy of limonene production by establishing the mevalonate pathway and limonene synthase in the peroxisome (in yellow). The overexpressed genes are in blue (cytosolic expression) or brown (peroxisomal expression). TAG triacylglycerol, FFA free fatty acid, IPP isopentenyl diphosphate, DMAPP dimethylallyl diphosphate, GPP geranyl diphosphate, NPP neryl diphosphate, FPP farnesyl diphosphate, ERG10 acetyl-CoA acetyltransferase, ERG13 HMG-CoA synthase, tHMGR truncated HMG-CoA reductase, ERG12 mevalonate kinase, ERG8 phosphomevalonate kinase, ERG19 mevalonate diphosphate decarboxylase, IDI isopentenyl diphosphate delta-isomerase, ERG20m geranyl diphosphate synthase with mutation, LS limonene synthase, NDPS1 neryl diphosphate synthase

Fig. 2 The effects of overexpressing the native or heterologous genes involved in the MVA pathway on l- and d-limonene production. l-limonene is in navy and d-limonene is in dark yellow. The strains were cultivated in a YPG (2% glycerol) medium for 5 days. The values show the average and standard deviation of two biological replicates

Fig. 4 Limonene production by targeting gene expression in the peroxisome. a The effect of overexpressing d-LS and ERG20m in the peroxisome on d-limonene production. b The effect of overexpressing d-LS in the peroxisome together with overexpression of mevalonate pathway genes in the cytosol or peroxisome. l-limonene, navy bar; d-limonene, dark yellow bar; OD, green dot; P1, peroxisomal expression with PTS1; P2, peroxisomal expression with PTS2; C, cytosolic expression. The strains were cultivated in a YPD (4% glucose) medium for 5 days. The values show the average and the standard deviation of the two biological replicates

Table 1 Plasmids and strains used in this study

The heterologous genes (Sequence ID: 7VPC_A) were codon optimized for Y. lipolytica and then synthesized by TWIST Biosciences HQ (CA, USA). Native genes were amplified from Y. lipolytica by PCR. The sequences of the heterologous proteins are listed in the supplementary Table
2024-07-05T06:17:17.511Z
2024-07-03T00:00:00.000
{ "year": 2024, "sha1": "60340c9db246f6bdfa8aac47095bd18fd0e94341", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13068-024-02535-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "232e59c6a006c383ac9a266a782457ef8147676b", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
219761219
pes2o/s2orc
v3-fos-license
Reconstruction of Different Religion Inheritance through Wajibah Testament

A. Introduction
There is no statutory provision that explicitly grants the al-waṣiyyat al-wājibah right [compulsory will (Ahmad, 2017: 22) or mandatory will (Nasir, 1990: 272)] to a non-Muslim family member to receive a share of a Muslim's estate. However, there are implicit gaps that allow the text of the law to be interpreted so that a non-Muslim heiress obtains a share from a Muslim deceased through the al-waṣiyyat al-wājibah or other provisions in the field of Islamic inheritance law (Andi Syamsu Alam, interview at November 10, 2015). Muzāḥim (d. 114/732) and Muslim ibn Yasār (d. 100/718) argued that the verse of 180 of Q.S. al-Baqarah is muḥkamah (perfected), which means that the law of testament is compulsory (wājib) towards both parents and relatives, whether they receive an inheritance or not. Several other classical Qur'anic commentators, like al-Baghawī (d. 516/1122) (al-Baghawī, 1989) and Ibn Kathīr (d. 774/1373) (Ibn Kathīr, 2000: 166-172), held a similar view, so that the bequest is given both to those who receive an inheritance and to those who do not. This opinion is based on the hadith as follows: Narrated by Ibn `Abbas: The custom (in old days) was that the property of the deceased would be inherited by his offspring; as for the parents (of the deceased), they would inherit by the will of the deceased. Then Allah canceled from that custom whatever He wished and fixed for the male double the amount inherited by the female, and for each parent a sixth (of the whole legacy) and the wife an eighth or a fourth and for the husband a half or a fourth (al-Bukhārī, 2015: 1073). Al-Shāfi'ī (d. 204/820) is one of the 'ulamā' who argue that the verse of 180 of Q.S. al-Baqarah is abrogated by the verses of inheritance. He sees that, as Allah has ordered the testament and has also set the portions of the heirs, the testament is regarded as voluntary (sunnah or taṭawwu') (al-Bayhaqī, 1990: 163-166; al-Shāfi'ī, 2009: 22). Regarding the formal legal status of the wājibah testament (compulsory or mandatory will), the religious court judges refer to the provisions of the Islamic Law Compilation (Kompilasi Hukum Islam [KHI]) as stated in Presidential Instruction No. 1 of 1991, especially article 209, which states that the compulsory or mandatory will applies only to the adopted child and the adoptive parent. The religious court judge in Indonesia, as one of the law enforcement apparatus, carries out the adjudicating function on matters submitted to him, from initial judicial consideration to the verdict. There is no room for doubt that, through his verdicts, the judge has taken part in Islamic legal thinking, especially when his verdict covers the renewal of Islamic law (Manan, 2006: 311-327).

B. Literature Review
Al-waṣiyyat al-wājibah is a branch of knowledge that is closely related to inheritance but rarely covered in society. It was originally introduced in Egypt in 1946 as a provision in the law known as the code of wills. It was then introduced in several other Arab countries, such as Syria, Lebanon, and Morocco (Jusoh et al., 2019). The provision is based on cases in which a person needs and has the legal right to inherit from his ancestors, but is hindered from doing so for certain reasons. Their rights are usually blocked because their father or mother died before their grandparents. Because of this, they are excluded from the heirs of their grandparents, although they need to be able to inherit, mainly because they are orphans (Jusoh et al., 2019).
In Indonesia, al-waṣiyyat al-wājibah is recognized as an "Indonesian fiqh" ruling for adopted children in the case of inheritance. According to the general rule of Islamic law, adopted children cannot receive any inheritance from the property of their adoptive parents. However, the Compilation of Islamic Law states that the adopted child must receive al-waṣiyyat al-wājibah from their adoptive parents. So, adopted children have the right to receive a certain portion of the property of their adoptive parents (Minhaji, 2008: 296). The granting of the compulsory or mandatory will to interfaith family members does not happen without the judicial consideration that the testator and the beneficiary had a harmonious relationship, mutual respect, and mutual assistance (Hamdanah, 2018; Jusoh et al., 2019; Ropiah, 2018; Sewenet et al., 2017). For the Indonesian context, on the basis of the Supreme Court verdict number 368 K/AG/1995 (Arif, 2017; Maulana, 2013; Novandy, 2018), the Panel of Judges of the Court of Cassation argued that interfaith family members may obtain a share of the estate through the compulsory or mandatory will: the authority may set aside part of the deceased's estate as a testament even though the deceased did not make a will, based on the thinking that the authority has to guarantee people's rights. The reasoning of the Supreme Court in case number 51 K/AG/1999 (Mukhlis, 2017; Nugraheni et al., 2010; Syafi'i, 2017) referred only to the compulsory testament. As for case number 16 K/AG/2010, the Supreme Court based its verdict on the marriage of the deceased with his wife, which had lasted for 18 years. The Supreme Court judge saw the fact that the wife had devoted herself to her husband and family for a long time (Abubakar, 2017; Ahmatnijar, 2019; Ropiah, 2018; Sufiati & Anggraeni, 2017). In addition, there is a legal reason that their marriage is legitimate and recorded in the civil records, so that it falls under civil law, and the judge considered that the wife is not included among the ḥarbī disbelievers, or the warring enemy (Averroës, 1994: 440), but among the dhimmī disbelievers, or non-Muslims who enjoy the dhimma, or "protection", of the Muslim state (Lumbard, 2009: 147), so that the non-Muslim wife is entitled to a share of the estate based on the wājibah testament. Thus, the reasons underlying the judge's decision in the Supreme Court verdict number 16 K/AG/2010 on giving a share of the inheritance to non-Muslims are: 1. The Supreme Court considers it a wājibah testament. 2. The wife had been serving her husband for about 18 years. 3. The wife is not regarded as an heir. 4. The law recognizes that the relationship between the deceased and the beneficiary (husband and wife) is recorded in the civil records. Based on the above argument, the judge has the authority to deviate from the provisions of written law, but such deviation is intended to achieve justice in the community. The judge has to provide clear and sharp legal considerations by weighing the various aspects of the law. A judicial verdict that then serves as the basis for verdicts in similar cases is referred to as jurisprudential law, intended to avoid any disparity between judges' verdicts in the same kind of case. In this context, this study focuses on the inheritance construction through the wājibah testament and the inheritance reconstruction of interfaith marriage through the wājibah testament.
C. Research Method
Interviews were conducted directly with informants (Madinger, 2000: 149); in this case, the authors conducted the interviews in person (Frances et al., 2009). Book review is a data collection technique that takes the main ideas (Snyder, 2019) about the inheritance of different religions. This article examines written data such as books (Ramdhani et al., 2014), like fiqh books (Hadi, 2017), Qur'anic commentaries (Hussin et al., 2012), hadith (Idri & Baru, 2018), papers, articles, and other written sources (Palmatier et al., 2018) relating to the inheritance of different religions, such as the Islamic Law Compilation and the Supreme Court decrees on the inheritance of different religions. Documentation was carried out by means of a review of written data (Bowen, 2009) in the form of official state documents issued in the Supreme Court rulings on the inheritance of different religions.

D. Discussion
Provisions regarding the amount of a testament that is allowed to those other than the heirs who receive the inheritance derive, among others, from the words of the Prophet to Sa'd ibn Abī Waqqāṣ during the farewell pilgrimage (ḥajj wadā'). According to al-Qurṭubī (d. 671/1273), adopted children in Islamic law are not included among the heirs but are entitled to a share of the inheritance of their foster father based on the verse of 180 of Q.S. al-Baqarah. Some Qur'anic commentators argue that the understanding of "relatives" in the verse is not limited to people who are related by blood or marriage, so that the testament can be made for the benefit of others who have a special relationship with the testator or who need help (al-Qurṭubī, 2006: 91-114). Ibn Kathīr (d. 774/1373) explained that the verse of 180 of Q.S. al-Baqarah contained a mandatory will before the arrival of the verses on inheritance. After the revelation of the verses on inheritance, the testament obligation was removed specifically for parents and close relatives who inherit. He further asserted that the verses of inheritance did not revoke the testament law as a whole, but only lifted part of the general testamentary obligation; therefore, the inheritance verses only lift the testament obligation for people who receive the inheritance (Ibn Kathīr, 2000: 166-172). In a general explanation of the fundamentals of Islamic law, according to al-Ghazālī (d. 505/1111), any ruling that contradicts the Qur'an, sunnah (Prophetic traditions), and ijmā' (Islamic scholars' consensus) must be discarded. There is not a single Islamic law that contradicts benefit or, in other terms, there is not a single Islamic law that harms mankind (al-Ghazālī, 1993: 172, 174-175). Ibn Qayyim al-Jawziyyah (d. 751/1350) reaffirmed this theoretically by equating sharia with justice. He sees a decision of the political authority as being as legitimate as sharia if it contains the values of justice, because sharia is a representation of justice. On the other hand, the justice initiated by Ibn Qayyim refers to the judge's efforts to find the truth and provide the law where violators face no strict formal rules. He stressed that judges are able to capture the truth, even in conditions of minimal evidence and minimal formal rules. Judges' efforts to find the truth on a practical level are a form of procedural justice (al-Jawziyyah, 1994: 21). Wahbah al-Zuḥaylī argued that the transfer of assets through a will is a necessity that must be carried out by someone who has property while he or she feels the end is near.
At that time, Allah had not yet revealed the verses of the Qur'an that contain provisions regarding inheritance, so the will served as a guide for the relatives left behind in distributing the assets of the deceased. However, once the verses about inheritance had come down, the area of validity of the will was reduced to two things. First, a will can no longer be given to heirs, as the Messenger said when giving the sermon on the farewell pilgrimage. Second, the testament that is allowed to those other than the heirs is limited in amount, namely a maximum of one third of the estate. Yūsuf al-Qaraḍāwī is one of the scholars who state that there can be inheritance between Muslims and non-Muslims. According to him, Islam does not obstruct and does not reject a path of goodness that is beneficial to the interests of its people, especially inheritances that can support monotheism (tawḥīdullāh), obedience to God, and the upholding of His religion. In fact, wealth is intended as a means to be obedient to Him, not to disobey Him. In addition, the scholars who allow inheritance between different religions base their view on benefit in the context of preserving religion, descent, and wealth. Therefore, they only allow Muslims to inherit from non-Muslims, not vice versa (al-Qardāwī, 2001: 128). Yūsuf al-Qaraḍāwī comments on the opinions that allow Muslims to receive inheritance from unbelievers with the following reasons: First, Islam must not prevent property from being used to defend monotheism, obedience, and the religion of Allah. Second, the use of the property is intended for obedience to God, not disobedience. Third, the first person to use property should be a person of faith; if the laws and regulations in force in a country determine the ownership of the inheritance, it is not appropriate to refuse it and let unbelievers enjoy it, for doing so would endanger the Muslims (al-Qardāwī, 2001: 128). Based on the principle of legal flexibility, according to Satria Efendi, judges need to focus more on upholding justice than on a "rigid" reading of the legal text. The legal text is a medium to achieve the goal, namely to uphold justice (Zein, 2004: 78). For the Indonesian context, Hazairin stressed the need for a formulation of Islamic law that is unique to the Indonesian people (Hazairin, 1963: 15). Relating to the wājibah testament, Abdul Manan said that, legally and formally, in determining it the judges of the Religious Courts use the Islamic Law Compilation's provisions in accordance with Presidential Instruction No. 1 of 1991. As one of the Islamic law enforcers in Indonesia, the Religious Court judges have been able to settle and decide the cases submitted through the court. Through these decisions, it can be concluded that the judges have played an active role in developing ideas and reforms in Islamic law. The judges' decisions to give the compulsory testament to non-Muslims were made in consideration of the benefit. This consideration is related to the condition of non-Muslim beneficiaries who are in dire need. In addition, when he was still alive, the deceased was never harmed by the non-Muslim family member (Manan, 2006: 311-327). According to Andi Syamsu Alam, the most important aspects of the decision were the aspect of justice and the conditions of society, which differ from past to present. In earlier times, the Religious Court firmly did not give inheritance to non-Muslims because it relied on the hadith. However, further developments show that the relations among the people of Indonesia have reached such a situation that it is necessary for judges to think about justice.
Therefore, the Supreme Court judges took the view that a non-Muslim wife is not an heir, but because she had a marital relationship, it was considered very fair for a wife of a different religion to get a share. Justice for non-Muslim wives is analogous to justice for adopted children who get a share through the obligatory will. It would be unfair if an adopted child received an obligatory will while a wife did not, merely because of a difference in faith. On this basis, the granting of compulsory testaments to non-Muslims is appropriate, because non-Muslims are not heirs by law. In Islam, the tradition of child adoption is acceptable with the following amendments: 1. The genealogy of the adopted child is not connected to his adoptive parents, but attributed to his biological parents. 2. The status of child adoption does not create a legal relationship of inheritance between the adopted child and the adoptive parents or their families (Pagar, 2001: 11-14). The position of the adopted child and the adoptive parents in the inheritance law of the Islamic Law Compilation has been explicitly set forth in article 209. In general, the status of the adopted child and the adoptive parents in the Islamic Law Compilation remains their original status: the adopted child retains a biological relationship with his biological parents, in line with the opinions of the jurists. Therefore, it is clearly noted that child adoption does not change his genealogical and biological status in terms of family relations. The above concept of child adoption differs from the adoption concept regulated in today's growing positive law, which connects the adopted child to his adoptive parents and thus leads to sharing the inheritance. Although child adoption does not change the descent status, this does not decrease the value and meaning of child adoption, especially when seen from the following points: first, child adoption transfers the daily care of the child, initially under the control of his biological parents, to his adoptive parents; second, the responsibility for tuition fees that initially lies with his biological parents is moved to his adoptive parents; third, child adoption is seen to be inadequate if it happens only under the agreement of two families formalized through traditional and religious ceremonies; the adoption must be legalized through a court verdict, so that the adopted child has a clear and legal status before the law; and fourth, the status of a legitimate adopted child as mentioned above leads to the legal consequence of inheritance, in which the child will receive a wājibah testament of at most one third of his adoptive parents' property. Conversely, if the adopted child dies, the adoptive father will receive a wājibah testament of one third of his adopted child's property (Pagar, 2001: 9-14). According to the Compilation of Islamic Law, adoptive parents are required to make the wājibah testament for the benefit of their adopted child, as they have received the responsibility to take care of all the needs of their adopted child. Therefore, even though the adopted child does not inherit from his adoptive parents, when considering the benefit for the child, who is emotionally and socially close to his adoptive parents, the adoptive parents still have a responsibility, as Allah says in Q.S. al-Zariyat, verse 19: "And in their wealth there is a right for the needy who asks and for the deprived."
Based on the verse above, regarding the obligation of adoptive parents to fulfill their responsibility to their adopted child, the adopted child is comparable to a poor person in that he needs help from his adoptive parents to get a good future, especially in economic terms. The Compilation of Islamic Law consistently remains in accordance with the farā'iḍ, which places the adopted child outside the heirs, the same as the law in fiqh; however, it adopts limited customary law into the values of Islamic law, owing to the transfer of responsibility for the child's daily care from the biological parents to the adoptive parents. The principle applied in making the rules on the wājibah testament for the adopted child, as provided in the Compilation of Islamic Law as part of fiqh law, is simply the consideration of al-maṣlaḥat al-mursalah. This means that, with the consideration of benefit and the customs of our community (for instance, the refusal of polygamy even though a couple has had no child for many years), the wājibah testament for the adopted child may be granted. The adopted child can be described as a person worthy of being a child of the couple's family, who is raised, educated, and nurtured, and who in turn will look after his adoptive parents in the future. The wājibah testament is applied as a way to provide an equivalent of inheritance for one who cannot inherit but has a very close inner relationship with the family, even without biological relations. Thus, the wājibah testament is essentially set to create benefit for a person who deserves it. As a testament has the potential to realize a specific justice related to personal interests and is effective in disposing of property, the development of social and family relations is reflected in the concern of the testator towards the beneficiary. Article 209 of the Compilation of Islamic Law, which regulates the wājibah testament, differs from the wājibah testament in Muslim countries, which generally identify orphaned grandchildren as the recipients of the wājibah testament. Indonesian Islamic jurists, through the Compilation of Islamic Law, have used the wājibah testament to grant the right to the adopted child and the adoptive parents, with a maximum amount of one third of the inheritance. The idea behind the spirit of the wājibah testament construction is that Indonesian Muslim jurists have an obligation to bridge the gap between Islamic law and customary law. As a matter of fact, Islamic law strongly rejects the adoption institution, while Muslim families in Indonesia practice adoption. Thus, the Islamic jurists in Indonesia try to accommodate the existing value systems in both laws by adopting the wājibah testament derived from Islamic law as a means to receive the moral value behind the practice of adoption in customary law. According to Ratno Lukito, this effort had to be taken because social reality shows that, among people who practice adoption, adoptive parents always think about the welfare of their adopted child when they pass away (Lukito, 2008: 111).
With the presence of the Compilation of Islamic Law, which establishes the wājibah testament for the adopted child and the adoptive father, some articles governing inheritance law in Indonesia can be seen as "native Indonesian law", as this law applies to all native Indonesian citizens. This inheritance law can be seen as the fulfillment of an aspiration of Indonesia's Islamic jurists that has existed since the 1950s, introduced among others by Hasbi Ash-Shiddieqy, who advocated that the jurisprudence (Islamic law) applied in Indonesia be a purely Indonesian jurisprudence, in accordance with the culture of Indonesian society (Ash-Shiddieqy, 1966: 42; Najib, 2011: 57). In addition to Hasbi, Hazairin emphasized the need for a uniquely Indonesian formulation of Islamic law. This idea was presented at the opening of the Islamic Higher Education in Jakarta (Hazairin, 1963: 15). According to Muhammad Daud Ali (Baharuddin, 2008: 105), the compilation of the Islamic Law Compilation, in particular the articles related to the wājibah testament, always concerns benefit in matters categorized as ijtihādī. Therefore, it is hoped that, in addition to maintaining and accommodating the legal aspirations and the sense of justice of society, the Islamic Law Compilation will be able to act as social engineering for the Indonesian Muslim community (Ali, 1993: 268). Abdul Manan stated a similar opinion to Daud Ali's: legal renewal (Manan, 2008: 298), when viewed in its substance, has the purpose of realizing maṣlaḥah for the benefit of man, in terms of preserving religion, soul, intellect, wealth, and descendants, which is called in fiqh terms al-kulliyāt al-khamsah. The practice of the maṣlaḥah theory in solving various legal issues has inspired Islamic law experts in Indonesia to use this theory in the framework of Islamic law reform, whether by establishing laws or by incorporating Islamic legal values into national legislation. Umar Syihab comments that the Indonesian regulation that restricts polygamy except in the most urgent circumstances is in line with Islamic law, and that, with the fulfillment of the conditions within the rule, a polygamous man will not face any difficulty in his household due to his wives' insistence (Syihab, 1996: 120-121). This is in line with Satria Effendi's statement that judges need to focus more on enforcing justice rather than on rigid legal texts, because legal texts constitute a means to achieve the goal of justice enforcement (Zein, 2004: 78). The reconstruction of interfaith inheritance through the wājibah testament of the Islamic Law Compilation is applied not only to the adopted child and adoptive parents but also to the non-Muslim wife of a Muslim husband, where religious difference remains one of the barriers to inheritance. Regarding religious difference as a barrier to inheritance, this rule creates many difficulties in areas where family members embrace different religions (Anderson, 1994: 85), including Indonesia.
As for the reconstruction, the absence of a positive law underlying the granting of the wājibah testament to a non-Muslim encouraged the Supreme Court to engage in legal discovery in deciding these cases, based on the provisions of Article 10 of the Judicial Power Law, which states that "the court is prohibited from refusing to examine, adjudicate, and decide a case proposed to it under the pretext that the law is absent or unclear, but is obliged to examine and adjudicate the case" (Law Number 48 Year 2009 on the Judicial Power, contained in the State Gazette of the Republic of Indonesia Year 2009 Number 157). In other words, the court must find the law independently. In this regard, Article 5 paragraph (1) of the Law states that "judges and Constitutional Court justices must explore, follow, and understand the legal values and sense of justice that live in society". This provision is the legal basis for the Islamic Law Compilation to be applied as an indirect reference or guide. The provision of the above article is in line with article 229 of the Islamic Law Compilation, which says: "the judge, in resolving cases filed to him, must pay attention to the legal values that live in society, so that his decision is in accordance with a sense of justice". The Supreme Court's verdict expresses that the wājibah testament has a justice value (philosophical aspect) and a benefit value (sociological aspect), as required by surah al-Baqarah verse 180. The legal reasoning developed by the Supreme Court is in line with the way of thinking in maqāṣid al-sharī'ah. In addition, it is based on a consideration to provide substantive justice to the litigants. This means that the Supreme Court attempts to fulfill the sense of justice of all parties by developing and discovering law (ijtihād) in a way that does not violate the Islamic law of inheritance. Andi Syamsu Alam states that this justice is based on surah al-Nisa verse 58, as follows: "Indeed, Allah commands you to render trusts to whom they are due and, when you judge between people, to judge with justice. Excellent is that which Allah instructs you. Indeed, Allah is ever Hearing and Seeing." The main characteristic of progressive law is that the law serves human interests and refuses the status quo in law. This is in line with the principles of maṣlaḥah determination in Islamic law: al-ḍararu yuzālu (harm must be eliminated), dar'u al-mafāsid muqaddamun 'alā jalb al-maṣāliḥ (preventing harm takes precedence over obtaining benefit), and al-mashaqqah tajlib al-taysīr (hardship brings ease) (A. Rahman, 1986: 3-4; Mubarok, n.d.: 151). These three principles of maṣlaḥah determination in Islamic law are in line with the characteristics of progressive law, namely that the law is for man. Based on these rules, it can be concluded that sharia pays great attention to human benefit. The presence of the testament system in Islamic law is very important as a problem solver in a family, since there are family members who are not entitled to receive property by inheritance even though they have contributed to maintaining the property, or are blocked poor grandchildren, or have different religions, and so forth. Thus, the testament system regulated in Islamic law can overcome such disappointment (Rofiq, 2002: 448).
The 'illat [or, in another term, ratio legis (Athoillah & Al-Hakim, 2013)] in article 209 paragraph (2) of the Islamic Law Compilation is not the presence of a genealogical relationship, as the adopted child does not have a genealogical relationship with his adoptive father; rather, the adopted child's emotional closeness to his adoptive father and his position as part of the family constitute the 'illat (ratio legis). The breakthrough of inheritance law in the wājibah testament institution is a compromise approach to realizing substantive justice without breaking the legal provisions of inheritance, as law is a means, not a goal. Based on the explanation above, the inheritance construction and reconstruction for people of different religions can be compared as follows: on the philosophical principle, both the construction and the reconstruction rest on a special human relationship in terms of closeness and mutual assistance. The results of the analysis of the reasons underlying the judges' ruling in Supreme Court Decision No. 16 K/AG/2010 include the following. First, the Supreme Court judges held that the non-Muslim family member is given a part as a mandatory will. This is in accordance with the verse of 180 of Q.S. al-Baqarah, which means: "It is prescribed for you, when death approaches one of you and he leaves wealth behind, to make a bequest to his parents and close relatives in a proper manner; (this is) an obligation upon the righteous." Because the obligation of the testament applies to anyone who leaves property, if someone dies without making a will, part of his property must be set aside to fulfill the testament. Those who are entitled to determine the affairs of the Muslims are the rulers, including in probate matters; the authority must act to provide a portion of the estate, as mentioned above, in order to fulfill the testamentary obligation. Second, wives of a different religion are not declared heirs. This is in accordance with Islamic law, which states that religious difference is a barrier to inheritance, and with the Islamic Law Compilation, article 171, part c, which states that the heirs of a Muslim deceased must be Muslims. Therefore, the Supreme Court stated that non-Muslims are not heirs. Fifth, the wife of a different religion had lived in harmony with her husband for 18 years. In relation to this, the time or period is not the main benchmark in granting compulsory testaments to wives of a different religion; rather, it is the devotion and loyalty of the wife to a husband with whom she lived in harmony and peace.

E. Conclusion
The construction of inheritance through the wājibah testament in article 209 of the Islamic Law Compilation grants it to the adopted child and the adoptive parent, considering the special relationship between the adopted child and the parents in terms of emotional closeness, and for the sake of benefit and justice. Moreover, the reconstruction of the wājibah testament grants it to a non-Muslim, considering that the non-Muslim wife lived in harmony with and devoted herself to her husband; the judge decides the testament to realize substantive benefit and justice and to maintain the family unity without opposing the provisions of Islamic law.
2020-06-04T09:10:28.158Z
2020-05-30T00:00:00.000
{ "year": 2020, "sha1": "310f35c3939be9f95207829ef9dc1bbb88c02264", "oa_license": "CCBYSA", "oa_url": "https://journal.scadindependent.org/index.php/jipeuradeun/article/download/466/448", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "77c30adade0d87da26fd190ccc908d3459baf042", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Philosophy" ] }
216408020
pes2o/s2orc
v3-fos-license
Adaptation of local wisdom in contemporary mosque design for achieving good building physics and earthquake resistance

The paper aims at proposing the implication of local wisdom values in designing a contemporary mosque. Local wisdom works successfully in determining a suitable design for vernacular buildings such as the traditional mosque. It comprises considerations of building physics, such as sufficient thermal comfort, good daylighting, and room acoustics which meet the standards. These parameters also fit well with the traditional wooden structure of the mosque, which is resistant to earthquakes. This article analyses two old mosques in Aceh, Indonesia, which are still standing well despite their age. The wooden mosques are Indrapuri mosque in Aceh Besar and Tengku Dipucok Krueng mosque in Pidie Jaya. The evaluation of the mosques was carried out through observations and some field measurements. This article further examines the adaptations of the local wisdom of the mosque pertaining to the building physics and the earthquake resistance that can be applied in the contemporary mosque.

Introduction
Local wisdom, as defined in the Cambridge Advanced Learner's Dictionary, is indigenous: naturally existing in a place or country rather than arriving from another place. This indigenous knowledge has been modified through accumulated practical experience and passed on from one generation to the next. Local wisdom also expresses community knowledge, which is transmitted through tradition [1]. Manugeren emphasizes that local wisdom is a set of ideas or policies based on the values of virtue found in a community, often applied, believed to be a guide for life, and handed down over time. Based on these definitions, local wisdom can be understood as a human effort, using the mind, to act towards an object or event that occurs in a particular space [2]. In architecture, local wisdom works on many parts, such as the facade, structure, and ornaments, which are born from the specific values of indigeneity. Many articles, reports, and personal observations show that local wisdom successfully secures buildings against disasters such as earthquakes [3,4,5]. In Aceh, local knowledge is closely tied to Islam, the religion of the majority of local people. However, Islam itself appreciates the indigeneity that existed before its presence; therefore, Islamic functions were adopted harmoniously into the original character [6]. Looking back, the 2016 earthquake that struck Pidie Jaya and ruined many mosques gave a good lesson: the massive concrete mosques with dome-style roofs collapsed, while the traditional wooden ones still stood well, with only minor, repairable damage. The traditional mosque also works passively in providing good and comfortable building physics to worshippers, such as thermal comfort, sufficient daylight, and excellent room acoustics. This is what new mosques mostly lack; all amenities are provided through electrical equipment, which means high energy costs. This article therefore explores the local wisdom of traditional mosques in Aceh that can be substituted into contemporary mosque design to achieve durability against disaster and comfortable building physics. This study analyses two traditional mosques in Aceh, namely Indrapuri mosque in Aceh Besar and Tengku Dipucok Krueng mosque in Pidie Jaya.
The traditional design of the mosque in Aceh has a standard tiered-roof configuration. Historically, both mosques studied here had roofs made from rumbia leaf arranged in three large overlapping tiers, with apertures along the perimeter of each tier. The walls are typically left open above the base wall; in Tengku Dipucok Krueng, however, partitions with traditional ornaments that leave gaps for air circulation are installed along the wall perimeter. The base wall, about 1 m high, is made from river stone. A pond or water container used for ablution is built at the front, integrated into the mosque design. The roof height, about twice the wall height, reflects the belief of the earlier religion, Hinduism, that height brings one closer to God.

Indrapuri Mosque, Indrapuri
The Indrapuri mosque was built in the 12th century and initially functioned as a Hindu temple. Later, when Islam came to Indrapuri, Sultan Iskandar Muda converted the Hindu kingdom to Islam [7]. The mosque is located in Indrapuri, Aceh Besar, Indonesia, a peaceful place with abundant green space and a large river running some distance below the hill on which the mosque stands. The mosque, which has an open layout, has three tiered roofs supported by 36 wooden columns. The roof was originally made from rumbia leaf, and its upper apertures allow hot air to escape. The western pulpit was built directly connected to a 1.5 m high stone fence surrounding the floor plan [8]. The open terrace with steps surrounding the mosque creates a magnificent view of the building.

Tengku Dipucok Krueng Mosque in Pidie Jaya
The Teungku Dipucok Krueng mosque was established during the reign of Sultan Iskandar Muda (1607-1636 AD). The mosque has octagonal poles that vary in size depending on their function and location: the 12 pillars of the first roof measure 23 cm, four pillars measure 27 cm, and the central column measures 35 cm. The building has been repaired several times. The first repair was in 1947, when the Beuracan community independently managed the work on several damaged parts. The mosque was expanded at that time from 10 x 10 m to 13 x 13 m, the roof was changed from thatch to zinc, and the community built cement walls about 95 cm high as a barrier around the mosque [10].

Research methods
This study collected data through observations and field measurements. The observations covered the mosque design and structure, which were recorded through measured architectural drawings and photographs. Interviews and a review of the related literature were carried out simultaneously to support the data.

Findings
The study analyses the local-wisdom character of each mosque that contributes to good building physics and toughness against disasters. In the figures, the Indrapuri mosque and the Tengku Dipucok Krueng mosque are abbreviated M.I and M.T, respectively. The analysis is based on table 1, which summarizes the observation data.

Building physics
In terms of building science and building physics, the two mosques exhibit a passive cooling strategy. Aceh has a warm-humid climate, with average relative humidity of nearly 80% and an average air temperature of 27.5 °C.
High relative humidity is best reduced through sufficient air ventilation [11]. As the temperature rises, the high roof with its integrated openings circulates hot air out of the mosque. The light roofing material, leaf, also works well in reducing high air temperatures thanks to its low thermal conductivity. Based on interviews with worshippers of the two mosques, as well as field observation, the mosques have good sound distribution: without any loudspeakers, worshippers in any direction can hear the imam's voice clearly. In theory, a mosque floor plan of less than 15 x 15 m distributes sound well. The two traditional mosques measure roughly 14 x 14 m, which meets this criterion. Sound distribution is further helped by the pitched roof, which spreads sound better than a concave or dome roof that focuses it toward the centre; repeated reflections onto the same area from an arched ceiling create an echo that degrades the sound quality. Daylight is also well distributed throughout the room. The clerestories on the tiered roof and the apertures around the walls create large inlets for daylight (figure 3). The sunlight that enters the room is reflected light, either internally or externally reflected, so there is no glare, and at the same time heat gain from outside is minimized.

Building structure
Both mosques have a wooden structure supporting the tiered roof. The Indrapuri mosque has 36 wooden poles standing in a grid of 3 m. All the poles rest on "swear" foundations, pad footings that are not anchored to the ground. The local wisdom captured in this pole type is that the mosque stays flexible when shaken. This was confirmed by a woman living in a traditional wooden house on the same foundation type: during the tsunami and earthquake of 2004, her house was displaced roughly 20 metres without any significant breakage, while the concrete houses around it collapsed and were left in ruins. The wooden structure, as in many traditional buildings, is assembled without nails, using pegs and wedges instead (figure 7). The peg system lets the building move flexibly with the shaking of an earthquake. The two mosques support the roof in different ways: the Indrapuri mosque distributes the roof loads equally over its 36 wooden poles, while in the Tengku Dipucok Krueng mosque one principal pole in the centre bears the load of the highest roof tier, 12 wooden poles carry the second tier, and 20 wooden poles support the lowest tier.

Adaptation of Local Wisdom in Contemporary Building Design
Considering the designs of the two traditional mosques, the local wisdom that may be applied to contemporary design is analysed through the following aspects. First, local knowledge must be integrated with an understanding of the surrounding nature and culture. Second, local wisdom is dynamic; it is flexible to the global situation. Third, the use of local wisdom must be sufficient to provide income, reduce costs, improve production efficiency, and improve quality of life. Fourth, it is elaborate yet straightforward and comprehensive: it is usually oral in nature, adapted to local, cultural, and environmental conditions, dynamic and flexible, tuned to the needs of local people, in keeping with the quality and quantity of available resources, and able to cope well with changes.
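Before turning to the concrete design adaptations, the acoustic rule of thumb above (floor plans below roughly 15 x 15 m carry the imam's voice without loudspeakers) can be sanity-checked with a simple image-source estimate. The room geometry, the 343 m/s speed of sound, and the 50 ms echo threshold used here are standard acoustics assumptions introduced for this sketch, not measurements from the two mosques.

```python
import math

# Echo check for a 14 x 14 m mosque floor plan using the image-source method.
# All geometry below is assumed for illustration, not measured in the study.
room_depth = 14.0   # m; imam at the front wall, listener at the back wall
ceiling_h = 6.0     # m; assumed average height of the pitched ceiling
source_h = 1.5      # m; assumed mouth height of the imam
listener_h = 1.5    # m; assumed ear height of a worshipper
c = 343.0           # m/s; speed of sound in air at ~20 °C

# Direct path from imam to the farthest listener.
direct = math.hypot(room_depth, listener_h - source_h)

# Ceiling reflection, via the source mirrored about the ceiling plane.
image_h = 2.0 * ceiling_h - source_h
reflected = math.hypot(room_depth, image_h - listener_h)

# A reflection is perceived as a discrete echo when it lags the direct
# sound by more than ~50 ms (about 17 m of extra path length).
delay_ms = (reflected - direct) / c * 1000.0
print(f"direct {direct:.1f} m, reflected {reflected:.1f} m, "
      f"lag {delay_ms:.1f} ms -> {'echo' if delay_ms > 50.0 else 'no echo'}")
```

At this scale the ceiling reflection arrives only a few milliseconds behind the direct sound, reinforcing rather than blurring it; in a much deeper hall, or under a dome that keeps refocusing reflections onto the same spot, the lag grows and intelligibility drops, which is consistent with the argument for the pitched roof and the sub-15 m plan.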
The design of a contemporary mosque must meet several essential requirements: a large floor area to accommodate many worshippers; thermal, daylight, and room-acoustic comfort; and suitability as a place of refuge, prepared for disaster mitigation. Within these requirements, for a wooden mosque with natural ventilation and daylighting, the local knowledge that could be adopted concerns the floor-plan size, the building structure, and the building materials. The floor plan could be designed on a 15 x 15 m grid, with each grid cell served by one loudspeaker. At that size, the roof ceiling should be tilted at 30° to 40° to allow thorough sound distribution. The roof over each grid cell should have apertures that admit daylight and allow natural air circulation; to keep out hard wind, a barrier such as a secondary facade or a wooden ornament should be placed near the apertures. The same grid can serve as the basis of the structural design for bearing the loads, and the peg system can still be adopted within the 15 x 15 m grid. Where a larger space is needed, the pole strategy of Tengku Dipucok Krueng can be applied by installing a single large primary pole in the middle to support the roof load. To protect the mosque against flooding and other damage to the floor, the floor can be raised above the ground, as traditionally done in the Acehnese house, which stands about 2.5 m above the ground.

The roof, wall, and floor materials are also analysed from the local-wisdom standpoint. A good roof design should consider the following [13]:
- weather resistance;
- water and frost resistance;
- strength, i.e. the ability to carry dynamic loads (wind, atmospheric precipitation) and static loads (the weight of snow masses, etc.);
- biostability and corrosion resistance;
- good sound insulation, i.e. reliable and highly efficient protection against external noise;
- durability;
- environmental friendliness;
- fire safety, with an advantage for non-combustible or slow-burning materials; and
- efficiency.

The roofing material currently installed on both mosques is zinc. Other research evaluating this modification (a zinc roof instead of rumbia leaf) found that the zinc roof raises the indoor globe temperature and can produce background noise when hard wind strikes the roof [9]. Rumbia leaf is quite good as a roofing material; however, it burns fast, and its availability has become scarce. Several studies identify clay as a good material thanks to its excellent corrosion resistance, high sound insulation, and low thermal conductivity, and hence low surface temperature. Its weight, about 50 kg/m², is roughly 15 kg/m² heavier than a leaf (thatch) roof, but this load is still acceptable for a wooden structure. The writers therefore propose the use of a clay roof, which compares favourably with a metal roof [13,14]. A rough numerical comparison of the three roofing options is sketched below.
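The following minimal sketch compares the three roof options on dead load per 15 x 15 m structural grid cell and on the thermal resistance of the roofing layer itself. The clay (50 kg/m²) and thatch (about 35 kg/m²) weights follow from the figures above; the zinc weight, the layer thicknesses, and the thermal conductivities are illustrative handbook-order assumptions, not values from the study.

```python
# Rough comparison of roofing options for one 15 x 15 m structural grid cell.
AREA = 15.0 * 15.0   # m², one grid cell
G = 9.81             # m/s², gravitational acceleration

#                   weight kg/m², thickness m, conductivity W/(m·K)
roofs = {
    "thatch (rumbia)": (35.0, 0.20,   0.07),   # thickness and k assumed
    "clay tile":       (50.0, 0.015,  1.0),    # thickness and k assumed
    "zinc sheet":      (5.0,  0.0005, 110.0),  # all three values assumed
}

for name, (w, t, k) in roofs.items():
    dead_load_kN = w * AREA * G / 1000.0  # total weight carried by the cell
    layer_R = t / k                       # m²K/W, material resistance only;
                                          # ignores air films and the cavity
    print(f"{name:16s} dead load {dead_load_kN:6.1f} kN, "
          f"layer R {layer_R:7.4f} m²K/W")
```

Clay adds roughly 33 kN (about 3.4 t) per grid cell over thatch, which a stout timber grid can carry, while only a thick thatch layer offers meaningful material resistance to conductive heat gain; a clay or metal roof must therefore rely on the ventilated tiered-roof cavity and the apertures for thermal comfort, which matches the higher globe temperatures reported under the zinc roof.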
A mosque also traditionally functions as a place of refuge from disaster. The 2004 tsunami proved that some mosques were safe, and many victims ran to mosques and were saved. Some devotees chose to take their last breath in the mosque, a place felt to be close to God; dying in a mosque is regarded as far more honourable than dying elsewhere. The local people raise the mosque floor above the ground, and this indigenous knowledge, together with the presence of a tower, meets the requirements of an escape building in a disaster-prone area [15,16,17].

Conclusion
This study analysed two traditional mosques in Aceh, the Indrapuri and Tengku Dipucok Krueng mosques. Both have traditional structures and materials and have remained standing for more than 100 years. The local wisdom embodied in the mosques includes the wooden peg structure, the building materials, and the architectural design. The local knowledge that can be adopted in a contemporary wooden mosque design, to achieve sustainable building physics and earthquake resistance, comprises the layout dimensions, the wooden peg system (especially for supporting the roof load), and the architectural design, including the roof form and the aperture design. This study is still a proposal for designing a modern mosque that applies local knowledge; it has not yet been analysed further with simulations or laboratory work. The study therefore recommends further investigation in developing the mosque design.

Acknowledgment
We acknowledge the Ministry of Higher Education of Indonesia (Menristekditi), which funded this work through the Hibah Strategi Nasional Institusi. We also thank all contributors, i.e., the Architecture students of Syiah Kuala University, who collected the data and prepared the architectural drawings of the traditional mosques as research documentation.
2020-03-12T10:45:17.984Z
2020-03-06T00:00:00.000
{ "year": 2020, "sha1": "23fc89f5c5aed9cdb53fa4f1b685695d1446141a", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/737/1/012021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1a135c6f48f6df48c47d2b27f497560ab56637c9", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Engineering" ] }
115980881
pes2o/s2orc
v3-fos-license
Loads of Sewer Manholes within Mining Areas ABSTRACT Purpose: The purpose of this paper is to present a method of taking into account the additional external horizontal loads acting on sewer manholes within mining areas, caused by the impact of horizontal strains in the subsurface soil layer. Methods: The dependencies of the changes in the cross-sections of flexible manholes' riser pipes (with different circumferential stiffness) on the values of horizontal soil strains are determined on the basis of laboratory tests. Results: The results include formulas for determining the values of the external horizontal loads acting on sewer manholes within mining areas, in particular flexible manholes made of thermoplastics. Practical implications: The results will be used for assessing the conditions in which sewer manholes can be used within mining areas, and will be beneficial for design, protection, and the assessment of resistance to horizontal strains. Originality/Value: The presented method is an original concept. It enables the determination of the additional external horizontal loads acting on sewer manholes within mining areas, in particular flexible manholes made of thermoplastics, as well as the determination of the dependencies of the changes in the cross-sections of risers of flexible sewer manholes (with different circumferential stiffness) on the horizontal soil strains.

INTRODUCTION
The impact of mining deformations in the subsurface soil layer on the structure of sewer manholes is manifested mainly through horizontal strains. This interaction may cause the bearing capacity of manhole components to be exceeded, leading to failure, and it changes the shape of the cross-sections of flexible (deformable) objects. In the case of modular manholes made of plastics or of prefabricated concrete or reinforced-concrete elements, the interaction can also cause angular deviations of the elements and even a loss of tightness. Evaluating the conditions in which sewer manholes can be used within mining areas consists of determining the admissible foundation depth for the expected values of horizontal strains in the subsurface soil layer and for the type and condition of the soil. To determine this depth, the values of the external horizontal loads must be known. These values constitute the basis for calculating the bending moments and axial forces acting on the walls of such objects, which are then compared with their bearing capacity. The loads are also used to determine the relative deformation of the flexible manhole cross-section, which is compared with the permissible deformation (Kalisz, 2010). This paper focuses on the impact of horizontal strains in the subsurface soil layer, caused by underground mining, on the walls of both flexible and rigid sewer manholes. It is assumed that the groundwater level does not reach the bottom of the manhole. Exemplary results of preliminary laboratory tests on the impact of horizontal soil strains on models of flexible risers of sewer manholes made of plastics with different circumferential stiffness are presented. As in the case of pipelines (Mokrosz, 1998), the cross-section deformability of the riser of a sewer manhole means its susceptibility to a change of shape under unevenly distributed external horizontal forces. This change affects the values and distribution of the external loads induced by the soil.
An important parameter characterizing the flexibility of sewer manholes made of plastics is the circumferential stiffness of the manhole riser pipes, determined from test results (Rydarowski & Walczak, 2000). These riser pipes are produced in the stiffness classes SN 2, 4, 8, 12, and 16 kN/m².

The initial state of loads of flexible and rigid manholes
Before mining deformations occur in the subsurface soil layer, the distribution of the external horizontal loads on sewer manholes with a circular cross-section is assumed to be even, with the load value increasing with the foundation depth of the manhole (Fig. 1a and Fig. 1b). In the case of sewer manholes made of plastics, a slight unevenness of the loads around the circumference can occur, caused by uneven compaction of the non-cohesive backfill layers. The stresses in the soil layer at a depth z amount to (Wiłun, 2013)

σ11 = γz + qn + q,  (1)

σ22 = σ33 = ξ0 σ11,  (2)

where: σ11 – vertical soil stress; σ22 – horizontal soil stress in the main direction x (Fig. 2); σ33 – horizontal soil stress in the main direction y (Fig. 2); ξ0 – at-rest soil pressure coefficient; γ – unit weight of soil; z – foundation depth of the analysed cross-section of the manhole; qn – surface weight; q – useful surcharge load over the analysed cross-section of the manhole.

External horizontal loads on sewer manholes create circumferential compressive forces in their walls. Thermoplastics have viscoplastic properties and therefore creep under load: the walls of the manhole deform, and the diameter of the riser pipe reduces slightly. The soil around the manhole loosens, which leads to an active limit state. The lateral soil pressure coefficient reduces to the limit value ξr (Petroff, 1994), which is then assumed in static calculations instead of the at-rest coefficient ξ0. In such cases, a load unevenness of 21% around the circumference of the object is assumed.

Loads of flexible and rigid manholes
Depending on the position of the exploitation edge, the impact of horizontal soil strains around the walls of a sewer manhole passes through three stages: horizontal soil loosening, compacting, and loosening again. Loosening of the subsurface soil layer reduces the external horizontal loads on objects buried in it, both parallel and perpendicular to the exploitation edge. At the same time, the horizontal loads on the walls become unevenly distributed, which changes the axial forces and bending moments. In contrast to flexible objects, the cross-section deformation of rigid manholes is so small that it has no impact on the values and distribution of the uneven loads acting on them (Mokrosz, 1998). During horizontal loosening of the subsurface layer of non-cohesive soil, an active limit state occurs at horizontal strains of 2-3 mm/m. The horizontal stresses acting perpendicular to the exploitation edge (x-axis, Fig. 2) then reduce to the active value,

σ22 = ξr σ11,  (3)

where ξr is the active soil pressure coefficient, which depends on the Poisson's ratio ν of the soil.
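A minimal numerical sketch of relations (1)-(3) follows. The soil parameters are plausible placeholder values chosen for illustration, and the elastic estimate ξ0 = ν/(1 − ν) is a common textbook approximation; none of these numbers are taken from the paper.

```python
# Vertical and horizontal soil stresses on a manhole wall, after Eqs. (1)-(3).
# All parameter values are illustrative assumptions, not data from the paper.
GAMMA = 18.0e3   # N/m³, unit weight of non-cohesive backfill (assumed)
Q_N   = 2.0e3    # N/m², surface weight qn (assumed)
Q     = 10.0e3   # N/m², useful surcharge q, e.g. traffic (assumed)
NU    = 0.30     # Poisson's ratio of the soil (assumed)

XI_0 = NU / (1.0 - NU)   # at-rest coefficient, elastic estimate (assumed)
XI_R = 0.33              # active pressure coefficient xi_r (assumed value)

def sigma_11(z):
    """Vertical stress at depth z, Eq. (1): sigma11 = gamma*z + qn + q."""
    return GAMMA * z + Q_N + Q

def sigma_h(z, xi):
    """Horizontal stress, Eqs. (2)-(3): sigma22 = sigma33 = xi * sigma11."""
    return xi * sigma_11(z)

for z in (1.0, 2.0, 3.0):   # foundation depths of analysed sections, m
    print(f"z = {z:.0f} m: sigma11 = {sigma_11(z)/1e3:5.1f} kPa, "
          f"at rest sigma22 = {sigma_h(z, XI_0)/1e3:5.1f} kPa, "
          f"active sigma22 = {sigma_h(z, XI_R)/1e3:5.1f} kPa")
```

The drop from the at-rest column to the active column illustrates the load relief that creep of the plastic wall, or mining-induced loosening, produces before compacting later reverses the trend.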
Rigid manholes - compacting of the subsurface soil layer
Compacting of the subsurface soil layer increases the values of the external horizontal loads on rigid manholes buried in it. In extreme cases, in non-cohesive soils, the pressure can reach the passive value at deformations of 30-35 mm/m. In the immediate vicinity of the walls of rigid manholes there is a significant concentration of horizontal soil strains, which produces a spatial deformation state and a spatial stress state within that area; the accompanying slight changes in the vertical stresses are ignored. The changes in the horizontal stresses Δσ22 and Δσ33 during soil compacting, taking into account the concentration of horizontal strains ε described by the coefficient k0 (Kwiatek, 1998), can be determined from dependences relating the stress increments to the concentrated strain k0ε through the soil stiffness and the Poisson's ratio ν.

Flexible manholes
In cases where an active limit state has not yet occurred as a result of the creeping of the plastic, loosening of the subsurface soil layer reduces the external horizontal loads on flexible manholes. Simultaneously, the horizontal loads become unevenly distributed around the circumference of the object, which deforms its cross-section (the axis perpendicular to the exploitation edge lengthens). Owing to the flexibility of the manhole cross-section, this load unevenness is smaller than for rigid objects. Moreover, the changes in soil strain are smaller in the immediate vicinity of flexible manholes than around rigid objects, where a disturbance develops; an active limit state in the soil adjacent to a flexible object occurs later than in the soil farther from it. The complete deformation of the cross-section of a flexible manhole, 2s1, depends on the circumferential stiffness of the riser pipe and the values of the horizontal soil strains; at the soil-loosening stage it can be determined from

2s1 = Δα^r d = α1^r Δε d,

where Δα^r is the increment of the relative deformation of the riser cross-section of the flexible manhole caused by soil loosening, α1^r is the coefficient of the relative deformation of the riser cross-section for soil loosening, Δε is the increment of horizontal soil strain, and d is the average diameter of the manhole cross-section. The coefficient of relative deformation of the object's cross-section, α1, has been introduced on the basis of the work (Kalisz, 2001); it is the derivative of the function α(ε) describing the dependence of the increment of deformation of the object's cross-section on the horizontal strains of the subsurface soil layer caused by mining exploitation, taken with respect to those strains. In the case of a linear function, the coefficient α1 is a constant equal to the ratio of the increment of the relative deformation of the object's cross-section to the increment of the strains which induce it:

α1 = Δα / Δε.  (14)

The value of α1 depends on the circumferential stiffness of the object and on the properties and compaction of the non-cohesive backfill soil, and may also depend on the foundation depth of the object. The values of α1 can be determined from the dependencies, obtained in experimental tests, of the cross-section deformation of manhole riser pipes with different circumferential stiffness on the values of horizontal soil strains.
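A minimal sketch of the loosening relation just given, 2s1 = α1·Δε·d. The α1 values and pipe diameters anticipate the two laboratory experiments reported further below; the strain values are assumed mining-induced strains chosen for illustration.

```python
# Cross-section deformation of a flexible riser: 2*s1 = alpha1 * d_eps * d.
# alpha1 and d follow the two lab samples described below; the strains are
# assumed illustrative values, not measurements.

def diameter_change_mm(alpha1, strain_mm_per_m, d_mm):
    """Total diameter change 2*s1 in mm for a given horizontal soil strain."""
    d_eps = strain_mm_per_m / 1000.0   # mm/m -> dimensionless strain
    return alpha1 * d_eps * d_mm

samples = [
    ("PP,  SN 0.5 kN/m²", 1.85, 157.5),   # experiment no. 1 below
    ("PVC, SN 8 kN/m²  ", 0.22, 200.0),   # experiment no. 2 below
]
for label, alpha1, d in samples:
    for eps in (2.0, 5.0, 10.0):          # horizontal soil strain, mm/m
        two_s1 = diameter_change_mm(alpha1, eps, d)
        print(f"{label}  eps = {eps:4.1f} mm/m -> 2s1 = {two_s1:5.2f} mm")
```

At the same strain, the stiff PVC riser deforms by an order of magnitude less than the low-stiffness PP sample, which is precisely the dependence on circumferential stiffness that the formula expresses.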
Soil compacting leads to significant unevenness in the distribution of the external horizontal loads on the manhole. In the case of a flexible manhole, the cross-section deforms, so this unevenness is reduced in comparison with a rigid manhole: the wall of the flexible object moves back, under the soil pressure, in the direction perpendicular to the exploitation edge. In such cases, the horizontal soil strain in the area adjacent to the manhole is lower than for a rigid object; it can even be lower than the mining-induced horizontal strains in the soil layer outside the influence of the buried object, and the pressure is correspondingly lower. In the direction parallel to the exploitation edge, the deformation of the object's cross-section causes an additional deformation and a passive soil pressure near the manhole walls. Compacting of the subsurface soil layer causes much greater changes in the loads, and much greater unevenness around the circumference of flexible manholes, than loosening does; the impact of horizontal soil compacting on the structures of flexible sewer manholes is therefore the most adverse.

Figure 2 shows the distribution and values of the loads around the circumference of a flexible sewer manhole under horizontal soil compacting, using a modified Molin model (Janson, 2003; Kalisz, 2010). It is assumed that the axes of the circular cross-section of the flexible object change by the value 2s1 = Δα^z d, both parallel (y-axis) and perpendicular (x-axis) to the exploitation edge (Kalisz, 2010), where Δα^z is the increment of the relative deformation of the object's cross-section caused by horizontal compacting of the soil layer by the strain ε. In the direction parallel to the exploitation edge, the horizontal stresses also increase, and the deformation of the flexible manhole cross-section additionally produces a passive soil pressure. The extreme horizontal stresses acting on the walls of the riser pipe in this direction are thus the sum of the compaction-induced stresses and σomax, the extreme passive soil pressure in the y-axis. The value of the passive soil pressure is proportional to the displacement of the flexible object's wall and depends on the type and condition of the soil, characterized by the secant modulus of horizontal soil reaction E's. Analogously to flexible pipes, the passive soil pressure is assumed to have a parabolic distribution (Janson, 2003), with its maximum value σomax in the direction parallel to the exploitation edge. In the assumed distribution of horizontal loads, the soil pressure σ33 is the sum of the at-rest soil pressure characterized by the lateral earth pressure coefficient ξ0, the stresses resulting from the compaction of the soil layer perpendicular to the exploitation edge, and the parabolic passive soil pressure caused by the displacement of the object's walls, where a is the coefficient resulting from the distribution of the passive soil pressure, a = 0.51, and E is the modulus of soil elasticity. The difference of the horizontal stresses σ acting on the walls of flexible manholes at the soil-compacting stage follows from these relations.

LABORATORY TESTS CONCERNING THE IMPACT OF MINING EXPLOITATION ON MANHOLES PLACED IN THE SUBSURFACE SOIL LAYER
So far, only preliminary laboratory tests have been carried out, consisting of determining the dependence of the changes in the deflection of the circular cross-sections of flexible manhole models on the horizontal strains of the surrounding non-cohesive soil layer and on the circumferential stiffness of the models (Kalisz, 2010; Zięba & Kalisz, 2012). Equipment enabling the simulation of the impact of horizontal soil strains on the lateral loads of underground pipelines (Kalisz, 2001) and sewer manholes was used for these preliminary tests.
The changes in the diameter of the cross-section of the flexible manhole model were measured in the direction in which the strains were induced. The purpose of these measurements was to determine the impact of horizontal soil compacting (the most adverse stage of mining impact on these objects) on the changes in the shape of the cross-sections of the flexible object models. Example results of two experiments (Zięba & Kalisz, 2012), carried out on flexible object models made of plastics with different circumferential stiffness, are presented below. The relative deformation α of the model's cross-section was determined from dependence (13), and, in the case of a linear dependence of the deformation of the flexible object's cross-section on the horizontal strains of the soil layer, the coefficient α1 was determined from dependence (14). The graph (Fig. 3) presents the measured dependencies of the relative deformation of the cross-sections of the flexible manhole models on the horizontal strains induced in compacted non-cohesive soil. Photos 1 and 2 show the condition of the cross-sections of the flexible manhole models before and after the laboratory tests (Photo 2: model of a manhole made of polyvinyl chloride before (left) and after (right) the tests).

Experiment no. 1 was performed on a sample of pipe made of polypropylene (PP) with the following parameters: average external diameter d = 157.5 mm, wall thickness s = 2.5 mm, and circumferential stiffness SN 0.5 kN/m². The value of the coefficient of relative deformation of the tested sample's cross-section, obtained over the strain range from 2.5 to 10 mm/m, was α1 = 1.85. Experiment no. 2 was performed on a sample of pipe made of polyvinyl chloride (PVC) with the following parameters: average external diameter d = 200 mm, wall thickness s = 6.1 mm, and circumferential stiffness SN 8 kN/m². The value of the coefficient of relative deformation of the tested sample's cross-section, obtained over the strain range from 0 to 12.5 mm/m, was α1 = 0.22.
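Two quick checks of these results. The first estimates α1 as the zero-intercept least-squares slope of (strain, deformation) pairs of the kind plotted in Fig. 3; the pairs used here are made-up stand-ins, not the measured data. The second back-calculates the ring stiffness of the PVC sample from its geometry using SN = E·I/dm³ with I = s³/12 per unit length, the standard ring-stiffness definition for plastic pipes, assumed here together with a typical short-term PVC modulus.

```python
# 1) Estimate alpha1 = d(alpha)/d(eps) via a zero-intercept least-squares
#    fit, as in Eq. (14). The pairs below are illustrative, not Fig. 3 data.
eps   = [2.5, 5.0, 7.5, 10.0]    # horizontal soil strain, mm/m
alpha = [4.5, 9.3, 13.8, 18.6]   # relative cross-section deformation, mm/m

alpha1 = sum(e * a for e, a in zip(eps, alpha)) / sum(e * e for e in eps)
print(f"fitted alpha1 = {alpha1:.2f}")   # ~1.85 for these illustrative data

# 2) Back-calculate the ring stiffness of the PVC sample (d = 200 mm,
#    s = 6.1 mm) from SN = E*I/dm³ (assumed standard definition).
E  = 3.0e9            # Pa, short-term modulus of PVC (assumed)
s  = 6.1e-3           # m, wall thickness
d  = 200.0e-3         # m, external diameter
dm = d - s            # m, mean diameter
I  = s**3 / 12.0      # m⁴ per metre of pipe length
print(f"SN = {E * I / dm**3 / 1e3:.1f} kN/m²")   # ~7.8, close to class SN 8
```

Both numbers land close to the paper's values, supporting the reading of α1 as the slope of the Fig. 3 lines and confirming that the PVC sample's nominal SN 8 class is consistent with its wall geometry.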
SUMMARY AND CONCLUSIONS
The impact of underground mining exploitation on sewer manholes placed in the subsurface soil layer is manifested mainly through the action of horizontal soil strains on the manhole walls. These strains change the values and distribution of the external horizontal loads. In comparison with rigid manholes, the unevenness of the load distribution is lower for flexible objects; this, in turn, is what deforms their cross-sections. Horizontal soil strains can lead to failures in the structure of manholes as well as a loss of tightness at the joints of elements. The evaluation of the possibilities and conditions in which sewer manholes can be used within mining areas consists of determining the foundation depth for the expected horizontal strains of the subsurface soil layer and for the type and condition of the soil. To determine this foundation depth, the values of the external horizontal loads must be known, and to describe the distribution and values of the horizontal loads on flexible objects, the dependencies of their cross-section deformation on the horizontal soil strains, for different circumferential stiffnesses and foundation conditions, must be known as well.

The foundation depth of standard manufactured sewer manholes should generally be limited, especially in category III and IV mining areas, or additional reinforcement should be applied to their bottom parts. Based on the laboratory tests determining the experimental dependencies of the deflection of the circular cross-sections of flexible manhole models made of polypropylene (PP) and polyvinyl chloride (PVC) on the horizontal strains of the surrounding compacted non-cohesive soil, for models of different circumferential stiffness, it was found that:
- the deformation of the flexible manhole cross-section depends not only on the horizontal soil strains caused by mining exploitation but also on the circumferential stiffness of the riser pipe: the lower the circumferential stiffness, the higher the coefficient of relative deformation of the cross-section α1;
- the initial compaction of the non-cohesive soil constituting the backfill plays an essential role when placing a sewer manhole, since it determines an even load and proper cooperation of the object with the surrounding soil, including its resistance to settlement caused by dynamic traffic loads. As the strain concentration coefficient increases, the flexible object's cross-section deforms more. The foundation depth of the flexible object in the subsurface soil layer is also important, as is the value of the vertical stresses σ11; consequently, the pressure of the backfill on the object's walls depends on this foundation depth.
2019-04-16T13:24:22.003Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "5e5c7368a984d5308af7ac09e91ced3d60c5d179", "oa_license": "CCBY", "oa_url": "https://doi.org/10.46873/2300-3960.1248", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "67f8c099eddb966e97818fbfa834818b104a3a53", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
255835058
pes2o/s2orc
v3-fos-license
Elevation of tumour markers TGF-β, M2-PK, OV-6 and AFP in hepatocellular carcinoma (HCC)-induced rats and their suppression by the microalga Chlorella vulgaris Chlorella vulgaris (ChV), a unicellular green alga, has been reported to have anticancer and antioxidant effects. The aim of this study was to determine the chemopreventive effect of ChV in liver cancer-induced rats by determining the level and expression of several liver tumour markers. Male Wistar rats (200-250 g) were divided into 4 groups according to the diet given: a control group (normal diet), ChV groups at three different doses (50, 150 and 300 mg/kg body weight), a liver cancer-induced group (choline-deficient diet + 0.1% ethionine in drinking water; the CDE group), and treatment groups (CDE treated with the three doses of ChV). Rats were killed at 0, 4, 8 and 12 weeks of the experiment, and blood and tissue samples were taken from all groups for determination of the expression of the tumour markers alpha-fetoprotein (AFP), transforming growth factor-β (TGF-β), M2-pyruvate kinase (M2-PK) and the oval-cell-specific antigen OV-6. The serum level of TGF-β increased significantly (p < 0.05) in CDE rats; however, ChV at all doses decreased (p < 0.05) it to control values. Expression of the liver tumour markers AFP, TGF-β, M2-PK and OV-6 was significantly higher (p < 0.05) in tissues of CDE rats than in controls, reflecting an increased number of cancer cells during hepatocarcinogenesis, and ChV at all doses reduced their expression significantly (p < 0.05). Chlorella vulgaris thus has a chemopreventive effect, downregulating the expression of the tumour markers M2-PK, OV-6, AFP and TGF-β in HCC-induced rats.

Background
Cancer formation is a complex process involving several stages, namely initiation, promotion and progression [1,2]. These stages are described as a series of successive mutations that ultimately lead to malignant tumour growth [3]. Increasing evidence shows that the accumulation of free radicals in the body causes a variety of biochemical and physiological abnormalities associated with cardiovascular disease, cancer and the aging process [4-6]. Mutation of tumour suppressor genes and activation of proto-oncogenes transform normal cells into cancer cells, which grow rapidly and metastasize to other parts of the body [7]. To preserve the integrity of an organism, the production of free radicals must be kept in balance by antioxidants. Antioxidants are enzymes [8] or non-enzymatic compounds, mainly found in the diet, that can either scavenge free radicals directly or regulate their level [9-12]. Fortunately, the presence of most human cancers can be detected by tumour markers, which should have high specificity and few false positives for specific tumours, and should be undetectable in non-neoplastic conditions. Effective screening strategies using molecular markers have saved or improved the quality of life of many patients. Alpha-fetoprotein (AFP) is the most commonly used tumour marker for hepatocellular carcinoma (HCC) screening [13]; a high AFP level indicates a worse prognosis than a lower level [14]. Sangiovani et al. (2001) [15] found that cirrhosis patients with elevated AFP levels have a higher risk of developing HCC. Serum AFP has also been reported to increase in chronic hepatitis B patients [16], in fatty liver disease [17] and in the metabolic syndrome [18].
Although AFP is considered the gold-standard marker for HCC, it is not useful in the early diagnosis of the disease, particularly in AFP-negative HCC, suggesting that new biomarkers are needed [19]. Other liver tumour markers of interest are the pyruvate kinase isoenzyme M2-PK, transforming growth factor-β (TGF-β), and the oval-cell-specific antigen OV-6. M2-PK is highly expressed in oval cells, the precursors of liver tumours, and catalyzes the conversion of phosphoenolpyruvate to pyruvate [20]. When a normal cell transforms into a tumour cell, M2-PK expression is upregulated, owing to the unusually high rate of anaerobic glycolysis [21], and the enzyme is released into the blood, where it can be measured quantitatively [22]. TGF-β has recently been considered a possible liver tumour marker: TGF-β1 and TGF-β1 mRNA were shown to be sensitive indicators in the diagnosis of HBV-induced HCC, with a sensitivity of 89.5% and a specificity of 94.0% [23]. Oval cells were first described by Farber [24]. They are oval-shaped cells that are not present in the normal liver [25,26], but their numbers increase during liver regeneration and in the early stage of HCC; the specific marker for oval cells is OV-6 [27]. Chlorella vulgaris (ChV), a unicellular green alga, has long been used as a health supplement, especially in Japan and Korea [28,29]. ChV has a high content of nutrients, including vitamins and minerals [30]. It has been shown to strengthen the immune system [31,32] and to exhibit an anti-inflammatory effect [32,33]. In addition, our laboratory has shown that ChV reduces cellular proliferation and induces apoptosis in the hepatoma cell line HepG2 [34-36], and exhibits antioxidant properties in hepatoma-induced rats [35] and STZ-induced diabetic rats [37,38]. The main objective of this study was to evaluate the antitumour property of ChV in liver cancer-induced rats by determining the level and expression of the established tumour marker AFP and comparing it with the other candidate markers M2-PK, OV-6 and TGF-β.

Methods
Chlorella vulgaris culture
C. vulgaris Beijerinck 072, obtained from UMACC (University of Malaya Algae Culture Collection), Malaysia, was grown in Bold Basal Medium in a laboratory setting with a 12 h dark/light cycle and harvested by centrifugation (3000 rpm, 10 min, three times, at 4°C). The pelleted algae were resuspended in distilled water to give doses of 50, 150 and 300 mg/kg body weight.

Animals, chemicals and treatment
A total of 144 male Wistar rats (200 to 250 g) were obtained from the Animal Unit, National University of Malaysia, and were housed in polycarbonate cages in the animal house of the Institute for Medical Research (IMR), Malaysia, in a room with controlled temperature, humidity, and light-dark cycle. All experiments were conducted following the National Institutes of Health guidelines for the care and use of laboratory animals. The study was approved by the Animal Ethics Committee of the National University of Malaysia (approval number: BIOK/2002/YASMIN/30-SEPTEMBER/082). All animals received adequate humane care. The rats were divided into four groups (6 rats per group): a control group (normal diet), ChV groups at three different doses (50, 150 and 300 mg/kg body weight), a liver cancer-induced group (choline-deficient diet + 0.1% ethionine in drinking water to induce cancer; CDE) [31,36,39], and treatment groups (CDE treated with the three doses of ChV).
The rats in the control group were given a normal diet (normal rat chow) and drinking water ad libitum. The rats in the ChV groups were administered ChV alone at the three doses (50, 150 and 300 mg/kg body weight) per day via oral gavage. The rats in the CDE + ChV groups were administered CDE together with ChV at 50, 150 or 300 mg/kg. The experiment lasted three months, and the rats were sacrificed at 0, 4, 8, and 12 weeks. Animals were anesthetized for the liver perfusion procedure prior to excision of the liver. Liver tissue was excised, fixed in formalin and embedded in paraffin for immunohistochemical analysis. Blood was taken from the orbital sinus prior to killing the rats at 0, 4, 8 and 12 weeks for determination of TGF-β.

Hepatic perfusion
Rats were anesthetized intraperitoneally with Zoletil 50 (0.1 ml/100 g body weight), followed by an injection of 0.2 ml of heparin (25,000 U/ml) into the inferior vena cava. The portal vein was cannulated with an intravenous catheter needle (16 G IV catheter, 2.25 in.) for the perfusion. The liver was perfused with phosphate-buffered saline (PBS) for 1 min at a flow rate of 10 ml/min, followed by perfusion with 4% paraformaldehyde and 0.1% glutaraldehyde (1:1) for three minutes at room temperature. The liver was then perfused again with PBS for two minutes and rinsed with PBS. A portion of the perfused liver was cut, fixed in 10% formalin, processed and embedded in paraffin.

Determination of serum TGF-β
Blood obtained from the orbital sinus was collected in a tube and allowed to clot for two hours before centrifugation (3000 rpm, 10 min, 4°C). The serum obtained was stored at −80°C. The level of transforming growth factor-β (TGF-β) in the serum was determined by ELISA (BD Pharmingen, USA) according to the manufacturer's protocol.

Immunohistochemical staining for AFP, M2-PK, TGF-β and OV-6
Sequential tissue sections (3 μm) were mounted on poly-L-lysine-coated slides. Archival samples were dewaxed by gradual washes in xylene and then taken through a graded alcohol series (100, 80, 60 and 40%). Slides were then incubated in 3% hydrogen peroxide in distilled water to quench endogenous peroxidase activity, after which they were washed under running water. Antigen retrieval was performed by incubating the slides in a preheated Coplin jar containing Target Retrieval Solution (TRS) pH 9 (Dako, Glostrup, Denmark) for 20 min in a water bath at 95°C to 99°C. After this thermal treatment, the slides were allowed to cool for 20 min at room temperature, washed under running water for three minutes, and placed in Tris-buffered saline (TBS), pH 7.6. Sections were then incubated for 35 min with a primary monoclonal antibody: rabbit anti-human AFP (Dako, USA) at 1:200 dilution, goat anti-human M2-PK (Biodesign, USA) at 1:600 dilution, mouse anti-rat OV-6 (a gift from Dr. Stewart Sell, USA) at 1:400 dilution, or mouse anti-human TGF-β at 1:200 dilution. Sections were then incubated with a horseradish peroxidase-conjugated secondary antibody, and diaminobenzidine (DAB) was used as the chromogenic substrate (LSAB-HRP kit, Dako) to visualize the antibody-antigen reaction. All sections were counterstained with hematoxylin, mounted with the permanent mountant DPX, and examined under light microscopy for assessment of immunoreactivity.
Human liver cancer tissue known to be positive for AFP expression was included as the positive control for AFP; cancer-induced liver tissues from a previous experiment were used as positive controls for both M2-PK and OV-6 immunostaining; and lesion-induced gastric tissue was used as the positive control for TGF-β.

Immunoreactivity assessment
A researcher with no knowledge of the clinicopathologic data evaluated the slides in a blinded fashion. The diagnosis was confirmed by a pathologist evaluating the same slides independently; most slides were classified identically by both investigators. Results were expressed as the percentage of stained cells over the total cells counted in ten different fields, with 100 stained or non-stained cells counted in each field at a 40× objective [40].

Statistical analysis
Statistical analysis was performed by ANOVA using the SPSS program ver. 11.0. Results are presented as mean ± SEM, with p < 0.05 considered a significant difference.
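A minimal sketch of this scoring and testing pipeline. The group names mirror the study design, but every cell count below is a fabricated placeholder, not the study's data:

```python
# Percent positive cells per field (100 cells counted in each of 10 fields),
# followed by a one-way ANOVA across groups, mirroring the SPSS analysis.
import statistics
from scipy import stats

fields = {  # stained cells out of 100, in each of ten fields (made-up data)
    "control":    [2, 1, 3, 2, 1, 2, 3, 1, 2, 2],
    "CDE":        [34, 40, 38, 31, 36, 42, 35, 39, 37, 33],
    "CDE+ChV300": [12, 9, 14, 10, 11, 13, 8, 12, 10, 11],
}

for name, counts in fields.items():
    mean = statistics.mean(counts)
    sem = statistics.stdev(counts) / len(counts) ** 0.5
    print(f"{name:11s} {mean:5.1f} ± {sem:4.2f} % positive cells")

f_stat, p = stats.f_oneway(*fields.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2g} -> "
      f"{'significant' if p < 0.05 else 'not significant'} at p < 0.05")
```

In the study itself, a significant ANOVA would then be followed by pairwise comparisons between the CDE group and each treatment group at each time point.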
Results
Figure 1b confirms the formation of oval cells, which are absent in the control group (Fig. 1a), indicating the cellular changes expected in liver cancer; this shows that rats treated with CDE are a good model of liver cancer. Figure 2 clearly shows the hepatoprotective effect of ChV in reducing the elevated serum TGF-β levels of liver cancer (CDE) rats. As seen from the figure, the TGF-β level was significantly increased (p < 0.05) at all weeks on the carcinogenic (CDE) diet compared to control, and was significantly higher after 8 and 12 weeks (p < 0.05) than after 4 weeks. Treatment with ChV at all doses reduced the TGF-β level at all weeks; in rats fed ChV at 300 mg/kg body weight, TGF-β was brought almost back to the control level. ChV alone did not raise TGF-β above the control level, indicating no toxic effect on the liver.

As can be seen from Fig. 3, liver cancer rats showed brown-stained cells (arrows in the middle panel), indicating positive expression of AFP (Fig. 3b), OV-6 (Fig. 3e), M2-PK (Fig. 3h) and TGF-β (Fig. 3k), compared to rats fed a normal diet (Fig. 3a, d, g and j). In liver cancer rats treated with 300 mg/kg ChV, the proteins were suppressed, as evidenced by the smaller number of brown-stained cells for AFP (Fig. 3c), OV-6 (Fig. 3f), M2-PK (Fig. 3i) and TGF-β (Fig. 3l). Figure 4a shows that AFP expression in liver tissues was significantly increased (p < 0.05) in the CDE (liver cancer) group at weeks 8 and 12, but treatment with all concentrations of ChV significantly (p < 0.05) reduced its expression, with the greatest reduction at 300 mg/kg body weight (p < 0.05). Similarly, OV-6 (Fig. 4b) was significantly (p < 0.05) expressed in the CDE group, with no significant difference between 8 and 12 weeks of HCC induction. Treatment of the CDE group with 150 and 300 mg/kg body weight ChV significantly reduced (p < 0.05) OV-6 expression at weeks 8 and 12, and treatment with 300 mg/kg for 12 weeks resulted in significant (p < 0.05) OV-6 suppression compared to week 4. M2-PK (Fig. 4c) was significantly expressed (p < 0.05) in the CDE group, where its expression peaked at 8 weeks and fell by more than 2% at 12 weeks. Treatment with 300 mg/kg body weight of ChV significantly reduced M2-PK expression at all weeks of supplementation compared to the CDE group, while the lower doses (50 and 150 mg/kg body weight) significantly (p < 0.05) reduced M2-PK expression at weeks 8 and 12. TGF-β (Fig. 4d) was also significantly expressed (p < 0.05) in the CDE group, increasing with the time of exposure to the carcinogen and peaking at 12 weeks. Treatment with 150 and 300 mg/kg body weight ChV significantly reduced TGF-β expression at all weeks of treatment, whereas the lower dose (50 mg/kg body weight) significantly reduced it at weeks 8 and 12 only (Fig. 4: quantitative expression of (a) AFP, (b) OV-6, (c) M2-PK and (d) TGF-β in liver cancer tissues; a - significant difference (p < 0.05) vs control; b - vs CDE; 1 - vs week 4; 2 - vs week 8). Interestingly, the changes in the serum level of TGF-β under ChV treatment are in concordance with its expression in tissues (Figs. 2 and 4); we documented the same concordance for serum AFP in a previous report [41].

Discussion
Hepatocellular carcinoma (HCC) affects approximately one million individuals annually worldwide, making it one of the world's most lethal cancers [42,43] and the most common form in adults [43]. In 2016, the American Cancer Society estimated that there would be over 39,000 new cases of primary liver cancer and intrahepatic bile duct cancer in the United States alone, and that over 27,000 people would die from these cancers [43]. Worldwide, the most common primary liver cancer is HCC, which accounts for 70% to 90% of cases [43]. It is therefore crucial to diagnose HCC early to improve the survival of patients afflicted with the disease. AFP is the gold-standard tumour marker for HCC, and its expression is upregulated during hepatocarcinogenesis, hence its use as the standard biomarker in liver cancer screening [13,44]. However, the specificity and sensitivity of AFP in liver cancer screening are not satisfactory [45,46], although it is useful for HCC surveillance in patients with cirrhosis [47]. The outcome of this research indicates the potential of OV-6, M2-PK and TGF-β as liver tumour biomarkers besides AFP, while attesting to the beneficial effect of ChV extract as an anticancer treatment. OV-6 and M2-PK have previously been shown to be overexpressed in oval cells following HCC induction by the choline-deficient + ethionine (CDE) diet [26,48,49]. Oval cells play an important role in the development of HCC [50]: their appearance is thought to be one of the first cellular changes in hepatic neoplasia following exposure of the tissue to chemical carcinogens such as ethionine [39,51]. Oval cells are normally quiescent, proliferating when triggered by toxic compounds or insults; they are not seen in normal liver, as observed in this study and others [52,53]. Our earlier studies showed the development of liver cancer nodules [36] and increased serum AFP in rats fed the CDE diet [41]; here we showed that AFP is detected in the hepatocytes of liver cancer rats. High AFP levels have been found in 60-70% of patients with HCC and are associated with poor prognosis and survival in untreated patients [54].
AFP also correlates closely with the growth rate (number of dividing cells) and size of the tumour [55], and progressive elevation of alpha-fetoprotein has been reported in biopsied liver samples of patients with liver cirrhosis and hepatocellular carcinoma [56]. Its re-expression in patients with HCC suggests abnormal or altered liver cell regeneration, or dedifferentiation of hepatocytes into tumour cells [57]. Lowes et al. [58] documented three types of oval cell populations: 1) primitive oval cells that do not express AFP, OV-6, CK-19 or π-GST; 2) oval-shaped cells resembling hepatocytes that express AFP but not OV-6, CK-19 or π-GST; and 3) oval-shaped cells resembling duct cells that express OV-6, CK-19 and π-GST but not AFP. In this study, however, OV-6 was expressed in the cytoplasm of oval-shaped cells resembling both hepatocytes and bile duct cells. These results are supported by studies stating that oval cells are multipotent, able to differentiate into hepatocytes [59] and bile ducts [60]. The increased number of oval cells in CDE rats was reflected in the increased expression of OV-6, consistent with other studies observing that the number of oval cells is directly proportional to the severity of the disease [25,58,61]. Expression of OV-6, a specific marker for the presence of oval cells, has been reported in the early stages of liver regeneration in humans and animals, due either to injury or to inhibition of hepatocyte replication [52,62]. These cells play an important role in the development of HCC [27]; the appearance of the oval cell population is deemed to be the first cellular change that occurs in the neoplastic liver, following the intrusion of toxic substances, in particular carcinogens such as ethionine [58,61,63]. In rats given the carcinogen N-nitrosomorpholine (NNM) or the CDE diet, M2-PK expression can be observed in the cytoplasm of oval cells in the liver tissue [64-66]. However, not all oval cells are M2-PK-positive; the outcome depends on the fate of the oval cell, whether it differentiates into a hepatocyte or a bile duct cell [59,60]. In this study, the level of M2-PK expression was elevated in liver cancer-induced rats compared to controls, plausibly due to the increased glycolytic rate [21] that follows the transformation of the cell into a cancerous state [20,64]. However, M2-PK may not be a selective biomarker for liver cancer, since its blood level has been observed to rise in other cancers, including lung [67], breast [68], cervical [69], and oesophageal and gastric [21] cancers. Antioxidants, especially those derived from plant sources, are reported to prevent carcinogenesis through the suppression of cell proliferation, the stimulation of apoptosis, and the scavenging of free radicals [28]. Our previous study showed that ChV (300 mg/kg body weight) significantly reduced the percentage of CDE-induced preneoplastic liver nodules (0.1 to 0.5 cm in size) from 100% to 17% [36]. ChV has antiproliferative and pro-apoptotic effects against HepG2 liver cancer cells [34,36,70]. In addition, Sulaiman et al. [35] showed that ChV treatment lowered superoxide dismutase and catalase activities, increased glutathione peroxidase activity, and reduced malondialdehyde in rats on the CDE diet [35].
They suggested that ChV exerts its chemopreventive effect by replacing or compensating for endogenous antioxidant enzyme activity and by inhibiting lipid peroxidation. That study also pointed out that free radicals generated during carcinogenesis are scavenged by ChV, reducing oxidative stress and thus the formation of cancer cells. The role of ChV as an anticancer agent can be seen clearly in the present study. On immunohistochemistry, AFP, OV-6, M2-PK and TGF-β were undetectable in the liver tissue of rats supplemented with ChV alone, implying its non-toxic nature. Interestingly, our study showed that treating CDE rats with 300 mg/kg body weight ChV for as little as four weeks was adequate to suppress AFP expression, while the lowest dose (50 mg/kg body weight) suppressed AFP expression only after prolonged supplementation for 12 weeks. The efficacy of ChV was also observed for OV-6 expression, where prolonged supplementation for 12 weeks suppressed its expression, and ChV likewise reduced the expression of M2-PK, as it did AFP and TGF-β, in liver cancer-induced rats. The actual mechanism of ChV as an antioxidant and anticancer agent has yet to be elucidated. This study documented the reduction of oval cell formation by ChV supplementation in liver cancer-induced rats. ChV contains a variety of antioxidants, such as ascorbic acid, tocopherols and reduced glutathione [71], and has potential as an anticancer agent that can restrict and suppress the growth of initiated clonal cell populations into foci, preneoplastic nodules and HCC [72-74].

Conclusions
This study documented the reduction of oval cell formation, and of the expression of the tumour markers M2-PK, OV-6, AFP and TGF-β, in HCC-induced rats supplemented with Chlorella vulgaris (ChV). Based on this study and our previous work [36,38,41], we postulate that the chemopreventive mechanism of ChV, which is rich in antioxidants, is the scavenging of the ROS found at high levels in tumour cells, together with antiproliferative and pro-apoptotic effects, resulting in a reduction of neoplastic nodules, as reflected in the reduced tumour markers M2-PK, OV-6, AFP and TGF-β.

Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Authors' contributions
Study conception, design and supervision of the experimental workflow: YAMY and WZWN. Acquisition, analysis and interpretation of data: SS and SMS. Drafting of the manuscript: KTA. Technical assistance in drafting the manuscript: HAD. All authors read and approved the final manuscript.

Ethics approval
All experiments were conducted following the National Institutes of Health guidelines for the care and use of laboratory animals. The study was approved by the Animal Ethics Committee of the National University of Malaysia (approval number: BIOK/2002/YASMIN/30-SEPTEMBER/082). All animals received adequate humane care.

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no competing interests in the publication of this article and no connection with any financial company.
Liver and Pancreatic Injury in Response to ALK Inhibitors in a Patient with Primary Signet Ring Cell Carcinoma of the Lung: A Case Report We report a patient with stage IV anaplastic lymphoma kinase (ALK)-rearranged non-small cell lung cancer (primary lung signet ring cell adenocarcinoma) who received serial crizotinib, chemotherapy, and lorlatinib over more than 4 years. The patient discontinued crizotinib after approximately 4 months due to crizotinib-associated hepatotoxicity. Twenty-five days later, when transaminases had normalized, crizotinib was resumed. However, the patient's liver enzymes rapidly increased again, and crizotinib was discontinued. After 6 cycles of platinum-based chemotherapy, lorlatinib was initiated. Hepatotoxicity did not recur with lorlatinib, a next-generation ALK inhibitor, but grade 4 hypertriglyceridemia and acute pancreatitis were induced by lorlatinib after 4 months. To our knowledge, this is the first case report of acute pancreatitis with lorlatinib. Additionally, stereotactic body radiation therapy (SBRT) was performed for residual small primary lesions in the lung without stopping lorlatinib. Given the rarity of radiation pneumonitis, especially with the relatively small fields treated by SBRT, we suspect that lorlatinib enhanced the pulmonary toxicity. Physicians should be aware that ALK inhibitors, such as lorlatinib and crizotinib, have potentially lethal side effects. Introduction The majority of cases of signet ring cell adenocarcinoma (SRCA) of the lung originate from the gastrointestinal tract. Primary SRCA of the lung is an extremely rare subtype of lung adenocarcinoma with a poor prognosis [1]. The presence of a signet ring cell (SRC) component is considered to be a prominent clinicopathological characteristic of EML4-anaplastic lymphoma kinase (ALK)-positive non-small cell lung cancer (NSCLC) [2]. ALK rearrangement is a therapeutically targetable oncogenic driver found in 3-7% of patients with NSCLC [3]. Crizotinib is the first tyrosine kinase inhibitor (TKI) targeting ALK, MET, and ROS1 and has shown marked antitumor activity compared to traditional chemotherapy in ALK-positive NSCLC patients. More recently, second-generation ALK TKIs (including ceritinib, alectinib, and brigatinib) and third-generation ALK TKIs (lorlatinib) have increased the therapeutic options. Given the rapid pace of drug discovery and development in this area, reporting the adverse effects of ALK inhibitors is crucial. Here, we report the case of a 36-year-old man diagnosed with metastatic ALK-rearranged NSCLC who received lorlatinib after crizotinib and platinum-doublet chemotherapy. The patient developed toxic hepatitis after approximately 4 months of crizotinib and acute pancreatitis due to hypertriglyceridemia after 4 months of lorlatinib. To our knowledge, this is the first case report of lorlatinib-induced pancreatitis. Case Report A 36-year-old man who was a nonsmoker was admitted to our hospital with a 1-month history of cough and back pain. He had no comorbid diseases. A fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) scan was performed for staging in October 2016 and revealed a mass of approximately 4 cm in the right upper lobe of the lung, accompanied by multiple mediastinal, hilar, bilateral supraclavicular and left retroclavicular lymph node metastases and bone metastasis. Tru-Cut biopsy of the mass in the upper lobe of the right lung revealed SRC carcinoma.
An inversion of the EML4-ALK gene was detected by immunohistochemistry and fluorescence in situ hybridization. Crizotinib, a first-generation ALK inhibitor, was initiated at 250 mg twice daily as the first-line treatment in November 2016. The size and metabolic activity of the primary lesion and lymph nodes had remarkably decreased on follow-up PET after 3 months. However, 4 months after treatment initiation, laboratory data revealed major hepatic cytolysis (ALT 1,719 IU/L [reference range 12-63] and AST 371 IU/L). Other biochemical tests, including ALP and bilirubin, were within normal limits. Crizotinib was discontinued, and ursodeoxycholic acid, N-acetylcysteine, and vitamin E capsules were started by a gastroenterologist. Liver tests progressively improved, and no other cause of acute hepatitis (liver metastases, viral hepatitis, or concomitant medications) was identified in the patient. Liver biopsy findings were compatible with toxic hepatitis. This clinical manifestation was diagnosed as crizotinib-induced liver injury. Twenty-five days later, crizotinib was reintroduced at the same dose together with methylprednisolone (1 mg/kg), but after 1 week of treatment, liver enzymes rapidly increased again to more than 5 times the upper limit of normal. Crizotinib was therefore stopped, and liver tests returned to normal in 7 days. Gemcitabine-cisplatin was initiated in April 2017. The patient achieved a stable response after 6 cycles of chemotherapy. No hepatotoxicity was observed. Then, since it was available through an early access programme, lorlatinib, a third-generation ALK inhibitor, was started in December 2017. The hemogram, routine biochemistry tests, and lipid profile (triglyceride 160 mg/dL, total cholesterol 262 mg/dL) were within normal limits at that time. After 3 months, the patient presented to the emergency department with fever, abdominal pain, and hyperglycemia. He was diagnosed with acute pancreatitis due to hyperlipidemia. Triglycerides were 4,107 mg/dL (250-150), total cholesterol was 512 mg/dL (82-200), amylase was 479 IU/L (25-115), lipase was 3,272 IU/L (73-393), AST was 69 IU/L (10-37), and other liver enzymes were normal. After being admitted to the intensive care unit, his treatment continued for 7 days in the gastroenterology department, and then he was discharged. Atorvastatin (10 mg), fenofibrate (250 mg), and metformin (2 g) daily were started. During lorlatinib treatment, mild side effects, such as weight gain, peripheral oedema, and carpal tunnel syndrome, were also observed. After the symptoms and blood test abnormalities completely resolved, lorlatinib was re-initiated in March 2018. PET-CT was performed in May 2018 and showed a partial response of the primary lesion and a complete response of the metastatic lesions. Additionally, stereotactic body radiation therapy (SBRT) was performed for primary residual lesions in the lung in May 2018. In August 2018, he developed a mild nonproductive cough. The lesion in the right lung had worsened on CT. There were no abnormalities in blood tests. Radiation pneumonitis was considered because the lesion areas did not contain air bronchograms and did not show very active metabolism on PET-CT. We suspected that lorlatinib enhanced pulmonary toxicity when administered with SBRT. However, during this period, lorlatinib was never interrupted because the patient had only minimal symptoms.
After 3 months, chest CT revealed a significant decrease in the size of the right upper lung mass with no evidence of disease progression. There were improvements in the previously noted interstitial and posttreatment changes throughout the lung fields. Subsequently, there was no progression on chest CT, and PET-CT was performed every 3-4 months. Lorlatinib is still ongoing, with a near complete response for 2 years and no recurrence of pancreatitis or hepatitis on antihyperlipidemic therapy. Discussion We report this case because of its interesting and extremely rare features: primary SRCA, a rare histologic subtype; crizotinib-associated hepatotoxicity that relapsed upon reintroduction of crizotinib despite supportive treatment, yet did not recur with lorlatinib (another TKI); and lorlatinib-related pancreatitis. Additionally, we suspect that lorlatinib enhanced pulmonary toxicity when it was administered with SBRT. Patients with primary lung SRCA have generally responded poorly to traditional chemotherapy. The significance of SRCA was not truly appreciated until the publication of recent study results, which linked SRC to EML4-ALK NSCLC [4]. In one case report, a patient with primary lung SRCA developed resistance to crizotinib treatment in a short time [5]. It remains unclear whether such resistance is specific to primary SRCA of the lung, which is a rare subtype, or whether crizotinib is responsible for generating extensive drug resistance [5]. However, our patient is still receiving lorlatinib and has maintained a near complete response for 2 years. Thus, we think newer TKIs may be superior in cases of primary lung SRCA. Crizotinib usually causes mild elevations in liver function tests, although the exact mechanism is still not well understood. The drug is metabolized in the liver by CYP3A4, and liver injury may be due to the accumulation of toxic metabolites or to immune-related mechanisms. The symptoms of drug-induced hepatitis are generally nonspecific; thus, diagnosis is often delayed [6]. Our patient presented with cytolytic hepatitis 117 days after the initiation of crizotinib. Other potential causes of liver failure, such as viral hepatitis, hepatic metastasis, alcoholic liver disease, or other drug-induced liver injuries, were excluded. Relapse occurred rapidly after the re-initiation of crizotinib despite oral methylprednisolone treatment. However, hepatotoxicity of any grade did not develop with lorlatinib, another ALK TKI. Structural differences between the molecules may explain why there does not seem to be any cross-toxicity. Lorlatinib is a potent, brain-penetrating, third-generation, macrocyclic ALK/ROS1 TKI with broad-spectrum potency against most known resistance mutations that develop during treatment with existing first- and second-generation ALK TKIs. Lorlatinib has a unique safety profile, distinct from other ALK TKIs, and is generally well tolerated, with a low incidence of permanent discontinuation due to adverse reactions (2.0%). Hyperlipidemia is the most common adverse drug reaction associated with lorlatinib and is largely manageable with lipid-lowering therapy. Grade 3/4 hypercholesterolemia and hypertriglyceridemia each occurred at a frequency of 15% [7]. Our patient also received atorvastatin and fenofibrate as lipid-lowering therapy.
SBRT delivers a very high dose of ionizing radiation to a relatively small region encompassing the tumor and spares a significant portion of the remaining lung from high radiation doses. However, predisposing factors, such as contralateral pneumonectomy, immunosuppression, the administration of concurrent chemotherapy, and interstitial lung disease, may increase the risk of radiation pneumonitis, and the higher doses delivered by this technique may compound that risk [8]. ALK inhibitors may increase sensitivity to radiation and the risk of radiation necrosis [9]. Additionally, interstitial lung disease has also been reported in response to ALK inhibitors [10]. Although SBRT was performed on a small area in our patient, radiation pneumonitis developed, and there were no other risk factors for it. Thus, we think that lorlatinib may enhance pulmonary toxicity when administered with SBRT. In conclusion, lorlatinib may be a viable alternative when crizotinib causes hepatitis, and it has an antitumor effect in ALK-positive primary SRCA of the lung. Despite having a favorable toxicity profile, lorlatinib can cause lethal side effects, such as acute pancreatitis. The proactive counseling of patients on how to manage adverse events, as well as preemptive monitoring and treatment, is an integral component of patient care when initiating lorlatinib or any new treatment regimen. Acknowledgement I would like to thank Dr. Nigar Rustamova and Şeyma Bahşi for their help. Statement of Ethics Written informed consent was obtained from the patient for the publication of this case report. Conflict of Interest Statement The author has no conflicts of interest to declare.
Teachers' Perception Regarding the Level of Principals' Ethical Leadership Behaviours in Secondary Schools in Anambra State, Nigeria This study determined the perception of teachers regarding the level of principals' ethical leadership behaviours in state government-owned secondary schools in Anambra State. The study was guided by four research questions. The population consisted of 6,328 teachers in the 257 state government-owned public secondary schools in the State. A sample of 672 teachers was drawn using a multi-stage sampling procedure. Data were collected using the Ethical Leadership Scale (ELS), which was adapted from Yilmaz (2006). The instrument was validated by three experts. An internal consistency reliability index of 0.72 was obtained using Cronbach's alpha method. Data analysis was done using the mean. The findings revealed that teachers' perception of the level of principals' communicative, climatic, decisional and behavioural ethical behaviours in secondary schools in Anambra State is high. The study recommended that secondary school principals should at all times imbibe ethical principles in their leadership behaviours. Communicative ethics consists of behaviours such as the leader accepting his failures, not being selfish, being fair, being constructive in discussions, and being patient, respectful, sincere and modest. Behavioural ethics consists of behaviours like self-awareness, being veracious, honest and courageous, protecting individual rights and being respectful of values (Yılmaz, 2005). Decisional ethics examines behaviours in terms of making morally correct decisions, being able to differentiate what is right and what is wrong, and being ethical in making decisions concerning the management of the organization (Turhan, in Bağrıyanık & Can, 2017). Management of an organization such as the school by a leader with relevant moral values, norms, rules, integrity, and a high sense of responsibility and discipline is important to ensure that teachers and students are inspired towards the attainment of school organizational goals. When principals are perceived to be ethical in the discharge of their duties, issues such as indiscipline, employee burnout, turnover and poor attitude to work are reduced. According to Fleet (1999), unless a principal has the quality of personal honesty, he can never inspire his staff towards effectiveness and efficiency. Likewise, Arslantaş and Dursun (2008) noted that leader behaviours perceived as ethical are considered a source of motivation for the staff, in terms of increasing their commitment, performance, trust and efficiency. Statement of the Problem One of the responsibilities of principals is to create an effective learning community, one that is built and sustained by ethical practices such as honesty, tolerance, modesty, determination, righteousness and flexibility. However, in secondary schools in Anambra State, it appears that most principals are characterised by various forms of unethical behaviours and practices. Some principals in Anambra State seem to arrogate powers to themselves. They in some cases fail to carry the teachers along in decision making and overall school management. Some principals in the state have been accused of mismanagement of financial resources that were meant for school improvement. This series of unethical behaviours from principals is probably the reason most teachers seem poorly involved in school activities. Some of the teachers in the state have been seen selling and marketing their private goods within the school.
There are also cases of lateness to school, cheating, bullying, failure to do assignments, damage to school facilities and untidy dressing habits among the students. These situations may not be unconnected to principals' ethical leadership behaviours in secondary schools in Anambra State. It therefore became necessary to ascertain the perception of teachers regarding the level of principals' ethical leadership behaviours in secondary schools in Anambra State. Purpose of the Study The main purpose of this study was to ascertain the perception of teachers regarding the level of principals' ethical leadership behaviours in secondary schools in Anambra State. Specifically, the study ascertained: (1) the level of principals' communicative ethical behaviour in secondary schools in Anambra State; (2) the level of principals' climatic ethical behaviour in secondary schools in Anambra State; (3) the level of principals' decisional ethical behaviour in secondary schools in Anambra State; and (4) the level of principals' behavioural ethical behaviour in secondary schools in Anambra State. Research Questions The following research questions guided the study: (1) What is the level of principals' communicative ethical behaviour in secondary schools in Anambra State? (2) What is the level of principals' climatic ethical behaviour in secondary schools in Anambra State? (3) What is the level of principals' decisional ethical behaviour in secondary schools in Anambra State? (4) What is the level of principals' behavioural ethical behaviour in secondary schools in Anambra State? Method A descriptive survey research design was adopted for the study. This design, according to Nworgu (2015), aims at collecting data on, and describing in a systematic manner, the characteristics, features or facts about a given population. The study was guided by four research questions. The study was carried out in Anambra State on a population of 6,328 teachers in the six education zones of the state. The sample consisted of 672 teachers drawn using a multi-stage sampling technique. A questionnaire instrument titled Ethical Leadership Scale (ELS) was used to collect data for the study. The instrument was validated by three experts. A reliability coefficient of 0.72 was obtained for the ELS using Cronbach's alpha method. Data collected for the study were analyzed using the mean. Discussion of Findings The findings of this study show that teachers' perception of the level of principals' communicative, climatic, decisional and behavioural ethical behaviour in secondary schools in Anambra State is high. This finding is in line with that of Karaköse (2007), who found in his study that teachers perceive principals to demonstrate these behaviours at a very high level. The finding is also consistent with Sungu and Sağlam (2015), whose results indicated that teachers rated school principals' display of ethical leadership behaviours as high in their schools. Likewise, Gülcan, Kılınç and Çepni (2012) found that, according to teachers, school principals demonstrated ethical leadership behaviours in their schools. The findings of the present study are also in line with those of Bellow (2012) and Yukl, Mahsud, Hassan, and Prussia (2013), who found that principals are perceived to be fair, sincere, trustworthy, open, moral decision makers who care for their staff as well as their students.
This finding is also in agreement with the findings of Ezeugbor (2015), whose study showed that teachers perceived the four sub-scales (communicative ethics, climatic ethics, ethics in decision making and behavioural ethics) of principals' ethical leadership behaviours to be high. The finding of this study, however, does not agree with the findings of a number of other scholars, such as Abg Hut (2005), Zulkafli (2008) and Mihelic, Lipicnik and Tekavcic (2010). These scholars found that the level of ethical behaviour of school leaders remained low. The difference between their findings and those of the present study could possibly be a result of the difference in time frame between the studies. Over time, principals of secondary schools in Anambra State appear to have participated in several developmental programmes, such as seminars and workshops, which may have improved their ethical consciousness in their leadership behaviours towards the teachers, students and other members of the school. Conclusion Based on the findings of the study presented, analyzed and discussed, the study concludes that teachers' perception regarding principals' ethical leadership behaviours in secondary schools in Anambra State is high. It is therefore imperative that secondary school principals should at all times imbibe and display ethical principles in their leadership behaviours in the school in order to gain the confidence of teachers in the school.
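As a side note on the Method above: the internal-consistency figure it reports (a Cronbach's alpha of 0.72) comes from a standard computation that is straightforward to reproduce. The sketch below is illustrative only; the 672×20 item-response matrix is simulated, and the number of ELS items, the 4-point response scale, and the 2.50 "high level" cut-off are assumptions, not details taken from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of scale items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)

# Simulated responses: 672 teachers x 20 items on a 4-point scale.
# (Random data gives alpha near 0; real survey data would be loaded here.)
rng = np.random.default_rng(42)
responses = rng.integers(1, 5, size=(672, 20))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

# "Level is high" decisions typically rest on item means; a common cut-off
# on a 4-point scale is the midpoint 2.50 (an assumption, not stated above).
item_means = responses.mean(axis=0)
levels = np.where(item_means >= 2.50, "high", "low")
print(dict(zip([f"item{i + 1}" for i in range(3)], levels[:3])))
```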
Dynein and Star interact in EGFR signaling and ligand trafficking. Intracellular transport and processing of ligands is critical to the activation of signal transduction pathways that guide development. Star is an essential gene in Drosophila that has been implicated in the trafficking of ligands for epidermal growth factor (EGF) receptor signaling. The role of cytoplasmic motors in the endocytic and secretory pathways is well known, but the specific requirement of motors in EGF receptor transport has not been investigated. We identified Star in a screen designed to recover second-site modifiers of the dominant rough eye phenotype of the Glued mutation Gl(1). The Glued (Gl) locus encodes the p150 subunit of the dynactin complex, an activator of cytoplasmic dynein-driven motility. We show that alleles of Gl and dynein genetically interact with both Star and EGFR alleles. Similarly to mutations in Star, the Gl(1) mutation is capable of modifying the phenotypes of the EGFR mutation Ellipse. These genetic interactions suggest a model in which Star, dynactin and dynein cooperate in the trafficking of EGF ligands. In support of this model, overexpression of the cleaved, active Spitz ligand can partially bypass defective trafficking and suppress the genetic interactions. Our direct observations of live S2 cells show that export of Spitz-GFP from the endoplasmic reticulum, as well as the trafficking of Spitz-GFP vesicles, depends on both Star and dynein. Introduction Intracellular transport is an essential function of the microtubule motors, dynein and kinesin. In order to carry out this function, the cytoplasmic motors must be attached to, and released from, a variety of cellular cargoes at the right time and place. How cytoplasmic motors are linked to specific cargoes and how these linkages are regulated is still unclear. Dynactin (dynein activator protein) is one complex thought to be involved in linking membrane vesicles to dynein (Karki and Holzbaur, 1999;Muresan et al., 2001;Schroer, 2004;Waterman-Storer et al., 1997). However, whether dynactin is required for the binding of cargo or instead acts in the regulation of binding and/or motor activity is still controversial (Haghnia et al., 2007;Kim et al., 2007;Berezuk and Schroer, 2007). The dynactin, or Glued, complex was originally identified as a stimulator of dynein-mediated vesicle motility in vitro (Gill et al., 1991;Schroer et al., 1996;Schroer and Sheetz, 1991). It consists of at least 10 different polypeptides ranging in size from a 24 kDa subunit to the p150/160 polypeptide, also known as the p150/160 Glued polypeptide (Gill et al., 1991;Holleran et al., 1996;Lees-Miller et al., 1992;Paschal et al., 1993;Schroer and Sheetz, 1991). p150/160 Glued binds directly to the dynein intermediate chain (Vaughan and Vallee, 1995) and is proposed to facilitate the association of the dynein motor with its cellular cargoes, which include Golgi vesicles, endosomal vesicles, synaptic vesicles and kinetochores (Burkhardt, 1998;Gill et al., 1991;Holleran et al., 1998;Holzbaur et al., 1991;King and Schroer, 2000). Other components of the dynactin complex have been shown to associate with membranous vesicles through an interaction with the spectrin membrane skeleton (Holleran et al., 2001;Holleran et al., 1996;Muresan et al., 1996;Muresan et al., 2001), and with kinetochores during mitosis via the cytoplasmic linker protein CLIP-170 (Dujardin et al., 1998;Vaughan and Vallee, 1995).
In order to further characterize pathways that require dynein function, we conducted a screen for P-element insertional mutations that dominantly modify the eye phenotype of the Glued allele Gl 1 . In Drosophila, the Gl 1 mutation causes a dominant rough eye phenotype, with ommatidial disarrangements and defects in optic lobe connections (Plough and Ives, 1935). Gl 1 encodes a truncated product because of the insertion of a B104 retrotransposon in its coding sequence (Swaroop et al., 1985). We previously showed that the truncated Gl 1 product no longer assembles into the dynactin complex, but does functionally interact with certain dynein heavy chain (Dhc) mutants (McGrail et al., 1995). Mutations in Dhc also modify (either suppress or enhance) the dominant rough eye phenotype of Gl 1 , and a previously identified suppressor of the Gl 1 phenotype, Su(Gl)102, is an allele of Dhc (McGrail et al., 1995). Here, we report that mutations in Star act as dominant modifiers of the Gl 1 rough eye. Star is an essential gene involved in the proper processing of the EGF receptor ligand Spitz (Bang and Kintner, 2000;Golembo et al., 1996;Guichard et al., 1999). Spitz activation of EGF receptor signaling is critical throughout development, and its requirement during eye morphogenesis is well established (Klambt, 2002;Shilo, 2005). The dominant Star mutation S 1 results in a rough eye phenotype similar to that of Gl 1 (Kolodkin et al., 1994;Ruden et al., 1999). Star encodes a type II single transmembrane domain protein (Kolodkin et al., 1994) that concentrates at the nuclear periphery and is contiguous with the endoplasmic reticulum (ER) (Pickup and Banerjee, 1999). Star facilitates trafficking of inactive, membrane-bound Spitz precursor from the ER to an endosomal or Golgi compartment where it is cleaved by the protease Rhomboid (Tsruya et al., 2002;Urban et al., 2002). Cleavage is required to transform Spitz into active ligand. Thus, understanding the regulation of intracellular Spitz transport is critical to understanding the activation of EGF signaling.
Our observations provide evidence that the Star-dependent export of Spitz ligand from the ER requires cytoplasmic dynein. Results A lethal P-element insertion in Star enhances the Gl 1 eye phenotype To identify potential genes that regulate dynein-based functions in Drosophila, we screened for dominant modifiers of the rough eye phenotype exhibited by the dynactin mutation Gl 1 . A collection of 300 lethal P-element insertion lines spanning all four chromosomes was tested. One of the P-element insertion lines, P2036, enhanced the Gl 1 rough eye phenotype (Fig. 1). Although the P2036 line had no obvious eye phenotype by itself, in combination with Gl 1 it produced a significant reduction in eye size and disrupted the hexagonal packing of ommatidia. This enhancement of the Gl 1 phenotype was indeed linked to the P-element insertion, since it was reverted by excision of the P-element. The gene disrupted by P2036 was identified as Star, which produces a protein that regulates the intracellular trafficking of the EGF receptor ligand Spitz in several developmental pathways (Kolodkin et al., 1994;Lee et al., 2001;Tsruya et al., 2002). Southern blot and sequence analysis showed that only a single P insertion was present in the parental stock and that the insertion was in the 5′ untranslated region of the Star gene (data not shown). To confirm that the disruption of Star was responsible for the interaction, we conducted genetic complementation tests with four additional Star alleles (S 1 , S P2333 , S IIN and S 05671 ). The P-insertion line is lethal in combination with all the Star alleles tested, indicating that it is allelic to the Star locus. The BDGP database confirms that the insertion in line P2036 is an allele of Star. We will refer to this new allele as S P2036 . Other alleles of Star, including S P2333 , S IIN and S 05671 , also enhance the Gl 1 eye phenotype (not shown). Interaction of Star with Gl 1 is dosage sensitive A deficiency that removes the Star locus, Df(2L)S3, was tested for its ability to modify the Gl 1 dominant eye phenotype. Gl 1 flies showed a mild perturbation of the ommatidia (Fig. 2A), whereas Df(2L)S3 flies were near wild type in appearance (Fig. 2B). In flies carrying Gl 1 in combination with Df(2L)S3, the eye was small, very narrow, and rough, with fewer ommatidia compared with the deficiency alone (Fig. 2C). To determine whether the interaction was specific to the Gl 1 dominant allele, genetic crosses were set up using flies that carried a deficiency for the Gl locus (Gl +R2 ) or a recessive lethal mutation in the Gl locus (Gl 1-3 ). Unlike Gl 1 , these loss-of-function alleles of Gl did not exhibit dominant eye phenotypes. We found that they showed little or no interaction with S 1 (e.g. Fig. 2D) or the other three Star alleles (data not shown). Moreover, the enhancement of the Gl 1 rough eye phenotype by S 1 (Fig. 2E) was reverted by the introduction of a full-length Star transgene, hsStar-HA (Fig. 2F). We conclude that the interaction of Star with the Gl locus is specific to the Gl 1 allele, and that reduction of Star gene dosage by 50% strongly enhances the Gl 1 eye phenotype. Mutations in Dhc modify Star The interactions described above, between Star and Gl 1 , resembled previously observed genetic interactions between Dhc and Gl 1 (McGrail et al., 1995). To address whether this similarity reflects a common function, we asked whether Star also interacts with Dhc.
The recessive allele, Dhc 1-1 (Gepner et al., 1996), enhances the S 1 rough eye phenotype (Fig. 3A,B). In S 1 /+; Dhc 1-1 /+ flies, the hexagonal packing of ommatidia was more disrupted than in the S 1 background alone, and the size of the eye was reduced. This interaction is reverted back to the S 1 eye phenotype by the introduction of a wild-type Dhc transgene (data not shown). In addition, triple heterozygotes containing the S 1 , Gl 1 and Dhc 1-1 alleles (S 1 /+; Dhc 1-1 +/+ Gl 1 ) exhibited a more severe eye phenotype than the S 1 /+; Gl 1 /+ double heterozygotes (Fig. 3C,D). Other Dhc alleles tested did not significantly modify the S 1 eye phenotype, but did interact with Star to produce a wing vein phenotype. Both Dhc γ4163A and Dhc 6-10 , in transheterozygous combinations with the Star allele S 05671 , produce a wing phenotype in which the L5 vein was incomplete and did not reach the wing margin (supplementary material Fig. S1). This interaction appeared to be specific to the S 05671 allele, because S 1 in combination with Dhc alleles did not show any wing vein phenotype (data not shown). Although the Gl 1 eye phenotype was enhanced by S 05671 , a wing vein phenotype was not produced (data not shown). Star is epistatic to Dhc in its interaction with Gl 1 Having found that both Star and Dhc interact with Gl 1 , we assessed the epistasis between the three gene products by analyzing eye phenotypes in different combinations of mutations. We have previously reported that certain Dhc mutations enhance the Gl 1 rough eye, whereas other Dhc alleles suppress it (McGrail et al., 1995). More recently, we have established that another mutation originally isolated as a suppressor of the Gl 1 rough eye phenotype, Su(Gl)77 (Harte and Kankel, 1982), is a Dhc allele (see Materials and Methods). Flies expressing both Gl 1 and a Dhc mutation that suppresses Gl 1 were crossed to S 1 flies, and the eye phenotypes of the progeny were examined (Fig. 4). As expected, flies carrying either of the Dhc alleles (Su(Gl)77 or Dhc 8-1 ) that suppress the Gl 1 rough eye had wild-type eye morphology (Fig. 4C). With the addition of the S 1 mutation, the Gl 1 eye is enhanced, despite the presence of a suppressor (Fig. 4D). Even in the presence of both Dhc mutations that suppress Gl 1 , the rough eye phenotype was still enhanced by S 1 (compare Fig. 4E,F). These results suggest that Star function is required for the suppression of Gl 1 eye phenotype by the Dhc mutations, and provide additional evidence that Star, dynein and dynactin act in a common pathway. Biochemical assays of Star-dynein interactions The association between the dynein complex and Star was first examined with a partitioning assay. Flies expressing a functional hemagglutinin (HA)-tagged Star transgene, hsStar-HA (Pickup and Banerjee, 1999), were used to analyze the relative amounts of Dhc and Star present in fractions enriched for vesicles. A crude preparation of vesicles was clarified by high-speed centrifugation to yield vesicle membranes in the pellet and soluble proteins in the supernatant. As expected, the transmembrane protein Star-HA partitioned into the vesicle pellet fraction (Fig. 5A). Although much of the dynein was soluble, some was also present in the membrane pellet, consistent with an association with vesicles. Dynein and Star also exhibited overlapping, but not identical, sedimentation profiles on Nycodenz density gradients. This result could indicate that a subpopulation of Star-containing vesicles also associates with dynein (Fig. 
5B). Dynein is known to bind microtubules with high affinity in the absence of ATP and low affinity in the presence of ATP. This property has been used previously to co-sediment rhodopsin-bearing vesicles with microtubules, in the presence of dynein and in an ATP-sensitive manner (Tai et al., 1999). Similarly, if dynein and Star are present on the same vesicles, then Star should also show an ATP-sensitive association with microtubules. We polymerized microtubules in vesicle preparations derived from hsStar-HA flies and looked for Star in the microtubule pellet fraction. Increased amounts of both Dhc and Star were found to pellet with microtubules in the absence of ATP, suggesting that the association of Star-containing vesicles with microtubules is mediated by dynein (Fig. 5C). We also conducted chemical crosslinking experiments to investigate the interaction between Star and dynein. S2 cells transfected with Star-HA were used to prepare membranes by flotation on step gradients (Haghnia et al., 2007). Fractions containing both Star-HA and dynein were treated with EDC [1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride], a zero-length chemical crosslinker. Immunoblot analysis showed that the reaction products include an increasing amount of a high molecular mass complex that was recognized by antibodies to both Dhc and Star-HA (Fig. 5D). A corresponding decrease in the amounts of noncrosslinked Dhc and Star-HA is observed. Dhc and Glued interact with other components of the EGFR signaling pathway Ellipse 1 (Elp 1 ) is a hyperactivating mutation in the EGF receptor. Elp 1 flies have small eyes with a reduced number of ommatidia, as shown in Fig. 6A (Baker and Rubin, 1989). Alleles of Dhc (Dhc 8-1 , Dhc 6-10 , Dhc 6-6 , Dhc 4-19 , Dhc 1-1 ), as well as the Gl 1 allele, enhance the Elp 1 eye phenotype (Fig. 6B-D, and data not shown). In addition to the dominant eye phenotypes, Elp 1 produced wing vein phenotypes (Fig. 7B,E) (Baker and Rubin, 1989;Lindsley and Zimm, 1992). Mutations in Star suppress the wing phenotypes produced by Elp 1 (Sturtevant et al., 1993), and also suppress wing phenotypes produced by mutations in Delta (Dl), a Notch receptor ligand (Heberlein et al., 1993;Sturtevant and Bier, 1995). To further test the contribution of dynein to these pathways, we asked whether Gl and Dhc alleles also modify wing phenotypes in Elp 1 and Dl mutants. We found that Gl 1 suppressed the wing vein phenotype exhibited by Elp 1 (Fig. 7F) and by Dl alleles (Fig. 7G,H). Gl 1 also interacts with Rhomboid (rho), which operates in concert with Star and the EGF receptor during wing development (Sturtevant et al., 1993). The overexpression of rho produced an extra wing vein phenotype that was suppressed by Gl 1 (Fig. 7I,J). These observations indicate that dynein function has a role in EGF receptor signaling during both wing and eye development. Overexpression of secreted Spitz rescues the rough eye phenotype It has been proposed that Star acts to chaperone Spitz precursor from the ER to the Golgi, where cleavage by Rho produces the active, secreted form of Spitz ligand (Tsruya et al., 2002). The transgenic expression of a truncated form of Spitz mimics the secreted ligand (sSpitz), and activates the Drosophila EGFR pathway in embryos mutant for Star and/or rho (Schweitzer et al., 1995). We reasoned that Star mutations might enhance the Gl 1 rough eye phenotype because of the role of dynein in transporting Spitz.
To test this hypothesis, we asked whether overexpression of UAS-sSpitz could rescue the Gl 1 rough eye phenotype. Instead of the original Gl 1 line, we used an inducible Gl construct, UAS-ΔGl (ΔGl), to express the truncated product (Mische et al., 2007). Expression of ΔGl driven by actin-GAL4 produced small eyes with disruptions in the hexagonal packing of the ommatidia (Fig. 8A). This rough eye phenotype was indeed suppressed by expression of UAS-sSpitz (Fig. 8B). Our result is in agreement with other data showing that Spitz requires transport from the ER to another compartment before cleavage and activation can occur (Tsruya et al., 2002;Tsruya et al., 2007), and suggests that this trafficking is defective in the Gl 1 mutant. Spitz-GFP is actively transported by dynein in Drosophila S2 cells To directly visualize the transport of Spitz, we transfected S2 cells with Spitz-GFP. Spitz-GFP accumulated in the latticework of the endoplasmic reticulum (ER) that encompasses the nucleus and extends into the cytoplasm (Fig. 9A) (Tsruya et al., 2002). Previous studies have shown that in the presence of Star, Spitz-GFP exits the ER in vesicles that are trafficked to the Golgi and/or endosomal compartments (Tsruya et al., 2002;Tsruya et al., 2007). We used live imaging techniques to examine the transport of Spitz-GFP following the coexpression of Star, and quantified the changes in transport following the reduction of dynein levels by RNAi. In cells coexpressing both Spitz-GFP and Star, the distribution of Spitz-GFP was not limited to the ER lattice, but accumulated in numerous small vesicles that transiently moved through the cytoplasm in a linear fashion (Fig. 9B; Table 1; supplementary material Movie 1). This movement was characteristic of microtubule-based transport of cytoplasmic organelles, and was blocked by the microtubule inhibitor colcemid (data not shown). In fixed immunocytological preparations, dynein was present throughout the cytoplasm and could be observed to colocalize on a subpopulation of the Spitz-GFP vesicles (Fig. 9D). Next, we asked whether dynein is involved in the transport of Spitz-GFP from the ER. We used two sets of dsRNA to effectively deplete dynein heavy chain to levels undetectable by western blot (data not shown). Following the elimination of dynein activity, the number of vesicles per cell was reduced by 60% compared with control cells (Fig. 9C,E). In addition, the motility of Spitz-GFP vesicles was significantly inhibited (Fig. 9C,F; Table 1). The velocity of motile vesicles is reduced, and at least half of the RNAi-treated cells show no transport of Spitz-GFP vesicles. The microtubule organization of the interphase cells was undisturbed after Dhc RNAi treatment (data not shown). Our results show that dynein acts together with Star to transport the Spitz-GFP ligand in S2 cells. Discussion Activation of the Drosophila EGF receptor is primarily regulated through the controlled intracellular trafficking and proteolytic activation of its ligand, Spitz (Klambt, 2002;Shilo, 2005). Spitz is critical for mediating EGF receptor signaling during many aspects of development, including eye development. Spitz ligand is produced as an inactive transmembrane precursor and requires Star for its transport from the ER to the site of proteolytic cleavage in the Golgi and/or endosomal compartment (Tsruya et al., 2002). Proteolytic cleavage by Rho, an intramembrane serine protease, activates the Spitz ligand (Tsruya et al., 2002;Tsruya et al., 2007;Urban et al., 2001).
Our results extend these observations to suggest that Star-mediated trafficking of the EGF ligands and the consequent activation of EGF signaling depend on dynein function. We provide evidence that components of the dynein-dynactin pathway interact with Star to regulate transport and signaling by Spitz. First, mutations in Star dominantly interact with the Gl 1 mutation. Reduction of Star gene dosage by 50% severely enhances the Gl 1 eye phenotype. This interaction between Gl 1 and the Star alleles is specific to the loss of Star function, since the altered eye phenotype is reverted by the presence of a Star transgene. The rescue suggests that the wild-type proteins interact in vivo, and that the phenotype does not reflect neomorphic protein interactions. Second, Star interacts with mutations in dynein itself. The observed interactions for both Star and Dhc are allele-specific, suggesting that specific domains within the Star and Dhc products mediate the interactions. Third, the suppression of the Gl 1 eye phenotype by certain Dhc alleles (e.g. Su(Gl)77) requires Star function. The suppression is reversed in the presence of a Star mutation, emphasizing the common pathway in which these gene products function. Finally, genetic interactions between the Dhc and Star loci are observed in both the eye and the wing, supporting a bona fide interaction, and suggesting that a common pathway operates within different tissues. What do the functional interactions between components of the dynein motor and EGFR signaling pathway mean? One intriguing possibility is that the dynein-dynactin complex is bound through Star to ER vesicles that contain EGFR ligands. Previous work has suggested an essential role for Star as an adapter in the trafficking of ER vesicles (Tsruya et al., 2002). In Drosophila embryos, Star protein is enriched in the nuclear membrane and contiguous ER (Pickup and Banerjee, 1999). In the present study, we show that vesicle membrane preparations enriched for Star also contain dynein, and associate with microtubules in an ATP-sensitive fashion. Our chemical crosslinking experiments provide additional evidence for the physical association of Star with the dynein complex, and support a model in which dynein mediates the trafficking and processing of the Spitz ligand through its association with Star. Our data are consistent with a direct interaction, but do not exclude the possibility that other proteins mediate the interaction between Star and dynein. Our analysis of Spitz transport in living S2 cells extends previous studies that show Star, Spitz and Rho are each transported from the ER to Golgi following heterologous expression in COS cells or S2 cells (Tsruya et al., 2002). Our results confirm that the export of Spitz from the ER, and its accumulation in Golgi vesicles, require Star. We further show that the number of Spitz-GFP-labeled vesicles formed, as well as their transport along microtubules, is dynein dependent. This result is consistent with previous studies suggesting that dynein and dynactin associate with ER- and Golgi-derived vesicles, and mediate their transport along microtubules (Burkhardt et al., 1997;Presley et al., 1997;Watson et al., 2005).
In mammalian cells, exit of newly synthesized cargo from the ER is driven by the sequential assembly of vesicles (Aridor et al., 2001;Scales et al., 1997); cargo is initially concentrated into COPII-coated vesicles and then subsequently moved to the Golgi in transport vesicles in which the COPII coatomer is replaced by COPI. Recent studies have provided evidence that the association of dynactin with COPII vesicles is coupled to ER exit (Watson et al., 2005). Further observations suggest that Cdc42 temporally regulates dynein association with COPI vesicles and the retrograde transport of vesicles from Golgi to ER (Chen et al., 2005). The diversity of vesicular cargo raises the question of how the binding of dynein, as well as other motors, is targeted to distinct vesicle populations and how transport is regulated. Dynein is known to participate in secretory vesicle trafficking, but whether there are specific transmembrane proteins that mediate the trafficking of specific receptor ligands is not understood. Although a direct interaction of dynactin with the Sec23p component of the COPII complex has been reported, coatomer-independent recruitment of dynein to vesicles has also been proposed (Matanis et al., 2002). Our observations are consistent with the possibility that Star acts in the attachment of the dynein-dynactin motor complex to ensure the transport of Spitz-GFP vesicles. However, Star may alternatively interact with dynein indirectly, through other vesicle-associated proteins that mediate its connection to the dynein-dynactin complex. In either case, transport of Spitz from the ER by dynein would permit its proteolytic cleavage and activation in another cytological compartment. Dynein is also reported to facilitate vesicle transport between endosomal compartments (Lebrand et al., 2002). Recycling of Star protein appears to be important for the maintenance of signaling and may also involve dynein-based transport. Recent work has suggested that Star itself is cleaved by Rho (Tsruya et al., 2007). Cleaved Star fails to recycle to the ER and thus the trafficking of additional Spitz ligand is restricted. The cleavage of Star may modulate the amount of active ligand and the level of signaling. The interactions described, both genetic and biochemical, indicate that Star, Rho, dynein and dynactin function cooperatively to achieve the proper regulation of Spitz trafficking and signaling. Star might also serve as a common link in the trafficking pathways of multiple ligands, as previously suggested by Lee and co-workers. Two other EGFR ligands found in Drosophila, Keren and Gurken, are also activated by proteolytic release and require Star for trafficking from the ER, albeit to different extents (Ghiglione et al., 2002;Urban et al., 2002). The binding of Star to ligands within the ER lumen may promote motor-dependent transport from the ER to the Golgi complex by revealing an ER export signal, or masking an ER retention signal. Notch, EGFR and sevenless mutants interact with Star mutants (Heberlein et al., 1993;Kolodkin et al., 1994), as well as with Dhc and Gl mutants (our unpublished data). Yet, beyond these signaling pathways, mutations in Star do not appear to affect general vesicle transport. We propose that the Gl 1 and Dhc mutations enhance the Star phenotype by disrupting Spitz transport, thereby inhibiting the cleavage and secretion of active Spitz ligand.
It is known that the Gl 1 dominant mutation produces a truncated product that competes with wild-type protein for binding to the dynein motor complex (McGrail et al., 1995;Waterman-Storer et al., 1995). We speculate that in the double heterozygous mutant backgrounds, the reduced level of transport activity is unable to deliver sufficient Spitz ligand for processing, and thereby compromises signaling at a critical period during development. In a test of this hypothesis, we found that transgenic expression of the active form of Spitz (sSpitz) can partially bypass the requirement for dynein-based transport of inactive Spitz. Our results demonstrate that dynein specifically contributes to the trafficking of the Spitz ligand from the ER, and to its activation by proteolytic cleavage. It will be important to discover exactly how dynein associates with the putative adapter, Star, and whether this association is regulated in a developmental context to control EGFR signaling. Future experiments will need to elucidate whether diverse adapters specify the attachment of specific transport machineries to vesicles containing distinct ligands. Materials and Methods Fly stocks Dhc and Gl mutations have been described previously (Gepner et al., 1996;McGrail et al., 1995;Silvanovich et al., 2003). The mutations Gl 1 and Su(Gl)77 are described by Harte and Kankel (Harte and Kankel, 1982). We established that Su(Gl)77 is a hypomorphic allele of Dhc; females expressing the Su(Gl)77 mutation in combination with a deficiency that removes Dhc are sterile, but the sterile phenotype is completely rescued by introduction of a Dhc transgene. A recombinant Su(Gl)77 Gl 1 Sb chromosome containing both Su(Gl)77 and Gl 1 was generated by meiotic recombination. S 05671 was obtained from the Berkeley Drosophila Genome Project. Flies that carry a recessive lethal mutation in the Gl locus (Gl 1-3 ) or deficiencies that remove the Gl locus (Df(3L) fz-GF3b and Df(3L) Gl +R2 ) were gifts from Douglas Kankel (Yale University, New Haven, CT). UASp-ΔGl was described previously (Mische et al., 2007). hsStar-HA and hsrho30A were described (Pickup and Banerjee, 1999;Sturtevant et al., 1993). UAST-sSpitz was a gift from Ben-Zion Shilo (Weizmann Institute of Science, Rehovot, Israel) (Tsruya et al., 2002). All other lines were obtained from the Bloomington Stock Center. (Table 1 legend: average calculated velocities and run lengths of Spitz-GFP vesicles were directly compared in control and Dhc siRNA-treated S2 cells; values represent mean ± s.d.) We conducted an F1 screen of a collection of lethal P-element insertion lines obtained from the Bloomington Stock Center. P/Balancer males were crossed to virgin Gl 1 Sb/Balancer females, and progeny carrying both the P insertion and Gl 1 Sb were examined for modification of the Gl 1 rough eye phenotype. In the case of lethal interactions, this class was absent. Eye phenotypes were evaluated by light and scanning electron microscopy (SEM). DNA analysis For plasmid rescue, DNA isolated from flies heterozygous for the P element was digested with XbaI and SpeI. The DNA was ligated and transformed into E. coli XL1-Blue cells. Plasmids that contained DNA flanking the P-element were isolated and sequenced using a primer specific to the P element. Scanning electron microscopy Fly heads from three-day-old female flies were dissected and immediately dehydrated in an ethanol series as described previously (Carthew and Rubin, 1990), then prepared for SEM by critical point drying using liquid CO2.
The dried heads were coated with gold-palladium in an Ernst Fullam Sputter Coater. The SEM images were collected using a Hitachi SH50 scanning electron microscope and recorded onto film. Biochemical methods Flies expressing the HA-tagged Star transgene (hsStar-HA) were heat shocked at 37°C for 2 hours. Samples highly enriched in vesicles were prepared from head tissues according to a method based on a published procedure (Nakagawa et al., 2000). Briefly, fly heads were homogenized in PMEG (100 mM PIPES pH 6.9, 5 mM magnesium acetate, 5 mM EGTA, 0.1 mM EDTA, 0.5 mM DTT, 0.9 M glycerol) plus protease inhibitors, and centrifuged sequentially at 13,000 g and 100,000 g. The low-speed supernatant contains vesicles and membranes that are further enriched in the high-speed pellet. Vesicles were fractionated on a 20-60% Nycodenz step gradient, run for 22 hours at 40,000 rpm in a SW50.1 rotor at 4°C. Microtubule co-sedimentation assays were carried out as previously described (Hays et al., 1994). In brief, Star-HA vesicles from above were resuspended in wild-type embryo extracts. Microtubules were polymerized from endogenous tubulin and pelleted with associated MAPs. Parallel experiments either depleted or supplemented MgATP, and either included or omitted paclitaxel (taxol). Pellets were analyzed by western blotting. Chemical crosslinking experiments used membranes prepared from S2 cells by flotation on sucrose step gradients (Haghnia et al., 2007). Briefly, cells transfected with Star-HA as described below were homogenized in PMEG buffer plus protease inhibitors, and centrifuged briefly at 1000 g to remove debris. The low-speed supernatant (1.5 mg total protein) was brought to 40% sucrose, loaded into a 13×51 mm tube, and overlaid sequentially with 35% sucrose and 8% sucrose. Following centrifugation at 40,000 rpm for 90 minutes in a SW50.1 rotor, the gradient was collected into 250 μl fractions and analyzed by immunoblotting. 20 μl from a fraction near the top of the gradient, enriched for both Star-HA and dynein, was used in a reaction with the chemical crosslinking agent 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide HCl (EDC) (Pierce, Rockford, IL). Aliquots were withdrawn at different time points, quenched in gel loading buffer and analyzed by immunoblotting. S2 cell culture, RNA interference and colcemid treatment Schneider S2 cells were cultured in M3 insect medium (Sigma-Aldrich) with 10% Insect Medium Supplement (Sigma-Aldrich) plus 2% FBS and penicillin/streptomycin. Transfections were performed as described (Han, 1996). pUAST-Spitz-GFP and pUAST-Star-HA plasmids were generously provided by Ben-Zion Shilo (Weizmann Institute of Science, Rehovot, Israel) (Tsruya et al., 2002). Expression of the pUAST constructs was driven by cotransfection with an actin-GAL4 plasmid. To examine the effect of Star expression on Spitz transport from the ER, Spitz-GFP and actin-GAL4 were transfected into S2 cells for 24 hours, followed by Star-HA for 8 hours. To disrupt microtubules, S2 cells plated on concanavalin-A-treated coverslips were treated with 2 μg/ml demecolcine (colcemid) (Sigma-Aldrich) for 1 hour at room temperature. Live imaging of S2 cells and analysis Images were acquired using a Nikon Eclipse TE200 inverted microscope equipped with the PerkinElmer Confocal Imaging System (PerkinElmer, Waltham, MA) and Hamamatsu's Orca-ER digital camera. Spitz-GFP vesicle movements were captured at 1-second intervals using 2×2 binning with a 100× planapo (NA 1.4) objective.
The vesicle number and rate of transport were measured for control (n=9 cells), Dhc RNAi (n=24 cells) and colcemid treatment (n=9 cells). The number of vesicles in each sample was scored in the first frame of each time-lapse sequence analyzed. Since the movies were collected from a single focal plane, our analysis underestimates total vesicle numbers. Owing to the significant decrease in the number of vesicles present in the Dhc RNAi-treated cells, more of these cells were examined so that the total numbers were comparable to control and colcemid-treated cells. Moving vesicles that displayed linear movement for at least three consecutive frames were selected for analysis. Velocity and run length of Spitz-GFP vesicles were manually tracked with the 'Track Points' function of Metamorph (Molecular Devices, Sunnyvale, CA) image analysis software as described previously (Mische et al., 2007). Stationary vesicles of similar spherical shape and Spitz-GFP intensity were identified based on a qualitative comparison to the moving vesicle population. The average velocity and total run length for each motile Spitz-GFP vesicle were calculated using Microsoft Excel, as was the standard deviation (s.d.) of velocity and run length for all vesicles measured in control and dsRNA-treated cells. The velocity and run length were directly compared with those of the control cells. All statistical significance calculations were determined using Student's t-test on unpaired data. Significance was established if P<0.05.
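To make the statistical comparison described above concrete, here is a minimal sketch that computes mean ± s.d. of vesicle velocities and an unpaired Student's t-test at P<0.05. The per-vesicle values below are hypothetical placeholders; the real inputs would be the Metamorph 'Track Points' measurements, and the same pattern applies to run lengths.

```python
import numpy as np
from scipy import stats

# Hypothetical per-vesicle velocities (um/s); placeholders, not measured data.
control = np.array([1.10, 0.95, 1.30, 1.05, 0.88, 1.22, 1.15, 0.99])
dhc_rnai = np.array([0.55, 0.62, 0.48, 0.70, 0.51, 0.66])

# Report mean and sample standard deviation, as in the Table 1 legend.
for name, data in (("control", control), ("Dhc RNAi", dhc_rnai)):
    print(f"{name}: {data.mean():.2f} +/- {data.std(ddof=1):.2f} um/s "
          f"(n={len(data)})")

# Unpaired two-sample Student's t-test; significance threshold P < 0.05.
t, p = stats.ttest_ind(control, dhc_rnai, equal_var=True)
print(f"t = {t:.2f}, P = {p:.4f}, significant: {p < 0.05}")
```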
Sleep-disordered breathing: statistical characteristics of joint recurrent indicators in EEG activity

The purpose of this study was to identify promising candidates for the role of biomarkers associated with different degrees of the apnea-hypopnea index in patients, using polysomnographic recordings. Material: The study used polysomnography data recorded in 30 patients with nocturnal respiratory dysfunction in the form of obstructive sleep apnea syndrome. Methods: Analysis of polysomnographic recordings was carried out using a joint recurrent indicator, for which further statistical characteristics were assessed: average value, geometric mean, cubic mean, median, dispersion, standard deviation, coefficient of variation, asymmetry indicator, and kurtosis indicator. Results: For all polysomnographic recordings, joint recurrence diagrams were calculated to identify time points corresponding to specific sleep events in patients with high and low apnea-hypopnea index. Based on the statistical characteristics of such events, possible candidates for the role of biomarkers for diagnosing apnea syndrome are introduced. Conclusion: The article presents clustering parameters and the efficiency of dividing the statistical characteristics into clusters for two groups of patients, with high and low apnea-hypopnea index. Characteristics have been identified that are promising candidates for the role of biomarkers associated with the apnea-hypopnea index value.

Introduction
The search for functional biomarkers for automated detection of early stages of various diseases is one of the complex tasks of interdisciplinary data analysis. However, there are a large number of problems in this area. First of all, we must take into account that physiological signals naturally combine entire classes of signals from different systems of the human body. There are many methods aimed at attempting to separate such signals into relatively pure components [1]. However, these methods are often complex and do not always work correctly. In turn, the complexity of the mentioned methods can lead to false interpretation of the results, as well as to erroneous recognition of biomarkers of pathological processes by medical personnel [2].

In addition, methods originally developed for the analysis of stationary processes are often used to process physiological signals. The dynamics of living systems differs from model stochastic and chaotic systems in its greater complexity, accompanied by changes in system control parameters and, apparently, by continuous evolution and bifurcations of the internal structure of living systems as such. Finally, when processing signals from living systems, it is necessary to remember the individuality of the subjects. In particular, even the characteristics of invasive recordings of brain activity in genetically homogeneous strains of laboratory rats show significant differences, falling into many subtypes [3]. Thus, automatic work with the functional activity of the brain of random patients, whose medical history, besides the diagnosis, may be burdened with a significant amount of comorbid pathology, is significantly complicated by the presence of "invisible" system parameters.
At the moment, most systems for automatic detection of biomarkers process, in fact, images of rather slow processes: for example, the well-known algorithms used in medicine for detecting precursors of the development of benign and malignant neoplasms and skin lesions [4], medical decision support systems for radiography and magnetic resonance tomography [5], etc. At the same time, the issues of monitoring the main characteristics of the functioning of the human cardiovascular system using the parameters of the basic rhythm of R-wave peaks on cardiograms have been relatively successfully resolved. The latter are well captured by signal shape evaluation methods [6]. For example, automatic diagnosis of arrhythmias is now confidently carried out [7], and the risk of developing episodes of atrial and ventricular fibrillation is identified [8]. In this regard, the interest of researchers in the search for simple, informative biomarkers that are resistant to noise and do not require complex medical and computer equipment does not decrease [1,9].

In this work, one of the simplest and at the same time most effective methods of nonlinear dynamics was used: the construction of recurrence diagrams and their further numerical analysis. The first attempts to use recurrence methods of quantitative analysis to solve applied problems appeared in the 1980s in Eckmann's publications [10]. Since then, methods based on recurrence analysis have been actively developed in various areas of biomedical signal analysis [11]. Recurrence analysis has a number of advantages over other methods: simplicity of calculation and the ability to work with a small set of signal points [12]. Moreover, other methods often require the assumption of initial stationarity of the experimental series under consideration [13,14]. It is obvious that real signals of physiological systems, including EEG signals, do not satisfy the requirement of stationarity, exhibiting a complex set of features of dynamic systems with linear and nonlinear properties [15][16][17].

Because this study used polysomnography recordings, a key feature of which is the large record length (typically more than 7 hours of continuous recording at a sampling rate of about 500-1000 Hz), the signals were divided into small time windows, for which joint recurrent indicators and their statistical characteristics were calculated in order to highlight signal features that can be further used as biomarkers.

Recurrence Analysis
Recurrence analysis makes it possible to establish relationships and correlations between signals in complex distributed systems. This method has found application in a wide range of tasks for processing complex signals of various natures [18,19]. The calculation algorithm itself is extremely simple. Let us consider a signal x(t), the values of which are known only at discrete times t_i, where i = 1, …, n. Let this signal x(t_i) be equidistant, i.e.,
t_{i+1} − t_i = Δt for any i. Then the recurrence plane is constructed as follows:

R_{i,j} = θ(ε − |x(t_i) − x(t_j)|),  (1)

where R_{i,j} is an element of the recurrence matrix for the signal x, t_i and t_j are time moments, ε is an empirically determined threshold value that ensures the necessary accuracy of the method, and θ is the Heaviside function, which is defined as:

θ(z) = 1 for z ≥ 0, θ(z) = 0 for z < 0.  (2)

Thus, the recurrence matrix constructed according to expressions (1) and (2) is formed from elements of two types, '0' and '1'. A matrix element is equal to '1' if the value of the signal x(t_i) at time t_i falls into the ε-neighborhood of the signal value x(t_j) at time t_j. Conversely, a matrix element is equal to '0' if the values of the signal x at times t_i and t_j are far from each other. These recurrence matrices (1) are often shown graphically in the form of recurrence diagrams, in which colored dots correspond to unit values and white dots to zero values of the matrix. Thus, the recurrence properties of the time series x(t_i) are represented in the form of geometric structures and allow one to visualize the dynamics of the series as a simple graphical convolution.

Recurrence analysis includes methods for studying the location of points on the constructed surface of a recurrence diagram [20], which have been used in recent years to process stochastic time series of various natures [10,21]. Further, with the development of machine learning methods, convolutional neural networks began to be used to directly recognize geometric structures appearing on recurrence diagrams [22,23].

Note that in the case of single-frequency periodic dynamics, the recurrence diagram resembles a grating whose period corresponds to the period of oscillation of the system [24]. In the case of multi-frequency periodic dynamics, a superposition of gratings with different periods is observed. The greater the number of repetitions of a particular value, the more corresponding elements of the recurrence matrix are equal to '1'. From this it is easy to conclude that the higher the oscillation frequency, the more points appear on the recurrence diagram. This fact makes it easy to identify the most frequently occurring values in the signal [11]. Therefore, recurrence analysis, although it does not belong to the group of frequency methods, automatically takes into account the frequency of signal oscillations. To estimate the number of repetitions in the signal as a whole, the following recurrent indicator is used:

RI = (1/n²) Σ_{i,j=1}^{n} R_{i,j}.  (3)

Such an indicator can be calculated for each analyzed signal x over the entire recorded length or over the required time fragment.

To compare two signals, an analogous calculation of joint recurrence diagrams and joint recurrent indicators can be used. Formula (1) changes only slightly:

JR_{i,j} = θ(ε − |x(t_i) − x(t_j)|) · θ(ε − |y(t_i) − y(t_j)|),  (4)

where JR_{i,j} is an element of the joint recurrence matrix for the signals x and y, t_i and t_j are time moments, and ε and θ have the same meaning as in formula (1). Formula (4) thus assigns the value 1 to an element of the joint recurrence matrix only if, at moments t_i and t_j, both signals x and y are in their ε-neighborhoods. Then, by analogy with formula (3), we can calculate the joint recurrent indicator:

JRI = (1/n²) Σ_{i,j=1}^{n} JR_{i,j}.  (5)

This indicator is very useful, as it shows how often the two signals demonstrate similar dynamics, that is, returns of the signal variables at the same points in time.
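The following Python sketch is a minimal, illustrative implementation of equations (1)-(5) for one-dimensional signals; the threshold eps and the synthetic test signals are placeholders, not values from the study.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Recurrence matrix per Eq. (1): R[i, j] = 1 if |x_i - x_j| <= eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances |x(t_i) - x(t_j)|
    return (d <= eps).astype(int)         # Heaviside thresholding, Eq. (2)

def joint_recurrent_indicator(x, y, eps):
    """JRI per Eqs. (4)-(5): fraction of (i, j) pairs where both signals recur."""
    jr = recurrence_matrix(x, eps) * recurrence_matrix(y, eps)  # Eq. (4)
    return jr.mean()                                            # Eq. (5)

# Toy usage with two synthetic "EEG channels"; eps is an empirical threshold.
t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * t + 0.3) + 0.1 * np.random.randn(t.size)
print("JRI =", joint_recurrent_indicator(x, y, eps=0.2))
```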
Statistical Metrics
In the course of this work, a large number of standard statistical metrics were calculated for the computed joint recurrent indicators. This section provides the corresponding calculation formulas for all of them.

Average value
The formula for calculating the average is simple:

x̄ = (1/n) Σ_{i=1}^{n} x_i.  (6)

Harmonic mean
The harmonic mean is one of the ways in which one can understand the "average" value of some set of numbers:

H = n / (Σ_{i=1}^{n} 1/x_i).  (7)

Geometric mean
The geometric mean of several positive real numbers is a number that can replace each of these numbers so that their product does not change:

G = (Π_{i=1}^{n} x_i)^{1/n}.  (8)

Cubic mean
The cubic mean is a characteristic of volumetric features. It is a special case of the power mean and therefore obeys the inequality of means; in particular, for any numbers it is not less than the arithmetic mean:

C = ((1/n) Σ_{i=1}^{n} x_i³)^{1/3}.  (9)

Median
The median, or middle value of a set of numbers, is the number that is in the middle of this set when it is ordered in ascending order; that is, a number such that half of the elements of the set are not less than it, and the other half are not greater. An equivalent definition: the median of a set of numbers is the number for which the sum of distances (more strictly, of moduli of differences) from all numbers of the set is minimal.

Dispersion
The dispersion (variance) of a random variable is a measure of the spread of the values of the random variable relative to its mathematical expectation. The formula for calculating a biased estimate of the variance from a sequence of realizations of the random variable has the form:

σ² = (1/n) Σ_{i=1}^{n} (x_i − x̄)².  (10)

Standard deviation
The standard deviation is the most common indicator of the spread of the values of a random variable relative to its mathematical expectation (an analogue of the arithmetic mean with an infinite number of outcomes). It usually means the square root of the variance of the random variable:

σ = sqrt(σ²).  (11)

The coefficient of variation
In probability theory and statistics, the coefficient of variation, also known as the relative standard deviation, is a standard measure of the dispersion of a probability or frequency distribution:

CV = σ / x̄.  (12)

Asymmetry indicator
The asymmetry (skewness) indicator is a value in probability theory that characterizes the asymmetry of the distribution of a given random variable:

γ₁ = (1/(nσ³)) Σ_{i=1}^{n} (x_i − x̄)³.  (13)

Kurtosis indicator
The kurtosis indicator in probability theory is a measure of the sharpness of the peak of the distribution of a random variable:

γ₂ = (1/(nσ⁴)) Σ_{i=1}^{n} (x_i − x̄)⁴ − 3.  (14)

Based on the recorded EEG signals, joint recurrent indicators were constructed between EEG channels. Initial marking of the dependence of the joint recurrent indicator on time was carried out to identify special sleep events. If the joint recurrent indicator exceeded its mean value by more than the standard deviation (JRI_i > JRI_mean + σ_JRI) for longer than 150 seconds, this event was marked, that is, its start and end times were recorded, and it was noted as a positive sleep anomaly. Events were similarly marked when the indicator stayed below the mean value by more than the standard deviation (JRI_i < JRI_mean − σ_JRI) for longer than 150 seconds; such sleep events were noted as negative anomalies.
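As an illustration of the event-marking rule just described, the sketch below scans a JRI time series for intervals longer than 150 seconds above or below the mean ± standard deviation band; the sampling step and input series are hypothetical placeholders.

```python
import numpy as np

def mark_anomalies(jri, dt, min_duration=150.0):
    """Mark positive/negative sleep anomalies: intervals longer than
    min_duration seconds where JRI stays above mean + sd (positive)
    or below mean - sd (negative), per the thresholds in the text."""
    mean, sd = jri.mean(), jri.std()
    events = []
    for label, mask in [("positive", jri > mean + sd),
                        ("negative", jri < mean - sd)]:
        start = None
        for i, flag in enumerate(np.append(mask, False)):  # sentinel closes runs
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if (i - start) * dt >= min_duration:       # reject short extremes
                    events.append((label, start * dt, i * dt))
                start = None
    return events

# Toy usage: one JRI value per time window, one window every 10 s (dt = 10).
jri = np.random.rand(3000)
print(mark_anomalies(jri, dt=10.0)[:5])
```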
It is worth noting that the signal may contain individual extremes that go beyond the established boundaries, but these were not marked as events, since their length does not exceed 150 seconds. Subsequently, statistical characteristics were calculated separately for positive and negative anomalies in order to divide the patient groups into clusters and identify possible biomarkers from these clusters.

Results
The work calculated a large number of statistical characteristics of the joint recurrent indicators (JRIs) corresponding to special sleep events. Characteristics for positive and negative sleep anomalies were considered separately. The patients were initially divided into two groups: the first group included patients with an apnea-hypopnea index below 25, while in the second group the apnea-hypopnea index exceeded 25. Special sleep events for these groups were calculated separately in order to compare their statistical patterns and, where possible, identify simple linear classifiers for clustering according to these patterns.

Clustering was carried out based on the support vector machine and the k-means algorithm for each pair of statistical characteristics constructed from the joint recurrent indicators. The essence of the method is as follows: the centers of mass of the distributions of both groups are located in the two-dimensional space of a pair of statistical characteristics. Points located farther than three standard deviations from the center of mass are removed from further consideration to avoid the influence of statistical outliers on the results. A straight line is then drawn through the two centers of mass, and the midpoint between them on this line is found. A perpendicular line constructed through this midpoint separates the resulting clusters; it is this perpendicular that is taken as the linear classifier.

To assess the accuracy of this method, a specially introduced coefficient µ is calculated. Its calculation involves measuring the distance to the linear classifier for each point of each group: a normal is dropped from the point onto the classifier line, and the Euclidean distance from the point to the line is computed. Since different characteristics have different distribution widths, the resulting distance is normalized by the distance from the center of mass to the origin of coordinates. The coefficient is the sum of all such distances; however, if a point lies on the side of the linear classifier opposite to its own center of mass, its distance is taken with a minus sign. Thus, the greater the value of the coefficient µ, the better the resulting linear classifier divides the groups into clusters. Figure 1 shows examples of successful and failed clustering.

For the example shown in Figure 1a the coefficient µ takes the value 186.9, while for the example in Figure 1b, µ = 14.29. The coefficient µ can also take negative values if the division into clusters was very unsuccessful.

Tables 1 and 2 show all coefficient values for negative and positive sleep anomalies. From these tables, we can identify those pairs of statistical characteristics, for each type of sleep event, that best separate these events and are the most likely candidates for the role of biomarkers.
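A minimal sketch of the classifier construction and the coefficient µ follows, assuming two groups of points in the plane of a pair of statistical characteristics; the outlier-removal step is omitted for brevity, and the toy data are not from the study.

```python
import numpy as np

def mu_coefficient(group_a, group_b):
    """Signed, normalized distance sum to the perpendicular-bisector
    classifier between two 2D clusters (a sketch of the method in the text)."""
    ca, cb = group_a.mean(axis=0), group_b.mean(axis=0)  # centers of mass
    mid = (ca + cb) / 2.0                                # classifier passes here
    w = (cb - ca) / np.linalg.norm(cb - ca)              # classifier normal
    mu = 0.0
    for pts, center, sign in [(group_a, ca, -1.0), (group_b, cb, 1.0)]:
        # signed point-to-line distance, positive on the group's own side,
        # normalized by the distance from the center of mass to the origin
        d = sign * (pts - mid) @ w
        mu += np.sum(d / np.linalg.norm(center))
    return mu

# Toy usage: two Gaussian clusters in the plane of two statistical metrics.
rng = np.random.default_rng(0)
a = rng.normal([1.0, 1.0], 0.2, size=(50, 2))
b = rng.normal([3.0, 2.0], 0.2, size=(50, 2))
print("mu =", mu_coefficient(a, b))
```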
Tables 1 and 2 show that the coefficient values are symmetrical relative to the main diagonal, which is left empty for obvious reasons. The filling symmetry is logical, since in this case the clusters will be identical up to a change of variables. It is worth noting, however, that in this work only the values below the main diagonal were calculated, and the values above the main diagonal were filled in as a mirror image. This remark is important, since Tables 3 and 4 show the coefficients of the linear classifier obtained for pairs of statistical characteristics located below the main diagonal. For simplicity, to reduce the number of tables owing to the symmetry of the data, in Tables 3 and 4 the coefficients a of the linear classifier are located above the main diagonal, and the coefficients b below it. Thus, using Table 1 one can determine the statistical characteristics most suitable for dividing patients into groups for negative sleep anomalies, while using Table 3 one can restore the form of the linear classifier that was used to separate the data.

From the analysis of Table 1 it is clear that the characteristics most suitable for a biomarker are the average values (arithmetic and harmonic), as well as the median and, to a lesser extent, the variance. Between them, as a rule, the coefficient µ takes values exceeding 100, which is a good result. Table 2 shows that for positive sleep anomalies, instead of the variance, the asymmetry indicator allows clustering together with the average values. However, the coefficient µ achieves its greatest value precisely when clusters are constructed based on the variance and the asymmetry indicator.

Discussion
The results list statistical characteristics that can be used as biomarkers to distinguish between patients with and without sleep apnea. However, these results are not yet sufficient to form full-fledged biomarkers. Firstly, the results can be improved by using nonlinear functions instead of a linear classifier. Some pairs that did not give a good separation into clusters according to the results of Tables 1 and 3 may give a good result when nonlinear classifiers are used, which would shift the priorities for creating stable biomarkers.

Secondly, before forming a final opinion on the effectiveness of the separation, it is necessary to consider the division into clusters according to the given classifiers in the multidimensional space of statistical characteristics. Thus, for negative anomalies it makes sense to construct a five-dimensional space from the arithmetic mean, harmonic mean, geometric mean, median and variance. If the division into clusters persists in five-dimensional space, then using these characteristics it will be possible to build a system for recognizing apnea syndrome.

It also makes sense to cluster according to three or four statistical characteristics by building a multidimensional space. As with a nonlinear classifier, in this case the result may change. Thus, this article is a first important step toward identifying biomarkers for the early diagnosis of apnea; however, a great deal of additional research is still required to create an effective system for recognizing apnea syndrome in its early stages.
Conclusion
In this article, joint recurrent indices were calculated for polysomnographic recordings of patients with apnea, together with statistical metrics of special sleep events. Metrics were calculated separately for positive sleep anomalies (when the joint recurrent indicator exceeds its mean value by more than the standard deviation for a long time) and for negative anomalies (when the indicator stays below its mean value by more than the standard deviation). Each calculation included the mean, geometric mean, harmonic mean, dispersion, standard deviation, coefficient of variation, skewness index and kurtosis index. Each pair of statistical metrics was used to find a linear classifier by which patients with different apnea-hypopnea index values can be distinguished. To assess the quality of separation by this linear classifier, the coefficient µ was calculated based on the distance from the points of each cluster to the linear classifier. In addition to the calculated coefficients µ, the parameters a and b used for the linear classifier are given (with the classifier equation y = ax + b). A comparative analysis of the coefficient µ showed that the classifier associated with the median values works best. The asymmetry index and the harmonic mean are also often effective for dividing patients into groups. These metrics are the most promising for the search for biomarkers for the early diagnosis of apnea using polysomnography data.

Polysomnography data
The subjects were individuals with nocturnal respiratory dysfunction in the form of obstructive sleep apnea syndrome (N=30, age 48.0±19.1 years, median 43 years, male to female ratio 18/12). Sleep duration was 6-9 hours, from 21:30-23:30 until the patient's usual time of awakening. Polysomnographic recording included electrocardiogram (ECG), respiratory function, oculography (OCG), electromyogram (EMG) and two electroencephalogram (EEG) signals recorded during night sleep. The ECG signal was recorded in standard lead I according to Einthoven. Respiration signals were recorded using a flow-through oronasal temperature sensor and a snoring sensor. EMG signals were recorded on the patient's chin, right forearm and left shin. OCG signals included recordings of horizontal and vertical eye movements. EEG signals were recorded in 2 standard leads according to the 10-20 scheme, bandpass filtered at 0.1-40 Hz and sampled at 500 Hz (Δt = 0.002 seconds). The recording of each EEG channel can be considered as a separate one-dimensional signal x(t_i) for subsequent recurrence analysis.

Figure 1. Examples of successful and failed clustering. (a) An example of successful separation by harmonic mean and median for negative sleep anomalies. (b) An example of erroneous division based on kurtosis and median for positive sleep anomalies. Red and green dots show the obtained statistical characteristics in two-dimensional space, and the black line shows the linear classifier.
Evaluating Biogenicity on the Geological Record With Synchrotron-Based Techniques

The biogenicity problem of geological materials is one of the most challenging ones in the field of paleo- and astrobiology. As one goes deeper in time, the traces of life become feeble and ambiguous, blending with the surrounding geology. Well-preserved metasedimentary rocks from the Archaean are relatively rare, and in very few cases contain structures resembling biological traces or fossils. These putative biosignatures have been studied for decades and many biogenicity criteria have been developed, but there is still no consensus for many of the proposed structures. Synchrotron-based techniques, especially at new generation sources, have the potential to contribute to this field of research, providing high sensitivity and resolution that can be advantageous for different scientific problems. Exploring X-ray and matter interactions on a range of geological materials can provide insights on morphology, elemental composition, oxidation states, crystalline structure, magnetic properties, and others, which can measurably contribute to the investigation of the biogenicity of putative biosignatures. Here, we provide an overview of selected synchrotron-based techniques that have the potential to be applied to different types of questions in the study of biosignatures preserved in the geological record. The development of 3rd and, recently, 4th generation synchrotron sources will favor a deeper understanding of the earliest records of life on Earth and also bring up potential analytical approaches to be applied in the search for biosignatures in meteorites and samples returned from Mars in the near future.

INTRODUCTION
Elucidating the origin and evolution of life on Earth, as well as the possibilities of its occurrence and preservation outside the planet, are topics of central interest to the astrobiological community. For the search for life in extraterrestrial worlds, distinguishing and understanding the preservation of the earliest records of life on Earth is a step of fundamental importance. Although evidence of life has been reported in rocks more than 3.7 billion years old (Ohtomo et al., 2014), several descriptions of very ancient chemical or morphological biosignatures have been debated by the scientific community over the past decades (Brocks et al., 1999; Brasier et al., 2015; Nutman et al., 2016; Dodd et al., 2017; Schopf et al., 2018). The challenge becomes even greater when the putative biosignatures are from outside Earth (whether in meteorites, or chemical or morphological signals measured in situ or to be collected in future missions to Mars). The Viking missions in the 1970s found chemical traces on the surface of Mars that could be related to metabolism (Levin and Straat, 1977), which were later contested. The same happened in the 1990s, when McKay et al. (1996) described supposed biological microscopic structures in the ALH84001 meteorite. More recently, Noffke (2015) presented sedimentary structures on Mars resembling terrestrial microbially-induced sedimentary structures (MISS), based on morphological analysis of images from the rover Curiosity, which stirred the astrobiological community with the hope of finding chemical biosignatures or other evidence that could support their biogenicity.
The application of analytical techniques in paleo- and astrobiology has contributed to the advancement of knowledge about the indicators of past or present biological activity from macroscopic to nanometric scales. Techniques such as Raman Spectroscopy, X-ray Diffraction, X-ray Tomography, Electron Microscopy and nanoscale secondary ion mass spectrometry (NanoSIMS) are now being routinely applied in the Earth sciences. Meanwhile, techniques based on synchrotron light are gaining more space as promising tools for the non-destructive investigation of paleobiological samples, giving rise to an emerging field known as paleometry (Gomes et al., 2019). Some of these techniques are summarized in Table 1. Synchrotron accelerators produce light with unique qualities, such as high brilliance (flux of photons per unit area), a broad, continuous energy range, and high coherence, which are unattainable by conventional light sources, such as lasers, lamps or X-ray tubes. These characteristics allow the use of multiple physical phenomena to unveil the original elemental and molecular composition, mineralogy and morphology of samples, with higher sensitivity, penetrability and resolution than conventional techniques applied for the same purposes. In the present review, we will discuss recent applications and potentials of some synchrotron-based techniques for evaluating biosignatures in the geological record. Biosignatures include, among others: (i) morphological features, such as preserved cells or extracellular components, for example fossilized extracellular polymeric substance (EPS); (ii) biogenic minerals, either biologically-induced (e.g., biogenic dolomite precipitated in microbialites and microbial mats) or biologically-controlled (e.g., endoskeletons and hard mineralized parts of metazoans), which sometimes present crystallographic characteristics and/or physical properties that can be distinguished from those of abiotic minerals (e.g., biogenic magnetite, such as the magnetite produced inside the cells of magnetotactic bacteria as magnetosomes); (iii) specific textures or biogenic fabrics in rocks originating from microbial activity, such as fenestral fabrics in microbialites; (iv) molecular vestiges linked to biological activity, such as organic ligands, lipids, organic macromolecules, etc.; (v) specific chemical characteristics of bioprocessing, such as chirality in organic molecules; (vi) stable isotope patterns favorable to biogenicity and bioprocessing. Interpreting these signals as biosignatures is not always trivial, especially if the samples are from a deep-time context (Javaux, 2019). One example is the distribution pattern of bio-elements (trace or major) in the samples of interest. The interpretation of signals of past life in these distributions may be complex, and other aspects need to be considered, such as the co-occurrence of other types of biosignatures, morphological and chemical (isotopes, co-localization of elements, combinations with organic signatures, etc.), or characteristics of the authigenic mineralogy and properties of the original geological context. The complexity of interpretation also increases considering the possibility of later contamination and the many changes in the original mineralogy and geological features: the action of metamorphism and changes of temperature can degrade or modify organic biosignatures, among several other alterations during post-burial processes over time.
Thus, it is important to interpret signs of early life critically in order to avoid misinterpretations and/or equivocal biosignatures. There are several examples in the literature regarding the contestation or reassessment of alleged evidence of ancient life on Earth and outside it (McKay et al., 1996; Golden et al., 2000; Thomas-Keprta et al., 2002, 2009; Steele et al., 2012). Some will be discussed in the following specific sections.

IMAGING MORPHOLOGICAL BIOSIGNATURES
Morphological traces of microorganisms are important biosignatures for providing direct insights into ancient life ultrastructure, evolution and paleo-environment, and also for helping to understand the physico-chemical conditions that allowed their long-term preservation. High-resolution microscopies applied so far, such as Scanning Electron Microscopy (SEM) or Transmission Electron Microscopy (TEM), have allowed significant advances in the understanding of the earliest fossilized microbes. Their application, however, is limited by the low penetration depth of electrons, thus requiring destructive sample preparation to expose the structures from within the rock matrix (potentially introducing artifacts), or the preparation of ultrathin (<100 nm) sections that can sample only a small fraction of the specimen. 3D images can be achieved by using a focused ion beam (FIB) coupled to a SEM (FIB-SEM), combining sequential milling with concurrent Energy Dispersive Spectroscopy (EDS-SEM) imaging. Although it is a destructive approach, it has allowed even some of the most ancient microfossils to be investigated at the nanoscale, providing insights into their chemistry, ultrastructure and taphonomy and helping to investigate the biogenicity of these structures (Wacey et al., 2012; Brasier et al., 2015). X-rays can penetrate objects and provide information on their interior non-destructively, and therefore have the potential to overcome the limitations of electron microscopy. Conventional X-ray imaging, such as the widely known Computed Tomography (CT), is based on the absorption of X-rays, a physical interaction which relies on the densities and/or atomic numbers of the materials comprising the specimens. For mineralized structures such as some fossils, this can represent a limitation in contrast. Synchrotron sources allow other contrast modalities to be explored, such as phase contrast. Phase-contrast µ-CT has become critical for paleobiological studies in the last decades (Tafforeau et al., 2006; Cunningham et al., 2012; Maldanis et al., 2016), owing to its capacity to extract 3D information from these homogeneously dense specimens non-invasively, revealing even the preservation of soft tissues. Recently, the limit of resolution of X-ray microscopy has been pushed forward by the development of techniques based on Coherent Diffraction Imaging (CDI), especially ptychography. This lensless method allows specimens of tens of microns to be imaged with nanometric resolution, and can also be applied in 3D, receiving the name of Ptychographic X-Ray Computed Tomography, or PXCT (Holler et al., 2014). This imaging approach is based on the far-field collection of partially overlapping diffraction patterns while scanning the sample. This redundancy of measurements allows the reconstruction of the specimen's complex refraction index (both absorption and phase components) using iterative algorithms instead of X-ray lenses.
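To make the absorption-versus-phase argument concrete, the following sketch evaluates the complex transmission of a thin slab with refractive index n = 1 − δ + iβ. The numerical values of δ, β and thickness are only order-of-magnitude illustrations for light materials at hard X-ray energies, not measured parameters from any of the cited studies.

```python
import math, cmath

def transmission(delta, beta, thickness_m, wavelength_m):
    """Complex amplitude transmission through a homogeneous slab with
    refractive index n = 1 - delta + i*beta: the phase shift relative to
    vacuum is -k*delta*t and the amplitude attenuation is exp(-k*beta*t)."""
    k = 2.0 * math.pi / wavelength_m
    phase_shift = -k * delta * thickness_m            # phase-contrast signal
    attenuation = math.exp(-k * beta * thickness_m)   # absorption-contrast signal
    return attenuation * cmath.exp(1j * phase_shift)

# Illustrative numbers only: a 5-um feature at ~10 keV (wavelength ~0.124 nm),
# with delta ~ 1e-6 and beta ~ 1e-9, typical orders of magnitude for light
# materials; the phase signal survives where absorption contrast is negligible.
t = transmission(delta=1e-6, beta=1e-9, thickness_m=5e-6, wavelength_m=1.24e-10)
print(f"|T| = {abs(t):.6f}, phase = {cmath.phase(t):.3f} rad")
```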
PXCT has so far been only briefly applied to the study of microfossils (Cunningham et al., 2015; Guizar-Sicairos et al., 2015), and its potential for evaluating the biogenicity of morphological biosignatures has yet to be fully explored. Nevertheless, its potential for unveiling whole fossilized microbes within their geological context also makes it a promising methodology for searching for morphological biosignatures potentially preserved in rocks returned from Mars in the near future.

[Table 1 near here. The table compares the techniques discussed here with conventional approaches, showing advantages, disadvantages and examples of applicability; a table note warns that not all minerals will luminesce under a given condition, so luminescence has to be combined with other techniques for a complete interpretation of the data (Kolodny et al., 1996; Rakovan and Reeder, 1996; Dartnell and Patel, 2014; Gaft et al., 2015; Lin et al., 2015; Marshall et al., 2017; Shkolyar et al., 2018).]

ELEMENTAL MAPPING OF CHEMICAL BIOSIGNATURES
Metabolic processes of biological systems can generate traces or patterns uncommon in abiotic systems. For geobiological materials, these biosignatures can be present in the form of biominerals with different crystalline organizations or specific distributions, mineral assemblages in association with organic matter, and modifications of the mineral surface (Banfield et al., 2001). Some elements are considered to be bio-essential and biofunctional (e.g., P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Mo, As, and Pb) (Williams and Frausto da Silva, 1996), and when co-localized and/or associated with specific morphological patterns, such as layered distributions or consonance with organic matter, they could represent biosignatures, including results of biological processes. Moreover, the geochemical composition and growth processes of microbial mats favor the binding of some metals, which can be reorganized during the development of the community, forming biominerals or being adsorbed on the surface of minerals. Identifying and mapping the chemical elements present in geobiological structures can provide means of supporting their biogenicity and also of understanding their ecological interactions and modes of preservation. Nevertheless, identifying these elements and distinguishing patterns generated by biotic from abiotic processes require approaches with high spatial resolution and sensitivity to a broad range of elements at trace concentrations. In biogenicity studies, in particular for elemental distribution analyses such as X-ray fluorescence mapping, it is important to consider the co-localization of elements with morphological structures of interest, the original lithology and its possible alterations, the co-occurrence of biogeochemical elements of interest, and their abundance limits, taking into account the geological and preservational context. Combining as much evidence as possible on the depositional history, and especially on the diagenetic alterations that may have changed the preservation of authigenic characteristics, is important to minimize misinterpretation, especially in deep-time rocks. Synchrotron-based X-ray fluorescence (SR-XRF) allows the identification, mapping and semi-quantification of chemical elements even at concentrations of parts per billion (ppb), and has a spatial resolution primarily dependent on the size of the X-ray beam, which, in the case of the new generation of synchrotrons, can reach nanometric dimensions.
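As a simple illustration of assessing elemental co-localization in XRF maps, the sketch below computes a Pearson correlation between two hypothetical count maps, optionally restricted to a morphological mask; as stressed above, real studies combine such scores with mineralogical, morphological and organic evidence rather than relying on them alone.

```python
import numpy as np

def colocalization(map_a, map_b, mask=None):
    """Pearson correlation between two elemental XRF maps as a simple
    co-localization score (illustrative only; not a biogenicity test)."""
    a = np.asarray(map_a, float).ravel()
    b = np.asarray(map_b, float).ravel()
    if mask is not None:                     # restrict to a structure of interest
        m = np.asarray(mask, bool).ravel()
        a, b = a[m], b[m]
    return np.corrcoef(a, b)[0, 1]

# Toy usage: Fe and Zn count maps over a 64x64 scan region.
rng = np.random.default_rng(1)
fe = rng.poisson(50, size=(64, 64)).astype(float)
zn = 0.4 * fe + rng.poisson(10, size=(64, 64))   # partially correlated channel
print("Fe-Zn co-localization r =", round(colocalization(fe, zn), 3))
```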
The range of elements that can be detected with this approach depends on the energy of the X-rays, since the fluorescence phenomenon relies on the removal of strongly bound inner-shell electrons, creating vacancies that need to be filled by outer electrons. The difference in energy between the electrons involved in this process is emitted as photons, generating a fluorescence pattern that is specific to each element. One case study used SR-µ-XRF to discuss the biogenicity of pyrites in a microbial context, in order to understand the role of bacteria in the taphonomic process of well-preserved fossilized organisms in the Crato Basin, Brazil (Osés et al., 2017). The authors combined morphological biosignatures, such as putative fossilized bacterial EPS observed in SEM together with framboidal pyrites, with SR-µ-XRF mapping to identify metals (i.e., Fe, Cu, and Zn) that could have been incorporated into the system by microbial activity (Figure 1). Sforna et al. (2016) used SR-µ-XRF and complementary approaches to assess metal incorporation in living microbialites and its remobilization during simulated diagenetic processes. They mapped the distribution of metals triggered by secondary abiotic processes and provided a basis of comparison for evaluating ancient Precambrian microbialites. Recently, SR-µ-XRF was also applied in combination with conventional techniques and magnetic analysis by Callefo et al. (2019), in order to evaluate the biogenicity of iron minerals in carboniferous rhythmites (periodic sedimentary depositions). The distribution pattern of iron in co-occurrence with putative MISS and organic matter, plus a magnetic signal compatible with biogenic magnetite, indicated a biological origin for the iron minerals, allowing a reassessment of the depositional history of the geological site.

FIGURE 1 | Application of synchrotron-based X-ray microfluorescence to a fossil fish from the Crato Member, Brazil (Osés et al., 2017), showing the potential of elemental mapping for providing information about the original elemental composition (interpreted as biosignatures) and diagenetic processes (interpreted as secondary incorporation). In this case, the authors used elemental mapping to identify metals (i.e., Fe, Cu, and Zn) related to incorporation by microbial activity, in order to discuss the biogenicity of pyrites and understand the role of bacteria in the taphonomic process. For both maps blue shows the highest X-ray intensity and red the lowest (counts per second); the maps are for K-shell X-rays except for Ba, which was detected using L-shell X-rays (Allwood et al., 2018). Reprinted by permission from Springer Nature.

Studying organic-walled microfossils, Marshall et al. (2017) reported the application of XRF for mapping V, an element present in chlorophyll and heme porphyrin pigments. The authors propose that the co-localization of this element with microfossil-like morphologies and carbonaceous composition can be used as a biosignature for putative microfossils, and could also be applied to samples returned from Mars in the near future. Allwood et al. (2018), in order to reassess the biogenicity of the putative microbialites from the 3.7 Ga Isua supracrustal belt published by Nutman et al. (2016), used the distribution of trace elements relative to minerals in the putative structures (Figure 2).
In combination with other techniques revealing the three-dimensional shape, morphology and orientation of the structures, X-ray fluorescence showed the distribution of major and trace elements within the morphology of the putative stromatolites, revealing an alternative explanation for their origin. For the authors, a non-biological formation of the structures is plausible, as a feature produced after burial by deformation of carbonate-altered metasediments. The authors argue that the trace elements in the three-dimensional structure, previously interpreted (by Nutman et al., 2016) as evidence of primary marine carbonate sedimentation, in fact reflected a mixture of phases, including micas, not only dolomite. Since micas can carry trace elements in these rocks throughout their geological history, the biogenicity of the structures becomes questionable. This case illustrates the need for further analyses and other evidence to complement the use of elemental biosignatures.

DIFFERENTIATING BIOTIC AND ABIOTIC ELEMENT SPECIES
One of the main challenges in the search for chemical biosignatures associated with microfossils and microbialites is the fact that the original biochemical components of microorganisms are degraded and altered over time, while other non-biogenic elements are incorporated in the course of diagenetic processes (Lemelle et al., 2008). The metabolic activity of microorganisms, however, generates redox heterogeneities, which, sometimes in association with organic compounds, can constitute biosignatures (Miot et al., 2014). The speciation of elements provides information about oxidation state and chemical neighborhood, which can provide valuable insights into microorganism-mineral interactions and ancient metabolisms. The possibility of tuning the energy of X-rays at synchrotron sources allows the application of X-ray Absorption Spectroscopy (XAS), an approach based on the analysis of the absorption profile of a given element as core electrons are removed by the incident X-rays (the same phenomenon described above for the generation of the fluorescence signal). The different regions analyzed in the absorption spectra give rise to two different techniques, X-ray Absorption Near Edge Structure (XANES) and Extended X-ray Absorption Fine Structure (EXAFS). These techniques are strongly sensitive to the oxidation state of the elements and can also provide information about the coordination of the absorbing ions. When associated with the small beams available at scanning X-ray microscopes, speciation maps with nanometric resolution can be achieved. These techniques have already been applied at low concentrations and under less-than-perfect sample conditions (Newville, 2014), as usually found in natural geological samples. There are several examples of the use of these techniques both in reinforcing and in questioning the biogenicity of ancient microbialites and microfossils. Lemelle et al. (2008) found biomarkers related to organic S associated with microfossils by evaluating the sulfur K absorption edge. Grosch et al. (2017) used XANES to question the biogenicity of 3.47 Ga filament-shaped titanite microtextures in early Archean samples from the Barberton Greenstone Belt, South Africa, which were considered the oldest microbial trace fossils on Earth. The authors used temperature maps combined with µ-XANES Fe speciation profiles in chlorites present in the metabasalts that contained the filaments.
The data pointed to metamorphic constraints that were incompatible with the biogenicity of the structures. In contrast, De Gregorio et al. (2009) used XANES with Scanning Transmission X-ray Microscopy (STXM) and complementary techniques in order to reinforce the biogenicity of 3.5 Ga putative microfossils from the Apex Chert, Western Australia. The authors found characteristics in these specimens similar to biogenic kerogen from the ca. 1.9 Ga Gunflint Formation. In this study, XANES provided information about the chemical complexity of the kerogen, which presented aromatic carbon and oxygenated functional groups. Sancho-Tomás et al. (2018) applied XANES and µ-XRF in combination with conventional analyses and showed the relationship between the biological processing of As and the mineralogy in recent hypersaline microbial mats, by evaluating the mineral occurrence, As speciation and elemental distribution. With the intense photon beams of 3rd and 4th generation synchrotron sources, radiation damage to the samples should also be considered. Potentially preserved organic molecules are the most fragile and can suffer photooxidation and breakup depending on the measuring conditions. However, even inorganic signatures can be altered, as X-rays can change the oxidation state of elements (e.g., photo-reduction of sulfur, as reported by Moussallam et al., 2014) or produce defects in the crystal lattice of minerals. It is possible to take advantage of the brilliant beams while minimizing exposure by performing very fast scans, both in energy (for spectroscopy) and in space (for imaging). This should be taken into consideration in the design of new beamlines intended for radiation-sensitive materials. Theoretical calculations and test measurements with standards can optimize the systems before the measurement, in order to collect enough signal for the study while delivering the minimum possible dose. Different strategies for mitigating and monitoring these effects in ancient materials have been reviewed by Bertrand et al. (2014). The chemical information obtained with XAS can also be used as contrast for imaging: STXM is a type of X-ray microscopy which uses XANES as its contrast mechanism (Ade and Urquhart, 2002). This approach usually works in the soft X-ray energy range (130-2,500 eV) and can reach nanometric spatial resolution. This range of energy can interact with almost all elements and allows chemical species to be mapped based on bonding structure. The use of soft X-rays also reduces the risk of radiation damage in comparison with electron-beam techniques (Lawrence et al., 2003). One advantage is the possibility of working with bulk samples, as long as they are transparent to the beam, and of imaging a number of key elements of interest in the same sample. This technique has been utilized for determining the speciation of elements such as carbon and nitrogen in microfossils at the submicrometric scale, for example in organic microfossils from the 1.88 Ga Gunflint Formation (Alleon et al., 2016), refining knowledge about the degradation of organic biosignatures over time, especially under the temperature changes of diagenetic processes. Infrared techniques can also be powerful tools for detecting and studying the preservation or alteration of organic biosignatures.
It is possible to retrieve absorption or emission infrared spectra of samples in the gas, liquid or solid state, with high spectral resolution over a wide spectral range. Although there are several studies using conventional infrared techniques for the study of biosignatures (Guido et al., 2012; Preston et al., 2014; Gordon and Sephton, 2016; Gaboyer et al., 2017; Igisu et al., 2018; Stevens et al., 2019), there is still a lack of references on the use of synchrotron-based FTIR (SR-FTIR) techniques for this purpose. The synchrotron-based approach has the advantage of high flux over a broad energy range, which allows the acquisition of fast spectra with a high signal-to-noise ratio. For biological signatures, this can mean a decreased risk of degradation of biosignatures during the measurement time. An example of the application of SR-FTIR in evaluating the behavior of chemical biosignatures during the fossilization process was presented by Benning et al. (2004), in which SR-FTIR micro-spectroscopy was applied to determine the response of the organic structure of live cyanobacterial cells. The vibrations of the original components of the microbial cells (specific functional groups related to the cells), together with the characteristic vibrations of silica, were analyzed during progressive silicification. This is especially relevant, as silicification is one of the most common fossilization processes and presents a high capability of biosignature preservation over geological time (Konhauser et al., 2004; Wacey et al., 2011; Campbell et al., 2015; Sugitani et al., 2015; Manning-Berg et al., 2019).

INVESTIGATING THE STRUCTURE, OPTICAL AND MAGNETIC PROPERTIES OF BIOMINERALS
The advantage of biominerals (Dove et al., 2003; Perry et al., 2007; Dupraz et al., 2009) as biosignatures is that they are more resistant to deep-time geological processes and to the several alterations that can be caused by diagenesis. In comparison with organic biomarkers, for instance, biominerals are more persistent in nature over time (Jimenez-Lopez et al., 2010). Biominerals are very important for the study of past life on the planet, as they can represent records of life in very ancient rocks. Magnetofossils, in particular, can be good biomarkers for the presence of past life on Earth and beyond (McKay et al., 1996). However, in order to use them as biosignatures, it is first necessary to know the intrinsic characteristics that distinguish them from minerals of abiotic origin, and to compare them in order to establish biogenicity parameters. Biominerals can be differentiated from minerals of abiotic origin by some intrinsic characteristics they present. It is known that biomineralization processes can influence the organization of minerals, allowing one to differentiate biogenic and abiogenic minerals regarding, for instance, their crystalline structure and physical properties (Chang and Kirschvink, 1989; Bazylinski et al., 1995; Thomas-Keprta et al., 2000; Egli, 2004). In biologically-controlled mineralization, the genetics of the organisms/microorganisms can intrinsically control mineral nucleation according to some physiological or morphological need (Mann, 2001; Dupraz et al., 2009).
This intrinsic control can give rise to characteristics that are distinguishable from those of inorganic minerals, such as the properties and organizations observed in internal and external skeletons (e.g., Ma et al., 2016; Rao et al., 2016) or in magnetites of different origins (Bazylinski et al., 1995; Thomas-Keprta et al., 2000; Körnig et al., 2014). These biominerals (or organominerals, as suggested by Mann (2001) for minerals genetically controlled during their formation) can be considered direct evidence of life, constituting, when properly detected, a consistent biosignature. For instance, there are at least six specific characteristics that can distinguish intracellular magnetite from detrital magnetite (Thomas-Keprta et al., 2000). They are: small crystal size, up to a few dozen nanometers (single domain, SD), controlled mainly by the EPS; chemical purity and crystallographic perfection (fewer contaminants or exogenous elements within the atomic chains that form the crystal lattice); organization in chains (in magnetofossils or fresh cells); uncommon particle shapes (such as bullet-shaped or elongated crystals, which cannot be mimicked by inorganic processes); and a tendency of the crystals, when organized in chains, to elongate along the [111] crystallographic direction. X-Ray Diffraction (XRD) is a potential technique for the study of mineral biosignatures owing to its capability of identifying crystalline phases, such as inorganic and organic ordered structures preserved and/or secreted by living organisms. This kind of technique is selective and can identify synthetic or natural minerals (Tadic and Epple, 2004). Even if a material presents mixed crystal structures, the characteristic peaks produced by different reflection planes may allow the identification of specific crystalline phases. Thus, XRD is a very important tool for evaluating the biogenicity of minerals (Che et al., 2016; Iñiguez et al., 2017), allowing the detection of evidence of past life in ancient rocks, fossils and even, in the future, rocks from Mars. The detection limit of XRD depends on the measurement geometry, incident photon energy, spot size, and flux. Compared to conventional X-ray diffractometers, synchrotron measurements allow the study of very dilute phases with high angular resolution (of the order of 10^-4 degrees in 2θ, in the Bragg-Brentano geometry), which can be used to deconvolute very close peaks. This can be used to distinguish biotic and abiotic crystals, for example, magnetic compounds like greigite and magnetites produced by bacteria (Miot et al., 2014; Till et al., 2017). Still exploring the possibility of detecting biominerals, another technique applicable to mineral biogenicity problems, X-ray Magnetic Circular Dichroism (XMCD), is able to give information about the orbital magnetic moment and the spin of the material. The technique provides information about the 3d electronic states in transition metals, such as Fe, Ni and Co, which are responsible for the magnetic properties of the minerals (Stöhr, 1999; Rogalev et al., 2006). XMCD has been increasingly used to provide detailed information about the electronic and magnetic structure of nanoparticles (van der Laan and Figueroa, 2014), and this can be interesting for the study of biogenicity, especially for the investigation of ferromagnetic biominerals, such as biogenic magnetite and greigite.
The technique is based on the dichroic effect, which occurs when left and right circularly polarized light passing through a material shows differences in absorption coefficients. Dichroism can be caused by the spin or by the anisotropy of the material, such as the magnetic anisotropy of certain minerals (Stöhr, 1999). That is, depending on the crystallographic direction of the mineral, the absorption of the light will be different, generating different spectra. Bacteria can produce extracellular nanoparticles of magnetite through different metabolic pathways, such as iron oxidation under aerobic conditions. Another group, the magnetotactic bacteria, can produce chains of intracellular magnetite, the magnetosomes. These biominerals can be part of the fossil record or of rocks of dubious origin. The XMCD technique has been shown to be a good tool for some biogenicity problems, especially regarding ferromagnetic minerals, as it can provide information about the ratio of iron species in magnetites of different origins (biogenic and inorganic), as well as about crystallinity, mineralogical structure and the purity of the crystal. The crystallographic and magnetic characteristics of biogenic magnetites that allow differentiating them from inorganic minerals (Thomas-Keprta et al., 2000) could also be detectable with XMCD. For example, Carvallo et al. (2008), combining data from TEM analysis, used XMCD to demonstrate the high purity and crystallinity of biogenic nanomagnetite, showing that these particles contained a higher amount of Fe2+ than abiogenic nanomagnetite. The authors used the technique to compare the ratio of Fe2+ to Fe3+ in synthesized biogenic and inorganic magnetite nanoparticles, concluding that the biogenic ones have higher crystallinity and a higher amount of Fe2+ in comparison with the inorganic nanoparticles. The authors concluded that the difference between the biogenic and abiogenic XMCD spectra is bigger than any systematic instrumental error. Also using XMCD, Coker et al. (2007) compared magnetosome crystals with extracellular magnetite and inorganic magnetite, showing the similarity of the magnetosomes with stoichiometric magnetite and their higher chemical purity in comparison with the other, non-intracellular crystals. The works of Coker et al. (2007) and Carvallo et al. (2008) can be useful by presenting parameters to differentiate biogenic and abiogenic magnetites, taking into account the high crystallinity and high Fe2+ content of the intracellular magnetites. The exceptional reducing power of bacteria such as Shewanella putrefaciens probably explains the high concentration of Fe2+ in comparison to nanoparticles of abiotic origin. The optical activity of organic and inorganic compounds has also been proposed as a tool for the detection and characterization of biosignatures. For example, kerogen (Marshall et al., 2017; Shkolyar et al., 2018), proteins (Dartnell and Patel, 2014; Lin et al., 2015) and minerals can be associated with the past presence of life in an environment. Gaft et al. (2015) showed several minerals whose optical activity may be associated with rocks formed in different contexts. The optical channels in these materials, i.e., the ions and/or defects responsible for the luminescence, are described as a function of oxidation state and their optical transitions (emission wavelength, decay time) when stimulated with ultraviolet (UV), visible and infrared (IR) light.
X-ray Excited Optical Luminescence (XEOL) can be used for the same purpose as the UV, visible and IR stimulation described above. However, excitation with X-rays allows the observation of all optically active channels, owing to its capability of exciting core levels, making the optical process dependent on lattice relaxation. Defects may be probed and explored, as well as their characteristics such as oxidation state and origin (intrinsically or extrinsically formed) (Teixeira et al., 2014; Finch et al., 2016; Rezende et al., 2016), and the environment in which they were formed, revealing, for example, when a living organism started its fossilization process or even why a precious gemstone presents a given color (Tao, 2016). XEOL is a photon-in/photon-out technique in which X-rays are used to excite core levels while the emitted light is collected in the range from the UV to the IR (Sham, 2002). It is site-selective and can be used with variable X-ray photon energy, penetrating deep into a structure to excite its optical channels and explore their origins. XEOL combined with techniques such as XRF and XRD can be a powerful tool to provide a complete picture of the composition and element distribution in natural materials. Beamlines of 4th-generation synchrotrons, such as the Carnaúba beamline of the Sirius light source (Tolentino et al., 2017), in Brazil, will have specific setups for multi-technique analysis (XRF, XAS, XRD, and XEOL) of environmental samples, such as rocks and fossils. It will be possible to map optically active channels with a micro/nano-sized, high-resolution probe, allowing exploration, for example, of the presence and characteristics of an ion inside the minerals of fossil bones (Kolodny et al., 1996; Rakovan and Reeder, 1996).

CONCLUSION

The complexity of attesting the biogenicity of geological and paleobiological materials makes it essential to explore multiple and complementary approaches at different length and sensitivity scales. For the micron and nanoscales, synchrotron-based techniques represent the forefront of the application of photons to the inspection of a wide range of materials, allowing complex and heterogeneous samples to be studied at an unprecedented level of detail. Synchrotron approaches are being consolidated as important tools for a deeper understanding of the records of ancient life on Earth and for the non-destructive investigation of extremely rare samples, such as meteorites and the rocks that will be retrieved from Mars in the near-future sample return missions. The recent developments in synchrotron sources also bring good perspectives for the study of biosignatures. The novel 4th-generation sources, such as MAX IV in Sweden and Sirius in Brazil, and the upgraded sources ESRF-II in France, APS-U in the United States and Spring8-II in Japan, are opening up new avenues for the nanoscale investigation of different types of materials. For geobiological specimens, these machines will provide nanometric spatial resolution for resolving preserved morphological fossils and microbial-mineral interactions with different chemical and morphological contrast information, high energy and high spectral resolution for probing, mapping and speciating high-Z elements, and high sensitivity to elements in trace concentrations. These advances will allow complex and important questions on early chemical and morphological biosignatures to be addressed, likely consolidating synchrotron paleometry and nanopaleobiology within the biogeosciences and astrobiology.
AUTHOR CONTRIBUTIONS

All authors contributed to the literature revision and manuscript writing.
2019-10-11T14:34:52.779Z
2019-10-11T00:00:00.000
{ "year": 2019, "sha1": "6ca8a2c9d4b338163db64fe0ca5663baeb471c8f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.02358/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b3806498bfa6881d4bf977f91ccfe093d2731c07", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Medicine", "Geology" ] }
46260542
pes2o/s2orc
v3-fos-license
Conversion of Recombinant Hirudin to the Natural Form by in Vitro Tyrosine Sulfation: DIFFERENTIAL SUBSTRATE SPECIFICITIES OF LEECH AND BOVINE TYROSYLPROTEIN SULFOTRANSFERASES*

Hirudin, a tyrosine-sulfated protein secreted by the leech Hirudo medicinalis, is one of the most potent anticoagulants known. The hirudin cDNA has previously been cloned and has been expressed in yeast, but the resulting recombinant protein was found to be produced in the unsulfated form, which is known to have an at least 10 times lower affinity for thrombin than the naturally occurring tyrosine-sulfated hirudin. Here we describe the in vitro tyrosine sulfation of recombinant hirudin by leech and bovine tyrosylprotein sulfotransferase (TPST). With both enzymes, in vitro sulfation of recombinant hirudin occurred at the physiological site (Tyr-63) and rendered the protein biochemically and biologically indistinguishable from natural hirudin. However, leech TPST had an over 20-fold lower apparent Km value for recombinant hirudin than bovine TPST. Further differences in the catalytic properties of leech and bovine TPSTs were observed when synthetic peptides were tested as substrates. Moreover, a synthetic peptide corresponding to the 9 carboxyl-terminal residues of hirudin (which include Tyr-63) was sulfated by leech TPST with a similar apparent Km value as full-length hirudin, indicating that structural determinants residing in the immediate vicinity of Tyr-63 are sufficient for sulfation to occur.

Recombinant proteins are increasingly being used in biology and medicine. Most of these proteins are secretory, and many of them are post-translationally modified. One post-translational modification found in many secretory proteins is tyrosine sulfation, which occurs in the lumen of the trans Golgi and is catalyzed by an integral membrane protein, tyrosylprotein sulfotransferase (TPST) (Baeuerle and Huttner, 1987). When expressed in bacteria and yeast, such recombinant proteins are not tyrosine-sulfated (Riehl-Bellon et al., 1989). This is consistent with the observations that protein tyrosine sulfation, though widespread in metazoan cells, does not occur in prokaryotes and certain lower eukaryotes (Hohmann et al., 1985), presumably because of the lack of TPST. In several cases, tyrosine sulfation has been shown to be of major physiological importance, affecting the biological activity or half-life of specific proteins (Anastasi et al., 1966; Bodanszky et al., 1978; Nachman et al., 1986; Pauwels et al., 1987; Suiko and Liu, 1988; Hortin et al., 1989). A striking example is the anticoagulant hirudin; the desulfated form of hirudin binds to thrombin with a 10- (Stone and Hofsteenge, 1986) to 15-fold (Seemüller et al., 1986; Dodt et al., 1987) lower affinity than the natural, tyrosine-sulfated form produced by the leech Hirudo medicinalis. In addition, a tyrosine-sulfated carboxyl-terminal dodecapeptide of hirudin was found to have a 10-fold higher anticoagulant activity than the unsulfated peptide (Maraganore et al., 1989).
These differences between tyrosine-sulfated and unsulfated hirudin observed in vitro may well translate into a large increase in antithrombotic efficacy when used in vivo, in analogy to previous results with hirudin containing a single amino acid change (Degryse et al., 1989). A hirudin cDNA has recently been cloned (Harvey et al., 1986). The recombinant protein has been expressed in yeast and is available in highly purified form (Loison et al., 1988; Riehl-Bellon et al., 1989). Since hirudin, as one of the most potent anticoagulants known, has great potential in medical therapy (Markwardt, 1970; Markwardt et al., 1984, 1988), it would be desirable to convert the recombinant, unsulfated hirudin, which can be obtained much more easily than hirudin isolated from leeches, to the natural, tyrosine-sulfated form. Here we report on this conversion by using homologous (leech) as well as heterologous (bovine) TPST in vitro. Moreover, we show that leech and bovine TPSTs have distinct catalytic properties.

EXPERIMENTAL PROCEDURES

Thin-layer electrophoresis at pH 1.9 and pH 3.5 was used to separate tyrosine sulfate from serine sulfate and threonine sulfate.

TPST Preparations - All steps were performed at 4 °C.

Leech TPST Preparation - Salivary glands of leeches (H. medicinalis, obtained from Ricarimpex, Audenge, France, or from a local pharmacy) were dissected and homogenized in 4 volumes of 0.3 M sucrose. The homogenate was centrifuged at 800 × g for 10 min and the resulting supernatant at 12,000 × g for 40 min. The membrane pellet was resuspended in 4 volumes of 0.3 M sucrose, layered on top of a 1.3 M sucrose cushion and centrifuged at 36,000 × g for 30 min. The membranes at the interface were collected and subjected to carbonate treatment, which has been found to increase the specific activity of TPST and resulted in TPST assays being linear for several hours (data not shown). For this, 700 µl of interface membranes were incubated for 15 min in 10 ml of 0.1 M Na2CO3/NaHCO3, pH 11.0, 1 M KCl, 0.025% (w/v) saponin, 2 mM EDTA, 0.1 mM phenylmethylsulfonyl fluoride, 1 mM benzamidine, and 1 mM 5-aminocaproic acid and centrifuged for 30 min at 130,000 × g. The membranes were resuspended in 1 ml of 10 mM MES-NaOH, pH 6.5, and 2 mM EDTA, centrifuged for 30 min at 130,000 × g, resuspended in 0.5 ml of 10 mM MES-NaOH, pH 6.5, 2 mM EDTA, and 0.3 M sucrose, and stored at −25 °C.

Bovine TPST Preparation - Membranes prepared from bovine adrenal medulla (as described previously) were used as the source of bovine TPST.

RESULTS AND DISCUSSION

When purified rHV2 was incubated with the sulfate donor [35S]3'-phosphoadenosine 5'-phosphosulfate (PAPS) and a membrane preparation from leech salivary glands enriched in TPST, i.e. the enzyme that physiologically sulfates hirudin, a sulfated product was formed which on HPLC eluted slightly before the peak of unsulfated rHV2 (Fig. 1A). No such product was found when a soluble fraction of leech salivary glands was used as a potential source of TPST or when rHV2 was omitted from the sulfation reaction (not shown). The product of the in vitro sulfation comigrated with natural, tyrosine-sulfated hirudin purified from leeches (Fig. 1A), showing that sulfation alone is sufficient to convert the recombinant protein to a form indistinguishable from natural hirudin. As an alternative to leech TPST, we have tested a heterologous TPST preparation, TPST from bovine adrenal medulla, which has been previously characterized and is available in larger amounts than the leech enzyme.
The rationale behind using this latter enzyme was the previous observation that protein substrate recognition by TPST has been sufficiently conserved during evolution to allow stoichiometric sulfation of an insect protein by mammalian TPST (Friederich et al., 1988). Incubation of purified rHV2 with [35S]PAPS and a membrane preparation enriched in bovine TPST resulted in the formation of sulfated rHV2 which, like the product of the reaction using leech TPST, comigrated with natural hirudin on HPLC (Fig. 1B). To investigate whether the [35S]SO4 in rHV2 was linked to tyrosine, we subjected the in vitro sulfated rHV2 to alkaline hydrolysis, a condition in which tyrosine sulfate is released from proteins (Huttner, 1984). Thin-layer electrophoresis of the hydrolysate showed that, indeed, the radioactivity was recovered as tyrosine sulfate (Fig. 2). Serine sulfate and threonine sulfate were not detected. Hirudin contains 2 tyrosine residues, one at position 3 and one at position 63, 3 residues from the carboxyl terminus (Bagdy et al., 1976). Only tyrosine 63 is sulfated in the leech in vivo (Bagdy et al., 1976). To investigate whether the in vitro sulfation of rHV2 by the leech and the bovine TPST preparations occurred specifically at tyrosine 63, purified [35S]rHV2 was digested with carboxypeptidase Y at pH 5.5, a condition known to selectively release the carboxyl-terminal amino acids (Chang, 1983). Thin-layer electrophoresis of the digests showed that free tyrosine [35S]sulfate was released from [35S]rHV2 sulfated by either TPST preparation (Fig. 3). Serine sulfate and threonine sulfate were not detected (data not shown). To demonstrate that the release of tyrosine sulfate was caused by carboxypeptidase Y itself rather than by a contaminating endoprotease, a control digestion was performed at pH 7.4, a pH at which carboxypeptidase Y is inactive (Hayashi, 1977). No significant quantities of tyrosine [35S]sulfate were released from [35S]rHV2 under these conditions (Fig. 3). Thus, in vitro sulfation of recombinant hirudin by either leech or bovine TPST occurred at tyrosine 63. It has previously been shown that the presence of a sulfate group on tyrosine 63 increases the affinity of hirudin toward thrombin (Stone and Hofsteenge, 1986; Seemüller et al., 1986; Dodt et al., 1987). Since in vitro sulfation of rHV2 occurred specifically at this site, we did not attempt to confirm these observations using the minute amounts of tyrosine-sulfated recombinant hirudin produced under the present in vitro conditions. It was, however, important to ascertain that the conditions of in vitro incubation did not unspecifically impair the biological activity of hirudin. The biological activity of hirudin can be demonstrated qualitatively by its binding to immobilized thrombin (Walsmann, 1981). To determine whether recombinant hirudin retained this property after in vitro incubation, purified [35S]rHV2 was subjected to affinity chromatography on a thrombin-Sepharose column (Table I). All of the applied radioactive rHV2 bound to the column, and 70% could be specifically eluted with the thrombin inhibitor 4-aminobenzamidine. Thus, the in vitro sulfated recombinant hirudin was biologically active. The results described so far show that a recombinant secretory protein can be converted to the physiological form by performing the appropriate post-translational modification, tyrosine sulfation, in vitro.
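A minimal sketch of the bookkeeping behind the carboxypeptidase Y experiments just described: released tyrosine [35S]sulfate is quantified as a percentage of the total 35S radioactivity (released tyrosine sulfate plus remaining [35S]rHV2). The cpm values below are invented placeholders chosen only to mimic the qualitative pH 5.5 versus pH 7.4 contrast, not the paper's measurements.

```python
# Minimal sketch: percent of tyrosine [35S]sulfate released in a digest,
# expressed relative to total 35S (tyrosine sulfate spot + rHV2 spot).
def percent_released(cpm_tyr_sulfate: float, cpm_rhv2: float) -> float:
    total = cpm_tyr_sulfate + cpm_rhv2
    return 100.0 * cpm_tyr_sulfate / total

# Hypothetical digests at pH 5.5 (enzyme active) vs. pH 7.4 (enzyme inactive):
print(f"pH 5.5: {percent_released(1800.0, 400.0):.1f}% released")
print(f"pH 7.4: {percent_released(30.0, 2100.0):.1f}% released")
```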
Although this modification occurs late in the secretory pathway (trans Golgi), this was not necessarily to be expected, since a protein purified from the extracellular medium may not a priori have the same structure as in the trans Golgi. Differences in structure between the trans Golgi and the secreted form of a protein result, for example, from post-translational modifications occurring later in the secretory pathway than tyrosine sulfation, such as proteolytic processing (for review, see Steiner et al., 1984) and oligomerization via disulfide bonds (e.g. von Willebrand factor; for review, see Verweij, 1988). Thus, it could not be excluded that the trans Golgi form of a protein is specifically competent to undergo tyrosine sulfation. However, the present results show that TPST can not only sulfate proteins endogenously present in subcellular Golgi-containing fractions, as shown previously (for review see Huttner and Baeuerle, 1988), but also full-length proteins purified after secretion, implying that complete passage through the eukaryotic secretory pathway is not incompatible with subsequent tyrosine sulfation.

[Fig. 2 legend: [35S]rHV2 sulfated by the leech TPST preparation and purified through HPLC (gradient I, Fig. 1) was subjected to tyrosine sulfate analysis. An autoradiogram of the cellulose thin-layer sheet is shown. The dashed line indicates the position of the tyrosine sulfate (Tyr(S)) standard detected by ninhydrin staining.]

[Fig. 3 legend: [35S]rHV2 obtained from in vitro sulfation reactions with either the leech or the bovine TPST preparation was purified by paper electrophoresis, and aliquots containing 2205 cpm (leech TPST) or 3283 cpm (bovine TPST) were digested with carboxypeptidase Y at pH 5.5 or 7.4. Released tyrosine [35S]sulfate was separated from [35S]rHV2 by electrophoresis and is expressed as percent of total (sum of 35S radioactivity present in the tyrosine [35S]sulfate plus [35S]rHV2 spots).]

The nine carboxyl-terminal amino acid residues of hirudin, which include the tyrosine sulfation site, are of particular relevance for the inhibitory action of hirudin on thrombin, probably by interacting with a noncatalytic domain of thrombin that binds to fibrinogen (Fenton et al., 1988; Noe et al., 1988). Deletion of these residues increases the apparent Ki value of hirudin 10,000-fold (Degryse et al., 1989). Conversely, the 10-12 carboxyl-terminal amino acid residues (in relatively high amounts) are alone sufficient for inhibition of thrombin-mediated clotting (Krstenansky et al., 1987; Mao et al., 1988; Maraganore et al., 1989). It was of interest to investigate whether the structural requirements for the enzymatic tyrosine sulfation of hirudin were also contained in this part of the molecule.

[Table I legend: [35S]rHV2 sulfated by the leech TPST preparation and purified through HPLC (gradient I, Fig. 1) was chromatographed on a thrombin-Sepharose column. The flow-through was collected, the column was washed, and [35S]rHV2 was eluted with 4-aminobenzamidine. The radioactivity recovered in the various fractions is given after subtraction of background.]

For this comparison, we determined the kinetic parameters of leech as well as bovine TPST for these two substrates, full-length hirudin and a synthetic peptide corresponding to its nine carboxyl-terminal residues, Hir-(57-65) (Table II). Sulfation of Hir-(57-65) occurred on tyrosine (data not shown). The apparent Km values of leech and bovine TPST for Hir-(57-65) were in the same range.
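Since the comparison that follows rests on apparent Km and Vmax values (Table II), a minimal sketch of how such parameters are commonly extracted from initial-rate data by a Michaelis-Menten fit may be helpful. The substrate concentrations and rates below are synthetic placeholders, not the paper's measurements, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: least-squares fit of the Michaelis-Menten equation,
# v = Vmax * [S] / (Km + [S]), to synthetic initial-rate data.
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # [substrate], uM (assumed)
rng = np.random.default_rng(0)
v = michaelis_menten(s, vmax=12.0, km=4.0) * rng.normal(1.0, 0.03, s.size)

popt, pcov = curve_fit(michaelis_menten, s, v, p0=(10.0, 5.0))
perr = np.sqrt(np.diag(pcov))
print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}")
print(f"Km   = {popt[1]:.2f} +/- {perr[1]:.2f} uM")
```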
In contrast, only leech TPST had an apparent Km for full-length hirudin that was similar to that for Hir-(57-65), whereas bovine TPST had a 23-fold higher apparent Km for full-length hirudin than for Hir-(57-65) (Table II). These data suggest that the structural information required for the recognition of hirudin by TPST is contained within the nine carboxyl-terminal residues of hirudin, and that the tertiary structure of full-length hirudin does not promote this recognition. Rather, the tertiary structure of full-length hirudin can impose steric hindrance on this recognition, unless the TPST is evolutionarily adapted for this substrate, as appears to be the case for leech, but not bovine, TPST. The different apparent Km values of leech and bovine TPST for full-length hirudin versus Hir-(57-65) suggested that leech and bovine TPST have differential substrate specificities. Differences in catalytic properties between leech and bovine TPST were also observed with respect to Vmax, which was in the same range for full-length hirudin and Hir-(57-65) in the case of leech TPST but differed 20-fold in the case of bovine TPST (Table II). Moreover, further differences between leech and bovine TPST became apparent when we assayed both enzymes with a second synthetic peptide, CCK-(107-115), corresponding to the carboxyl-terminal sulfation sites of preprocholecystokinin (Adrian et al., 1986; Eng et al., 1986). Leech TPST exhibited a 24-fold lower apparent Km value for CCK-(107-115) than for Hir-(57-65), with little change in Vmax, whereas bovine TPST showed a 16-fold higher Vmax for CCK-(107-115) than for Hir-(57-65), with little change in apparent Km (Table II). Thus, leech and bovine TPST are distinct in their catalytic properties toward two synthetic peptides, although both peptides conform to the previously suggested consensus features for tyrosine sulfation (Huttner
2018-04-03T06:18:32.319Z
1990-06-05T00:00:00.000
{ "year": 1990, "sha1": "c3637af0b140357de1d7ed61f889950d51945fc8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(19)38850-7", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "fd6aaefa208fbd3156842b0a5c75e449c901cbb2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
233407617
pes2o/s2orc
v3-fos-license
A unified theory of spin and charge excitations in high-$T_c$ cuprates: Quantitative comparison with experiment and interpretation

We provide a unified interpretation of both paramagnon and plasmon modes in high-$T_c$ copper-oxides, and verify it quantitatively against available resonant inelastic $x$-ray scattering (RIXS) data across the hole-doped phase diagram. A three-dimensional extended Hubbard model, with long-range Coulomb interactions included and doping-independent microscopic parameters for both classes of quantum fluctuations, is used. Collective modes are studied using the VWF+$1/\mathcal{N}_f$ approach, which extends the variational wave function (VWF) scheme by means of an expansion in the inverse number of fermionic flavors ($1/\mathcal{N}_f$). We show that intense paramagnons persist along the anti-nodal line from the underdoped to the overdoped regime and undergo rapid overdamping in the nodal direction. Plasmons exhibit a three-dimensional character, with the minimal energy corresponding to anti-phase oscillations on neighboring $\mathrm{CuO_2}$ planes. The theoretical spin- and charge-excitation energies reproduce semi-quantitatively the RIXS data for $\mathrm{(Bi, Pb)_2 (Sr, La)_2 CuO_{6+\delta}}$. The present VWF+$1/\mathcal{N}_f$ analysis of dynamics and former VWF results for static quantities combine into a consistent description of the principal properties of hole-doped high-$T_c$ cuprates as strongly correlated systems.

Introduction - A profound problem in condensed matter physics is to unveil the microscopic structure of both the single- and many-particle excitations in high-temperature (high-$T_c$) cuprate superconductors (SC) as they evolve from an antiferromagnetic (AF) insulator, through the SC phase, to a Fermi-liquid normal state [1]. At low doping, localized holes coexist with collective spin-wave excitations, which are now well understood within the framework of Heisenberg-type models [2, 3]. Much less is known about the microscopic mechanism governing single- and many-particle excitations at moderate and high doping levels, where no AF or charge order occurs. Itinerant carriers are expected to cause Landau overdamping of spin-wave modes, particularly after AF order has been suppressed. On the contrary, resonant inelastic x-ray scattering (RIXS) and inelastic neutron scattering experiments demonstrate that robust paramagnons persist across the whole hole-doping phase diagram [4-19]. In addition, RIXS provides evidence for low-energy charge modes (acoustic plasmons) in both hole- and electron-doped cuprates [20-25]. Over the years, several distinct high-$T_c$ SC mechanisms, based either on fluctuations (magnetic [17, 26, 27] and charge [28, 29]) or on local correlations [30], have been proposed. In effect, a unified quantitative theory of the equilibrium thermodynamic properties, as well as of the correlated single-particle and collective excitations in high-$T_c$ copper-oxides, is now in demand to single out the microscopic SC pairing scenario. The current theoretical frameworks used to interpret RIXS data for copper-oxides encompass determinant quantum Monte-Carlo (DQMC) [19], the Hubbard-operator large-N limit [31-33], the random-phase approximation (RPA) [7], and spin-wave theory (SWT) [3, 10]. Those have been successful in explaining certain aspects of the experiments, yet none of them provides a unified description of both spin and charge excitations within a single microscopic model with fixed parameters.
Specifically, DQMC yields well-controlled imaginary-time susceptibilities, but suffers from the sign problem and requires analytic continuation of numerical data, which reduces its reliability in regard to dynamics. Moreover, due to lattice-size limitations, DQMC cannot account for the long-range Coulomb repulsion that is considered essential for plasmon physics in high-$T_c$ materials [34]. On the other hand, the Hubbard-operator large-N limit with long-range interactions included reproduces measured plasmon spectra [25], but it is intended for the strong-coupling situation (t-J/t-J-V models) and seems to overestimate correlation effects, such as bandwidth renormalization. This has been compensated by adopting a bare nearest-neighbor hopping scale |t| ≈ 0.5-0.75 eV [25, 33, 35], larger than the accepted values |t| ≈ 0.3-0.4 eV. Also, the Hubbard-operator 1/N expansion does not treat the collective modes on the same footing and privileges charge- over spin excitations [36, 37]. On the other hand, the RPA approach requires adopting an unphysically small on-site repulsion U ∼ 1.5|t| [7, 38]. Finally, accurate fits to the paramagnon spectra are obtained by applying SWT to extended Heisenberg models, including both cyclic and long-range exchange [3, 10]. Yet, SWT disregards charge excitations, and the underlying large-spin approximation yields magnetic order at high doping, in disagreement with experiment. In effect, a consistent theoretical picture of spin and charge dynamics in metallic high-$T_c$ cuprates has not been reached so far. We fill in this gap and reconcile quantitatively both paramagnon and plasmon excitations in hole-doped cuprates within a single microscopic model with realistic and doping-independent microscopic parameters. We start from a three-dimensional Hubbard Hamiltonian, with long-range Coulomb repulsion included, and analyze it using the recently developed VWF+$1/\mathcal{N}_f$ scheme [39, 40] that combines the Variational Wave Function (VWF) approach with an expansion in the inverse number of fermionic flavors ($1/\mathcal{N}_f$). This allows us to account for both spin and charge quantum fluctuations around the correlated ground state on the same footing, which is needed for an unbiased analysis. Explicitly, we show that intense and propagating paramagnons persist in the metallic phase along the anti-nodal (Γ-X) Brillouin-zone (BZ) direction in a wide doping range, but become rapidly overdamped along the nodal (Γ-M) line. This reflects the experimental trends for multiple copper-oxide families [4-19]. Also, plasmons are shown to exhibit a substantial three-dimensional character. The results agree semi-quantitatively with available RIXS data for (Bi, Pb)$_2$(Sr, La)$_2$CuO$_{6+\delta}$. In effect, VWF+$1/\mathcal{N}_f$ emerges as a platform for quantitative interpretation of spectroscopic data for strongly correlated materials, and combines with former equilibrium VWF results [30, 41-45] into a consistent overall description of high-$T_c$ cuprate superconductors.

Model and method - We employ the extended Hubbard model Hamiltonian

$\hat{\mathcal{H}} = \sum_{i \neq j, \sigma} t_{ij}\, \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow} + \frac{1}{2} \sum_{i \neq j} V_{ij}\, \hat{n}_{i} \hat{n}_{j}$,  (1)

where $\hat{c}^{\dagger}_{i\sigma}$ ($\hat{c}_{i\sigma}$) are creation (annihilation) operators on site $i$ for spin $\sigma$, $\hat{n}_{i\sigma} \equiv \hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}$, and $\hat{n}_{i} \equiv \hat{n}_{i\downarrow} + \hat{n}_{i\uparrow}$. The model is defined on a stacked two-dimensional square lattice (200 × 200 × 16 sites) with in-plane spacing a and interlayer distance d (cf. Fig. 1). We adopt standard values of the in-plane nearest-neighbor (n.n.)
and next-nearest hopping integrals, t = −0.35 eV and t′ = 0.25|t|, respectively, and a small out-of-plane one, $t_z$ = −0.01|t|, reflecting the substantial interlayer distance in Bi2201. The on-site Coulomb repulsion is set to U = 6|t|, which is backed by recent estimates of effective U ∼ 6-9|t| [46], and has been adopted in a single-layer model study [39]. The last term accounts for long-range Coulomb repulsion. At large distances, $V_{ij}$ may be obtained as a solution of a discretized Laplace equation [47], yielding in k-space

$V(\mathbf{k}) = \frac{V_c}{\gamma\,[2 - \cos(k_x a) - \cos(k_y a)] + [1 - \cos(k_z d)]}$,  (2)

where $V_c = e^2 d/(2\epsilon_0 \epsilon_\perp a^2)$, $\gamma = (\epsilon_\parallel/\epsilon_\perp)(d/a)^2$, and $\epsilon_\parallel$ ($\epsilon_\perp$) are in-plane (out-of-plane) dielectric constants. We select γ = 10 and $V_c$ = 46|t|, which assumes a dominant lattice-anisotropy effect on γ (see [25]) and yields $\epsilon_\perp$ ≈ 4.66, comparable with the high-energy values $\epsilon_\perp$ ≈ 4-4.5 reported for Bi2201 [48]. Also, the resulting n.n. repulsion V/|t| ≈ 2.03 is consistent with ab initio [49] estimates for related materials (2.36 for HgBa$_2$CuO$_4$ and 2.30 for La$_2$CuO$_4$). Note that the plasmon gap can be estimated as $\Delta_p^2 = 2\hbar^2 V_c n_c/(m^* a^2 \gamma)$, with $m^*$ and $n_c$ being the correlation-renormalized carrier mass and concentration, respectively. The scale of $\Delta_p$ is thus sensitive to $V_c/\gamma$ = 4.6|t|. Hereafter we set the temperature to $k_B T$ = 0.4|t| to stay clear of broken-symmetry [50] states.

The model (1) is solved using the VWF+$1/\mathcal{N}_f$ scheme, which has been elaborated extensively in a methodological paper [40], regarded here as Supplemental Material. In brief, the method is based on the energy functional $E_{\mathrm{var}} \equiv \langle\Psi_{\mathrm{var}}|\hat{\mathcal{H}}|\Psi_{\mathrm{var}}\rangle / \langle\Psi_{\mathrm{var}}|\Psi_{\mathrm{var}}\rangle$, defined in terms of the variational state $|\Psi_{\mathrm{var}}\rangle \equiv \hat{P}_{\mathrm{var}}(\boldsymbol{\lambda})|\Psi_0\rangle$, where $|\Psi_0\rangle$ is an uncorrelated wave function. The operator $\hat{P}_{\mathrm{var}}(\boldsymbol{\lambda})$ adjusts the weights of many-body configurations in $|\Psi_{\mathrm{var}}\rangle$ and depends on a vector composed of variational parameters, λ, subjected to additional constraints [40]. By application of a linked-cluster expansion in real space, $E_{\mathrm{var}} = E_{\mathrm{var}}(\mathbf{P}, \boldsymbol{\lambda})$ becomes a functional of "lines", $P_{i\sigma j\sigma} \equiv \langle\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma}\rangle$, and λ. We use the Statistically-consistent Gutzwiller Approximation (SGA) [51] to truncate the diagrammatic series for $E_{\mathrm{var}}$, which results in the SGA+$1/\mathcal{N}_f$ variant of VWF+$1/\mathcal{N}_f$. We also approximate the long-range part of the Coulomb energy as $V \approx \frac{1}{2}\sum_{i\neq j} V_{ij} \langle\hat{n}_i\rangle\langle\hat{n}_j\rangle$, effectively disregarding non-local lines, which is justified at large distances. As a second step, P → P(τ) and λ → λ(τ) are promoted to (imaginary-time) dynamical fields. Finally, the Euclidean action for P(τ), λ(τ), and other auxiliary fields is constructed and used to generate the dynamical spin- and charge-collective susceptibilities, $\chi_s(\mathbf{k}, i\omega_n)$ and $\chi_c(\mathbf{k}, i\omega_n)$, respectively. Analytic continuation is carried out as $i\omega_n \rightarrow \omega + i0.02|t|$.

To make a comparison with experiment, it is necessary to extract paramagnon energies and their damping rates from the calculated spectra. This is done by damped-harmonic-oscillator (DHO) modeling [52] of the imaginary part of the dynamical spin susceptibility,

$\mathrm{Im}\,\chi_s(\mathbf{k}, \omega) = \frac{A(\mathbf{k})\,\gamma(\mathbf{k})\,\omega}{[\omega^2 - \omega_0^2(\mathbf{k})]^2 + 4\gamma^2(\mathbf{k})\,\omega^2} + \mathrm{Im}\,\chi_{s,\mathrm{in}}(\mathbf{k}, \omega)$,  (3)

where A(k), $\omega_0$(k), and γ(k) denote the amplitude, bare energy, and damping rate, respectively. Crucially, $\omega_0$(k) does not represent the physical paramagnon energy, and it remains non-zero even if magnetic excitations are overdamped. The relevant parameter is thus the real part of the quasiparticle pole, $\omega_p(\mathbf{k}) = \sqrt{\omega_0^2(\mathbf{k}) - \gamma^2(\mathbf{k})}$ if $\omega_0(\mathbf{k}) > \gamma(\mathbf{k})$, and zero otherwise. The last term represents the incoherent part, $\chi_{s,\mathrm{in}}(\mathbf{k}, \omega)$, providing a background to the oscillator peak.
We model $\chi_{s,\mathrm{in}}(\mathbf{k}, \omega)$ as the Lindhard susceptibility (defined as the loop integral, evaluated with Landau quasiparticle Green's functions), multiplied by a linear function of ω to allow for spectral-weight redistribution between the coherent and incoherent parts. Thus, $\chi_{s,\mathrm{in}}(\mathbf{k}, \omega) \equiv [B(\mathbf{k}) + \omega C(\mathbf{k})] \cdot \chi_{0s}(\mathbf{k}, \omega)$, where B(k) ≥ 0 and C(k) are free k-dependent parameters. This form of $\chi_{s,\mathrm{in}}(\mathbf{k}, \omega)$ reflects the fermiology of the underlying correlated electronic system.

Results - Representative least-squares fits of the imaginary parts of the SGA+$1/\mathcal{N}_f$ susceptibilities, over the energy range approximately encompassing non-zero values of $\chi_s(\mathbf{k}, \omega)$, are displayed in the left panels of Fig. 2. Blue circles in (a), (c), (e) and (g) represent the calculated $\chi_s(\mathbf{k}, \omega)$ for k = (0.5, 0, 0), (0.5, 0.5, 0), (0.03, 0, 0), and (0.03, 0.03, 0), respectively. The green and blue regions are the incoherent and harmonic parts, respectively. The red line marks the sum of the two, reproducing faithfully the SGA+$1/\mathcal{N}_f$ result. For completeness, black dotted lines depict the Lindhard susceptibility, the character of which varies across the BZ. A substantial directional anisotropy of the spin dynamics is apparent, with coherent oscillator peaks appearing only along the anti-nodal line. In the right panels, the corresponding charge response, $\chi_c(\mathbf{k}, \omega)$, is plotted. A clear distinction between the incoherent part and the plasmon peak should be noted for the experimentally relevant regime of small in-plane momentum transfers; hence we identify the plasmon energy with the peak position in $\chi_c(\mathbf{k}, \omega)$.

We now proceed to a unified quantitative analysis of both paramagnon and plasmon dynamics in (Bi, Pb)$_2$(Sr, La)$_2$CuO$_{6+\delta}$. In Fig. 3(a)-(c) we show the SGA+$1/\mathcal{N}_f$ (red solid lines) and RIXS [19] (solid circles) paramagnon propagation energies, $\omega_p(\mathbf{k})$. Color maps represent the imaginary part of the dynamical spin susceptibility, with blue and white colors mapping to low- and high-intensity regions, respectively. The agreement between theory and experiment is semi-quantitative for all doping levels, with the exception of the Γ-M direction for δ = 0.11. In the latter case, SGA+$1/\mathcal{N}_f$ yields overdamped magnetic dynamics ($\omega_p(\mathbf{k})$ = 0), whereas the RIXS data correspond to a substantially damped, but still resonant response. A significant anisotropy between the nodal (Γ-M) and anti-nodal (Γ-X) directions is consistently observed in both the SGA+$1/\mathcal{N}_f$ and experimental data. Namely, the anti-nodal paramagnons persist across the entire doping range, but become rapidly overdamped with increasing doping along the nodal line. We note that a comparable agreement with RIXS paramagnon spectra has been recently achieved within a single-layer model [39]. This points towards a predominantly two-dimensional character of spin excitations, which is also supported by the investigation of the static response, detailed below. In Fig. 3(d)-(f) and 3(g)-(i), we carry out an analysis of the underlying bare paramagnon energies and damping, $\omega_0(\mathbf{k})$ and $\gamma(\mathbf{k})$, respectively. Solid lines mark the parameters extracted from the SGA+$1/\mathcal{N}_f$ spin susceptibilities with the use of model (3), whereas solid circles are RIXS data of Ref. [19], processed in an analogous manner. The overall agreement of both quantities with experiment is semi-quantitative across the phase diagram, with the exception of the Γ-M line in the underdoped case, where SGA+$1/\mathcal{N}_f$ yields larger damping rates, and close to the Γ point for the overdoped situation.
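As a minimal numerical sketch of the extraction step used above - assuming the damped-harmonic-oscillator line shape of Eq. (3) with the incoherent background omitted, and synthetic data in place of the SGA+$1/\mathcal{N}_f$ spectra - the following fit recovers $\omega_0$, γ, and the propagation energy $\omega_p = \sqrt{\omega_0^2 - \gamma^2}$.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit the oscillator part of Im chi_s(k, omega) and recover
# the pole energy omega_p. Energies are in units of |t|; the "data" are
# synthetic, generated from the same line shape plus small noise.
def dho(w, amp, w0, gamma):
    return amp * gamma * w / ((w**2 - w0**2) ** 2 + 4.0 * gamma**2 * w**2)

w = np.linspace(0.01, 2.0, 400)
rng = np.random.default_rng(1)
chi = dho(w, amp=1.0, w0=0.8, gamma=0.3) + rng.normal(0.0, 0.002, w.size)

popt, _ = curve_fit(dho, w, chi, p0=(1.0, 0.7, 0.2))
amp, w0, gamma = popt
wp = np.sqrt(w0**2 - gamma**2) if w0 > gamma else 0.0   # zero if overdamped
print(f"omega_0 = {w0:.3f}|t|, gamma = {gamma:.3f}|t|, omega_p = {wp:.3f}|t|")
```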
We turn next to the discussion of charge excitations, for the same model parameters as those used to generate Fig. 3.

[Fig. 3 caption: Panels (a)-(c) show the propagation energy, $\omega_p$, as obtained from the SGA+$1/\mathcal{N}_f$ approach (red solid lines) and experiment [19] (circles) at three hole-doping levels, δ = 0.21, δ = 0.16, and δ = 0.11. The color maps represent the calculated imaginary part of the dynamical spin susceptibility, ranging from blue (low intensity) to white (high intensity). Panels (d)-(f) and (g)-(i) show the bare paramagnon energies and damping rate ($\omega_0$ and γ, respectively). Lines and circles are SGA+$1/\mathcal{N}_f$ results and RIXS data [19], respectively.]

The wave-vector transfers are hereafter represented as $\mathbf{k} = (h\,\frac{2\pi}{a}, 0, l\,\frac{2\pi}{c})$, with c taken as 2d to account for two primitive cells in a crystallographic cell [53]. In Fig. 4(a), the calculated plasmon energies as a function of l are displayed for h = 0.03 (blue line) and h = 0.05 (red line), and compared with the corresponding RIXS data [25] for Bi$_2$Sr$_{1.6}$La$_{0.4}$CuO$_{6+\delta}$. For reference, in panel (b) we show the raw imaginary part of the SGA+$1/\mathcal{N}_f$ charge susceptibility for h = 0.03, used to obtain the theoretical dispersion curve [blue line in (a)]. Panel (c) exhibits the in-plane plasmon dispersion relations for two fixed values of the out-of-plane wave-vector transfer, l = 1.5 and l = 1.75, as a function of h. In panel (d), we display the unprocessed $\chi_c$ for l = 1.5. The agreement between theory and experiment is quantitative along all BZ contours. As is seen in Fig. 4(a), plasmon modes disperse strongly along the out-of-plane direction, with the minimum energy for l = 1, corresponding to anti-phase charge fluctuations on neighboring CuO$_2$ planes.

For the sake of completeness, we also examine the stability of the paramagnetic metallic state against fluctuations. As shown in Fig. 5, the static response depends only weakly on doping, indicating that the system stays clear of charge-density-wave (CDW) order in the considered temperature range. As is seen in Fig. 5, spin fluctuations are two-dimensional, with barely distinguishable Γ-X-M-Γ and Z-R-A-Z profiles, whereas the charge response exhibits qualitatively distinct behavior around the Γ and Z points. Our results support the physical picture of at most moderate screening of the non-local Coulomb interaction, so that the plasmon excitations are influenced by its algebraic tail. On the other hand, the paramagnons are weakly affected by the non-local terms. The three-dimensional extension of the Hubbard model with inclusion of the long-range interactions is thus required primarily to describe the charge excitations quantitatively. The impact of those terms on equilibrium properties has been discussed elsewhere [45].

Outlook - We have carried out a quantitative analysis of collective spin and charge excitations in a microscopic model of high-$T_c$ copper-oxides. Those modes are present in a wide temperature and doping range and, in particular, in the regime where no long-range spin-density-wave or CDW order occurs. The principal difficulty in describing them is due to the strongly-correlated character of the underlying electronic states. This circumstance necessitates a generalization of the Moriya-Hertz-Millis-type approach to incorporate fluctuations around a nonstandard reference state and to go systematically beyond the renormalized mean-field theory (RMFT) [40].
The dynamical effects are included by a $1/\mathcal{N}_f$ expansion around the variationally-determined saddle-point solution, reproducing the experimental data semi-quantitatively within a single scheme and for once-fixed microscopic parameters [cf. Figs. 3(a)-(c) and 4(a)-(b)]. In conjunction with the former comprehensive VWF analysis of the static and single-particle properties of high-$T_c$ cuprates [30, 41-45], encompassing SC/CDW phases, Fermi velocity/wave-vector, quasiparticle masses, and the kinetic energy gain at the SC transition, we arrive here at a consistent semi-quantitative description of both static and collective dynamic properties of hole-doped high-$T_c$ materials. Those aspects should be studied further within a more realistic three-band model of high-$T_c$ SC, either in the Hubbard or t-J-U-V form [30]. The questions untouched here comprise pseudogap formation and the temperature dependence of the electrical resistivity, when the quantum fluctuations are tackled explicitly along the lines presented here. This requires supplementing the present approach with calculations of the single-particle self-energy and subleading fluctuation free-energy corrections, all in a fully self-consistent manner. Such a task poses a substantial challenge. Finally, within the strong-correlation picture, both the real-space pairing and the AF correlations in the cuprates share the same source: the kinetic exchange interaction $\propto \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j - \frac{1}{4}\hat{n}_i\hat{n}_j$, which may be equivalently expressed in terms of singlet pairing operators $\hat{b}^{\dagger}_{ij} \equiv \frac{1}{\sqrt{2}}(\hat{c}^{\dagger}_{i\uparrow}\hat{c}^{\dagger}_{j\downarrow} - \hat{c}^{\dagger}_{i\downarrow}\hat{c}^{\dagger}_{j\uparrow})$ [54]. The paramagnetic ground state considered here is also a spin singlet, and the elementary paramagnon excitations are associated with singlet-triplet (S = 0 to S = 1) transitions. Their robustness in hole-doped cuprates thus also indirectly supports the exchange-driven real-space pairing viewpoint, calling for an extension of the VWF+$1/\mathcal{N}_f$ approach to incorporate the SC state. This requires accounting for the SC gap fluctuations through the anomalous lines, $S_{i\sigma j\bar{\sigma}} \equiv \langle\hat{c}_{i\sigma}\hat{c}_{j\bar{\sigma}}\rangle$ [30, 43, 55], introducing additional complexity to the problem, and should be treated separately.

Acknowledgments - This work was supported by Grant OPUS No. UMO-2018/29/B/ST3/02646 from Narodowe Centrum Nauki and by a grant from the SciMat Priority Research Area under the Strategic Programme Excellence Initiative at the Jagiellonian University.
2021-04-28T01:16:17.192Z
2021-04-26T00:00:00.000
{ "year": 2021, "sha1": "7590a2dae2fe1d0714406a71924ab59ed8ff3fa0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2104.12812", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7590a2dae2fe1d0714406a71924ab59ed8ff3fa0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237551578
pes2o/s2orc
v3-fos-license
Identification of C3H2C3-type RING E3 ubiquitin ligase in grapevine and characterization of drought resistance function of VyRCHC114

Background: RING is one of the largest E3 ubiquitin ligase families, and the C3H2C3 type is the largest subfamily of RING, which plays an important role in plant growth and development and in responses to biotic and abiotic stresses.

Results: A total of 143 RING C3H2C3-type genes (RCHCs) were discovered in the grapevine genome and separated into groups (I-XI) according to their phylogenetic analysis; these genes were named according to their positions on the chromosomes. Gene replication analysis showed that tandem duplications played a predominant role in the expansion of the VvRCHC family. Structural analysis showed that most VvRCHCs (67.13%) had no more than 2 introns, while genes clustered together in the phylogenetic tree had similar motifs and evolutionarily conserved structures. Cis-acting element analysis showed the diversity of VvRCHC regulation. The expression profiles of eight DEGs in RNA-Seq after drought stress were consistent with the results of qRT-PCR analysis. An in vitro ubiquitination assay showed that VyRCHC114 has E3 ubiquitin ligase activity, and overexpression of VyRCHC114 in Arabidopsis improved drought tolerance. Moreover, the transgenic plant survival rate increased by 30%, accompanied by changes in electrolyte leakage, chlorophyll content and the activities of SOD, POD, APX and CAT. The quantitative expression of AtCOR15a, AtRD29A, AtERD15 and AtP5CS1 showed that their participation in the response to drought stress may be regulated by the expression of VyRCHC114.

Conclusions: This study provides valuable new information on the evolution of grapevine RCHCs and its relevance for studying the functional characteristics of the grapevine VyRCHC114 gene under drought stress.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12870-021-03162-8.

Background

To survive in a changing environment, post-translational modification of proteins often occurs when plants perceive and transmit internal or external signals. Acetylation, methylation, phosphorylation, and ubiquitination of proteins are the main types of post-translational modification, which play key roles in different plant developmental stages and in plant-environment interactions. The process by which intracellular proteins are classified under the action of a variety of specialized enzymes, and the screened target proteins are specifically modified, is called ubiquitination [1]. In eukaryotic cells, the ubiquitin system is complex, mainly involving ubiquitin (a small-molecule protein), the intact 26S proteasome, ubiquitin-activating enzyme (E1), ubiquitin-conjugating enzyme (E2), and ubiquitin ligase (E3) [2]. Inactive ubiquitin is first activated by E1 in an ATP-dependent reaction, through the thioester bond formed between the C-terminus of ubiquitin and a cysteine residue of E1; the ubiquitin attached to E1 is then transferred to the active-site cysteine of E2. In the next step, the ubiquitin linked to E2 is transferred directly or indirectly to a lysine residue of the target protein via E3. It is noteworthy that E3 ubiquitin ligase is the main factor determining the binding of the specific target protein during the ubiquitination process [3]; it can repeatedly add ubiquitin to the substrate protein, so that the target protein is degraded by the 26S proteasome [4].
E3 ubiquitin ligases can be divided into 9 categories based on specific conserved domains: RING, HECT, U-box, F-box, cullin, BTB, DDB, RBX and SKP. RING E3 ligase proteins have a conserved RING domain, which can provide residence sites for E2 and specific substrates and enable E2-bound ubiquitin molecules to transfer to the host protein, thus completing the ubiquitination process. In the RING domain, there are eight conserved amino acids (Cys or His) located in the center of the three-dimensional protein structure, which can combine with two zinc ions to help stabilize the entire structure. According to the types of conserved amino acid sites, RING proteins are divided into different subfamilies. Among them, RING C3H2C3 is the largest subfamily. The RING conserved domain sequence of the members of this family is Cys-X2-Cys-X(9-39)-Cys-X(1-3)-His-X(2-3)-His-X2-Cys-X(4-48)-Cys-X2-Cys, where X is any amino acid.

In recent years, a growing number of studies have shown that RING E3 ligase genes also figure prominently in the abiotic stress responses of plants [5]. SpRing is a RING-type E3 ubiquitin ligase located in the endoplasmic reticulum that participates in salt stress signal transmission in the wild tomato variety Solanum pimpinellifolium 'PI365967'. When SpRing was silenced by virus-induced gene silencing, the sensitivity of wild tomato to salt stress increased, while overexpression of SpRing in Arabidopsis improved salt tolerance [6]. SDIR1 (SALT AND DROUGHT-INDUCED REALLY INTERESTING NEW GENE FINGER1) is a RING-type E3 ubiquitin ligase that regulates the salt stress response and ABA signaling in Arabidopsis by degrading the target protein SDIRIP1 (SDIR1-INTERACTING PROTEIN1). The downstream transcription factor ABI5 (ABA-INSENSITIVE5) is regulated by SDIRIP1, and overexpression of ABI5 increases salt tolerance [7]. The E3 ubiquitin ligase OsHTAS (Oryza sativa HEAT TOLERANCE AT SEEDLING STAGE) regulates the stomatal opening state of leaves by regulating ROS homeostasis, thus improving the basal heat resistance of leaves; this involves two pathways, including an ABA-mediated one [8]. In Arabidopsis, CHYR1 (CHY ZINC-FINGER AND RING PROTEIN1) encodes a RING-type E3 ubiquitin ligase that interacts with the SNF1-related protein kinase 2 (SnRK2) and can be phosphorylated by SnRK2.6 on its Thr-178 residue. When mediated by ABA, CHYR1 promotes the production of reactive oxygen species (ROS), stomatal closure, and drought tolerance in plants [9]. The Capsicum annuum RING-type E3 ubiquitin ligase CaAIRF1 (Capsicum annuum ADIP1 INTERACTING RING FINGER PROTEIN1) can interact with the protein phosphatase CaADIP1 and positively regulate the ABA signaling pathway to improve drought tolerance [10]. In Zea mays, ZmXerico1 encodes a RING-type E3 ligase that can regulate the stability of the ABA 8'-hydroxylase protein and thereby control the dynamic balance of ABA; hence, expression of ZmXerico1 endows maize plants with ABA sensitivity and improves their water use efficiency under drought stress [11]. Furthermore, Arabidopsis AtAIRP1, AtAIRP2 and AtAIRP3, and CaAIR1, each encode an E3 ubiquitin ligase that regulates drought responses by modulating ABA signal transduction; the expression of these genes increases ABA-mediated stomatal closure [12-15]. Collectively, the above studies show that E3 ligases play a crucial role in the response to abiotic stress. Grapevine (Vitis vinifera L.)
is a major cash crop, whose cultivated varieties have a total worldwide output of nearly 70 million tons of fruit berries from more than 7 million hectares of harvested land [16]. This plant is mainly grown to produce table grapes, fruit juices, and wine [17]. Most grapevine-producing areas in the world incur seasonal droughts, and according to global climate modeling, droughts will intensify in the near future. Drought can adversely affect the growth and development of grapevines: under drought stress the concentration of cytokinin in grape stems decreases, and vegetative and reproductive growth are inhibited [18]. When grapevines are in full bloom, drought stress also affects the pollination process, which decreases the fruit setting rate and reduces the size of the individual fruit berries produced [19]. With worsening water shortages, drought stress is likely to become a key factor impacting grapevine and wine production worldwide [20]. Therefore, studying the drought resistance of wild grapevine plants is of great significance to grapevine production and breeding, as it could uncover the molecular mechanisms enabling them to withstand drought. Vitis yeshanensis is a wild grapevine native to arid areas of China, whose morphological characteristics indicate adaptability to arid environments in many aspects [21]. Several studies have shown that wild Vitis yeshanensis has stronger drought resistance than other cultivars [22, 23].

The RING-type gene family has been found in more and more plant species, and its importance in plant stress responses and in growth and development has been recognized, but RING-type genes have not been fully identified in grapevine. It has been reported that RING-type E3 ubiquitin ligases are involved in grapevine stress responses and growth, but few studies have investigated the involvement of E3 ubiquitin ligases in regulating the grapevine response to drought stress. We hypothesize that RCHC proteins may mediate the ubiquitination of key factors during grapevine drought stress and thereby regulate plant drought resistance. This study aimed to characterize the RING-type E3 ubiquitin ligases in the grapevine genome and their relevance to drought stress. Genome-wide identification of C3H2C3 genes, the largest subfamily of the grapevine RING type, was carried out, coupled to their phylogenetic analysis, gene structure analysis, chromosome mapping, gene replication analysis, and cis-acting element analysis of the gene promoter regions. We also quantified the expression levels of these genes under simulated drought treatment: 136 RCHC genes were found to be expressed, of which 52 were DEGs, with 8 DEGs differentially expressed at no fewer than 3 stages. The expression of the VyRCHC114 gene was confirmed by RT-qPCR, and the ubiquitin ligase activity of its product was then verified. The function of the gene under drought conditions was elucidated using transgenic Arabidopsis plants. Our study provides an important basis for the involvement of RCHC proteins in the regulation of grapevine ubiquitination under drought stress.

Results

Genome-wide identification of RING C3H2C3-type finger proteins in grapevine

The results of the Hidden Markov Model (HMM) search were analyzed, and the retrieved gene sequences were submitted to SMART, CDD, and Pfam for domain authentication. From this, 143 VvRCHC genes were obtained by aligning and screening for genes with the eight conserved metal ligands, and the aligned members were retained. The physicochemical properties of the 143 VvRCHCs were determined (Table 1).
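A minimal sketch (assuming Biopython is available) of how the per-protein physicochemical properties reported in Table 1 - length, molecular weight, and isoelectric point - can be computed; the sequence below is an invented placeholder, not an actual VvRCHC protein.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Minimal sketch: compute length, molecular weight and isoelectric point for
# one protein sequence, as tabulated per gene in Table 1-style summaries.
seq = "MASTCPICLEEFRDGDKVRILPCHHKFHSECIDQWLRSHSTCPLCRQPLA"  # placeholder sequence
prot = ProteinAnalysis(seq)

print(f"length: {len(seq)} aa")
print(f"molecular weight: {prot.molecular_weight() / 1000.0:.2f} kDa")
print(f"isoelectric point: {prot.isoelectric_point():.2f}")
```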
The number of amino acids encoded by the 143 VvRCHCs ranged from 70 (VvRCHC50) to 763 (VvRCHC98). The molecular weights of their products varied from 7.83 kDa to 83.58 kDa, while their isoelectric points varied from 3.88 to 9.95.

Analysis of the C3H2C3 domain of VvRCHCs

The typical RING domain is considered to consist of eight metal-binding cysteine and histidine residues, which chelate two zinc ions in a cross-brace structure, in which metal-ligand pairs 1 and 3 bind one zinc ion and pairs 2 and 4 bind the other. This structure requires a certain distance between adjacent metal ligands, which is variable between ml2~ml3 and ml6~ml7. We calculated statistics for the distances between adjacent metal ligands (Table S2). It was found that, except for those between ml2~ml3 and ml6~ml7, the distances between metal ligands were constant: the stretch from ml2 to ml3 spanned 11 to 24 amino acids, and that from ml6 to ml7 varied from 8 to 14 amino acids. In the C3H2C3 domains of the 143 VvRCHCs there are two amino acids between ml1~ml2 and between ml5~ml6, while ml3~ml4 contains one amino acid, and ml7~ml8 and ml4~ml5 each contain two amino acids. To understand whether these RING C3H2C3 structural domains are conserved apart from their eight special metal ligands, a comparative analysis was conducted (Fig. S1). This revealed that some amino acids in the RING C3H2C3 structural domain show a typical positional bias (Fig. 1a). In the C3H2C3-type RING region, the amino acid residue located in front of ml2 is most commonly Ile (I) or Val (V); likewise, a phenylalanine (Phe, F) residue is typically found before ml5, a leucine (Leu, L) residue always lies next to ml2, an aspartic acid (Asp, D) residue is usually positioned after ml6, and a tryptophan (Trp, W) residue is usually the fourth residue following ml6. Notably, a highly conserved proline (Pro, P) was found after ml7. According to the schematic diagram of the RING-type C3H2C3 domain, two pairs of metal ligands bind to each zinc ion (Fig. 1b). The total amino acid length of the C3H2C3 domain of each VvRCHC gene and the corresponding number of genes with each length were calculated: the vast majority were 41 or 42 amino acids long, accounting for 88.8% of all genes (Fig. 1c).

Phylogenetic analysis of VvRCHCs

To infer the evolutionary relationships of the VvRCHCs, a phylogenetic tree of the RCHC protein sequences of Arabidopsis, tomato, and grapevine was constructed (using the Maximum Likelihood method). According to this phylogenetic analysis, the 180 genes can be divided into 6 subgroups, I~VI (Fig. 2). Group I has the fewest members, only 12, and the largest group is group III.

Characterization of the motifs and gene structure of VvRCHCs

To further understand the diversity in motif composition among VvRCHCs, MEME analysis of the VvRCHC proteins from groups I to VI was carried out. From this, 12 conserved motifs were identified in the VvRCHC proteins, named motif 1 to motif 12 (Fig. 3b), of which motif 1 and motif 2 are found in almost every VvRCHC; together they form the eight most important metal ligands (Cys-Cys-Cys-His-His-Cys-Cys-Cys) of every VvRCHC gene. Importantly, 13 additional domain structures were found in some genes, such as PA, CUE, DUF1117, zinc_ribbon_9, and zf-CHY, among others. These domains could be relevant for the function of the VvRCHCs. The sequence information of motifs 1~12 is presented in Table 2 and Fig. 3d (motif data).
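A minimal sketch of the domain screening and ligand-spacing statistics described above: a regular expression encoding the C3H2C3 consensus (Cys-X2-Cys-X(9-39)-Cys-X(1-3)-His-X(2-3)-His-X2-Cys-X(4-48)-Cys-X2-Cys) locates the domain in a protein sequence and reports the inter-ligand spacings of the kind tabulated in Table S2. The test sequence is an invented placeholder, not a real VvRCHC protein.

```python
import re

# Minimal sketch: match the C3H2C3-type RING consensus and report the
# spacings between the eight metal ligands (regex groups = inter-ligand runs).
C3H2C3 = re.compile(
    r"C(.{2})C(.{9,39}?)C(.{1,3})H(.{2,3})H(.{2})C(.{4,48}?)C(.{2})C"
)

seq = "MKRLVESTCPICLDMLKDPVSCKAHSFHQACITALQSSCPVCRQSL"  # placeholder sequence
m = C3H2C3.search(seq)
if m:
    spacings = [len(g) for g in m.groups()]
    print(f"domain at residues {m.start()}-{m.end() - 1}")
    print(f"ml1-ml2 ... ml7-ml8 spacings: {spacings}")
else:
    print("no C3H2C3 domain found")
```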
We next analyzed the exons, introns, and several key structures of the VvRCHCs (Fig. 3c). Most VvRCHCs (67.13%) had no more than 2 introns, with a maximum of 19 introns in VvRCHC29 and no introns in 57 VvRCHCs (Fig. S3). The longest intron was found in VvRCHC141. According to the phylogenetic analysis of the VvRCHCs (Fig. 3a), 45 pairs of genes can be found in the evolutionary tree. The results of the MEME and gene structure analyses of these gene pairs were also similar (Fig. 3b and c). For example, the conserved motifs in the protein sequences of VvRCHC44/64 are highly similar, and the structure type and length are also similar, as for VvRCHC94/96, VvRCHC38/97, VvRCHC18/78, VvRCHC28/67 and VvRCHC11/107, to name a few. Unexpectedly, the MEME analysis of the VvRCHC55/127, VvRCHC105/133, and VvRCHC13/116 gene pairs gave near-identical results to those from the gene structure analysis, revealing remarkably similar protein sequence lengths, gene structure lengths and intron numbers among them. We thus speculate that these gene pairs may perform similar functions in grapevine plants.

[Fig. 3 caption: Phylogenetic tree, gene domain, and structure analysis of VvRCHCs in grapevine. a The phylogenetic tree of VvRCHCs was constructed using the ML method. Different background colors represent different grouping branches. b Domain analysis of VvRCHC proteins. At the bottom of the line, different colored squares represent different types of conserved amino acid sequences based on MEME analysis. The modules of different colors above the line represent the functional domains that have been identified. c Genetic structure of VvRCHCs; the CDS sequence is represented by a blue square/rectangle, the introns by black lines.]

Chromosomal localization and gene replication analysis of VvRCHCs

According to their locations in the grapevine genome, the 143 VvRCHCs were placed on 20 chromosomes (Fig. 4a), albeit unevenly distributed among them. VvRCHCs were found on every grapevine chromosome, but their number varied: chromosome 11 carried the most (12 VvRCHCs), while chromosomes 1, 7, 13 and 18 each carried 11. Further, we also observed that most of these VvRCHCs are distributed at both ends of the chromosomes, leaving only a small portion in the middle regions. Gene replication events include tandem replication and segmental replication, which are vital for expanding the number of members of a gene family. To clarify the amplification mechanism of the VvRCHCs during their evolution, we studied potential duplication events among them. According to the intraspecific alignment of the 143 VvRCHCs, 9 pairs of genes, 7 and 2 respectively, were identified as associated with tandem or segmental replication events. Among the 9 gene pairs, the tandem repeat frequency on chromosome 1 was the highest, with six tandem replication events; moreover, one pair of genes on chromosome 3 was also identified as tandemly replicated. These results suggest that the main replication mode of the grapevine VvRCHC family is tandem replication; hence, it could have played a crucial role in the amplification of the VvRCHCs during their evolutionary history. To explore the selection acting on the grapevine VvRCHCs during their repetition and differentiation, the nonsynonymous (Ka) and synonymous (Ks) substitution rates and the Ka/Ks ratio of each duplicated VvRCHC pair were calculated.
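A minimal sketch of the Ka/Ks bookkeeping underlying the selection analysis reported next. The Ka and Ks values below are invented placeholders (in practice they come from codon-aware alignments of each duplicate pair), and the conventional Ka/Ks < 1 criterion for purifying (negative) selection is used.

```python
# Minimal sketch: classify selection pressure on duplicated gene pairs from
# precomputed Ka (nonsynonymous) and Ks (synonymous) substitution rates.
pairs = {
    "pair-1": (0.12, 0.55),   # (Ka, Ks), hypothetical values
    "pair-2": (0.40, 0.50),
    "pair-3": (0.60, 0.48),
}

for pair, (ka, ks) in pairs.items():
    ratio = ka / ks
    if ratio < 1.0:
        mode = "purifying (negative) selection"
    elif ratio > 1.0:
        mode = "positive selection"
    else:
        mode = "neutral evolution"
    print(f"{pair}: Ka/Ks = {ratio:.3f} -> {mode}")
```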
Among the 9 pairs of repetitive genes in grapevine, the Ka/Ks value of one pair exceeded 0.5, while the average Ka/Ks value was 0.325. It is worth noting that the other 8 pairs had Ka/Ks values less than 0.5, indicating that most of the duplicated grapevine VvRCHCs were under negative selection during evolution (Table 3). Figure 4b shows that grapevine, Arabidopsis, and tomato all retained similar RCHC genes in their evolutionary history. It is worth noting the absence of homologs of VvRCHC29 in tomato but their presence in Arabidopsis, which may have arisen from gene deletion in the course of evolution; the same holds for VvRCHC11, VvRCHC38, VvRCHC107, VvRCHC119, and VvRCHC137. Nonetheless, two or more RCHC genes in Arabidopsis and tomato were found to be homologous to a single VvRCHC gene; for example, VvRCHC89 corresponds to Solyc07g053850.3/Solyc12g005470.2 and AT4G28370/AT2G20650, and the same applies to VvRCHC1, VvRCHC32, VvRCHC97, VvRCHC104, VvRCHC118, and VvRCHC142. Hence, these genes may be paralogous gene pairs and the putative source of amplification of RCHC genes during evolution.

Cis-acting element analysis in VvRCHC promoters

To further investigate the transcriptional regulation of the VvRCHCs, the cis-acting elements in the 2000 bp region upstream of each VvRCHC start codon were predicted. The predicted cis-acting elements can be divided into seven categories according to their functions: namely, light response (32), hormone response (11), growth and development response (9), stress response (6), promoter-enhancing (6), binding site (6), and other functional (2) cis-acting elements. Most promoters of the grapevine VvRCHCs contained the CAAT-box or TATA-box, which belong to the promoter-enhancing cis-acting elements. In addition, 127 VvRCHC promoters harbored the stress response element ARE, more than half of the VvRCHC promoters contained the hormone response elements ABRE, TGACG-motif, and CGTCA-motif, and over half of the VvRCHCs also featured the G-box, GT1-motif, and Box 4 in their promoters (Table S3). In the 2000 bp region upstream of the VvRCHCs, many cis-elements with different functions were discovered; in addition to the common light-responsive and promoter-enhancing elements, numerous growth- and stress-related elements were also found, suggesting that the VvRCHCs may participate widely in various plant life activities. It is known that RING genes play a key role in plant growth and in responses to abiotic stresses. Accordingly, the cis-acting elements related to abiotic stress, growth, and hormone regulation were focused upon here. The respective locations on the VvRCHC promoters of the five major types of elements of our concern, associated with hormone response, binding sites, growth and development, and stress, were determined (Fig. S4a). To accurately identify the stress-related elements, we focused on several kinds (anaerobic induction, injury response, low-temperature response, drought response, and defense and stress response), whose locations are also depicted (Fig. S4b). In addition, we counted the number of major elements related to stress, growth and development, and hormone responses in the VvRCHC gene promoters (Fig. S4b). Evidently, concerning growth and development, the O2-site is the most abundant element, with 5 copies in the promoter of VvRCHC6 and 4 in that of VvRCHC40.
In terms of stress, ARE is the most abundant element, found in 89 % of the VvRCHC promoters, with up to 5 copies in the promoters of VvRCHC14 and VvRCHC81. In terms of hormone response, ABRE is dominant, found in 64 % of the VvRCHC promoters, with up to 9 copies in the promoters of VvRCHC3 and VvRCHC16; surprisingly, 22 such elements were found in the VvRCHC74 promoter and 11 in the VvRCHC128 promoter. These results suggest that the VvRCHCs may be associated with cis-acting elements of different functions; in other words, these genes may be regulated by these elements and thereby influence related plant life activities.

Expression analysis of VvRCHCs in roots of two grapevine rootstocks with different drought sensitivity

To investigate differential VvRCHC expression under drought stress between plants with contrasting drought resistance (the genotypes 101.14 vs. M4) and the genes' potential functioning, the grapevine RNA-Seq transcriptome database of a published dataset was used [24]. We checked the expression of the 143 VvRCHCs; of these, a total of 136 VvRCHCs showed detectable expression. To understand the expression of these VvRCHCs under the drought treatment, we used the ratio of WS (Water Stress) to WW (Well-Watered) gene expression of the two genotypes to draw an expression heat map, with expression values reported as log2(WS/WW) fold change (Fig. 5a); the fold-change matrix of these VvRCHCs is recorded in Table S4. Overall, more than 60 % of the VvRCHCs in the two genotypes were differentially expressed under the imposed drought. To screen out the key genes, in each time period of the treatment a gene satisfying |log2(WS/WW)| > 1 was considered a differential gene, and a Venn diagram was made from the differentially screened genes of the drought-tolerant genotype M4 at the different times (Fig. 5b); a minimal sketch of this screening step is given below. Intersecting the differential genes across periods finally yielded 8 genes that were differential in three periods. To robustly verify the gene expression levels, the expression patterns of these 8 genes were checked by qRT-PCR (Fig. 6), and they basically conformed to the trend shown in Fig. 5c. The VyRCHC114 gene was significantly down-regulated at 2 days, with a strong downward trend over the drought treatment. The VyRCHC66, VyRCHC68, VyRCHC69 and VyRCHC95 genes had a similar expression trend, being slightly up-regulated at 2 days of drought but strongly down-regulated thereafter. These results suggested that these eight key genes are probably involved in regulating the plant response to drought.
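The screening criterion above reduces to a simple filter followed by a set intersection. A minimal sketch, assuming expression tables with per-gene WS and WW values at each time point (gene names and values below are hypothetical, not taken from Table S4):

```python
import math

# Hypothetical (WS, WW) expression values per gene for three time points.
expr = {
    "T1": {"VvRCHC_X": (2.1, 9.0), "VvRCHC_Y": (5.0, 5.2), "VvRCHC_Z": (0.5, 2.4)},
    "T2": {"VvRCHC_X": (1.0, 8.0), "VvRCHC_Y": (4.8, 5.0), "VvRCHC_Z": (0.7, 3.1)},
    "T3": {"VvRCHC_X": (1.5, 7.5), "VvRCHC_Y": (9.9, 5.1), "VvRCHC_Z": (0.6, 2.9)},
}

def differential_genes(table: dict, threshold: float = 1.0) -> set:
    """Genes with |log2(WS/WW)| > threshold at one time point."""
    hits = set()
    for gene, (ws, ww) in table.items():
        if ws > 0 and ww > 0 and abs(math.log2(ws / ww)) > threshold:
            hits.add(gene)
    return hits

per_period = {t: differential_genes(tab) for t, tab in expr.items()}
# Genes differential in all periods (the intersection of the Venn diagram).
core = set.intersection(*per_period.values())
print(per_period)
print("differential in all periods:", core)
```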
Identification of E3 ubiquitin ligase activity of VyRCHC114

To clarify whether VyRCHC114 has E3 ubiquitin ligase activity, we conducted an in vitro ubiquitination assay, mixing purified MBP-VyRCHC114 fusion protein with ubiquitin, E1, and E2 and performing western blotting with the MBP antibody. Ubiquitin molecules were detected on the fusion protein recognized by the MBP antibody (Fig. 7a). The same method was used with the ubiquitin antibody: the VyRCHC114 protein was detected in the fusion protein recognized by the ubiquitin antibody, which indicated that it has E3 ligase activity. We know that RING-C3H2C3-type proteins can form a RING structure for ubiquitin regulation, but this process depends on the interaction between the eight conserved metal ligands. To further clarify whether and how the E3 ligase activity of VyRCHC114 depends on these conserved metal ligands, we selected four different amino acid sites for mutation (two key conserved and two non-conserved metal ligand sites), as shown in Fig. 7c. Four corresponding mutant proteins (C320S, C328S, H341A, N355A) were obtained, and their in vitro ubiquitination activity was tested by the same method. Immunoblotting with the MBP antibody and the ubiquitin antibody showed that the two mutant proteins C320S and H341A lost their E3 ubiquitin ligase activity owing to mutations at key sites, whereas the two mutant proteins C328S and N355A maintained theirs (Fig. 7b). The unprocessed original image is in Fig. S6. These results indicate that these conserved metal ligand sites are crucial for VyRCHC114 ligase activity.

Overexpression of VyRCHC114 enhances Arabidopsis drought tolerance

To clarify VyRCHC114's role in plant responses to drought, we selected transgenic Arabidopsis lines (OE #2, #5, #13) with high expression levels of the VyRCHC114 gene for subsequent experiments (Fig. 8b). After 15 days of drought imposed upon wild-type and transgenic plants, followed by normal watering for 6 days, phenotype observations revealed that plants overexpressing VyRCHC114 had significantly improved drought tolerance (Fig. 8a). Further, on average, more than 70 % of the plants overexpressing VyRCHC114 survived the drought stress, significantly higher than the 30 % survival rate of the EV-transformed group (Fig. 8c). To understand the relationship between plant growth and drought resistance, electrolyte leakage rates (Fig. 9a) and chlorophyll content (Fig. 9b) were both measured. These were similar between VyRCHC114-overexpressing and EV-transformed plants in the non-stress treatment, but after 8 days of drought stress, the electrolyte permeability of the former was significantly lower than that of the latter, while the chlorophyll content was significantly higher in overexpressing than in EV-transformed plants.

Fig. 6 Expression of the 8 screened candidate genes by qRT-PCR in plants under drought stress and control conditions. The x-axis represents the different days of the treatment and the y-axis the relative level of a gene's expression. Each treatment group had three biological replicates, whose averages are plotted with the standard deviation. The asterisks indicate the significance level (* P < 0.05, ** P < 0.01).

Additionally, the changes in photosynthesis under drought stress were further analyzed by measuring potential photosynthetic efficiency (Fig. 9c) and energy storage capacity (Fig. 9d). Neither differed significantly between EV-transformed and VyRCHC114-overexpressing plants under non-stress conditions; however, Fv/Fm was significantly higher in the latter than in the former at 4 days, and especially at 7 days, of drought stress. At 4 days, the energy storage capacity of VyRCHC114-overexpressing plants was not significantly different from that of EV-transformed plants, but at 7 days of drought stress, that of the former exceeded the latter. Hence, these results suggest that VyRCHC114 can enhance the drought resistance of plants by participating in the regulation of photosynthesis. Many studies have shown that antioxidant enzymes can influence plants' drought tolerance. Common antioxidant enzymes are ascorbate peroxidase (APX), superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT), so we examined their activities.
As Fig. 10 shows, under non-stress conditions the activities of these antioxidant enzymes were similar between the plants, whereas when drought-stressed for 4 and 7 days, the activities of APX (Fig. 10a), SOD (Fig. 10b), POD (Fig. 10c) and CAT (Fig. 10d) were significantly higher in plants overexpressing VyRCHC114 than in those EV-transformed. Taken together, these data indicate that VyRCHC114 may also improve drought tolerance by elevating antioxidant enzyme activity. AtCOR15a, AtERD15, AtP5CS1, and AtRD29A are known to be key genes in regulating plant responses to drought stress, so we quantified their expression under imposed drought. As expected, when non-stressed, there was no significant difference between plants overexpressing VyRCHC114 and those EV-transformed. By contrast, under drought stress, the expression of all four genes was significantly higher in VyRCHC114-overexpressing plants than in those EV-transformed (Fig. 11).

Discussion

The RING C3H2C3 gene family has been identified in many plant species [25][26][27][28]. Related studies have shown that RING genes are involved in a variety of biological processes, in growth and development and hormonal responses, as well as in plant responses to abiotic stresses [29]. However, for grapevine, the RING C3H2C3 genes had not yet been identified at the whole-genome level, with few reports available on their relevance for grapevine growth and developmental regulation or responses to abiotic stress. In our study, we analyzed the whole genome of grapevine for RING C3H2C3 gene family members. Using the presence of the eight conserved metal ligands as the criterion, a total of 143 non-redundant RING C3H2C3 genes were identified. Studies have shown that grapevine's genome size is about 0.5 times that of tomato and that it contains 0.75 times as many genes as tomato [30,31]. Relative to the known RING C3H2C3 genes in tomato, those of grapevine account for a ratio of 0.58, which lies between the ratios for genome length and gene number [26]. Many RING C3H2C3 E3 ubiquitin ligases belong to the ATL gene family [32]. Based on the Arabidopsis and tomato RING C3H2C3 genes, we divided grapevine's RING C3H2C3 genes into six categories (I~VI) (Fig. 2). Each group contains Arabidopsis or tomato genes in the same branch, showing that the grapevine genes have sequence similarity with Arabidopsis thaliana and tomato genes. Gene replication can arise from fragment replication, tandem replication, transposable events, and even whole genome replication, which not only provide the evolutionary potential for species to produce new functional traits but are also a main driving force for species differentiation [33,34].

[Figure caption fragment] … and EV-transformed plants were determined. Data are mean ± SD (standard deviation). The asterisks (*) and (**) indicate that the OE and EV-transformed groups were significantly different at P < 0.05 and P < 0.01 (Student's t-test).

In the identification of gene families in many species, gene replication events have proven instrumental in their expansion [35]. Studies have shown that tandem replication often occurs in widely and rapidly evolving gene families, a good example being the Nucleotide-Binding Site Leucine-Rich Repeat (NBS-LRR) resistance gene families [36]. Segmental replication is more common in slowly evolving gene families, like the MYB gene family [36]. There are 7 pairs of VvRCHC genes duplicated in tandem and 2 pairs duplicated in segments.
Tandem replication may be the cause of the VvRCHC expansion, as for the WRKY family genes in the autopolyploid Saccharum spontaneum [37]. The collinearity analysis of the VvRCHCs with Arabidopsis and tomato indicates that the grapevine RCHC genes are more closely related to those of tomato. In addition, 54 VvRCHCs were found to be homologous to genes in both Arabidopsis and tomato; these genes may have been preserved from the ancestors of the dicots. Cis-acting elements in gene promoter regions may be critical for gene regulation. Elements related to plant hormone regulation, growth and development, and stress were common upstream of different VvRCHCs. This situation is in fact rather common among RING genes of all species [25][26][27][28]. The ubiquitin-proteasome system has been implicated in the control of the ABA response at different points of the ABA pathway [38]. ABRE is ubiquitous in the promoters of the VvRCHCs (Fig. S4), and the ubiquitin ligase SDIR1 regulates stress-responsive abscisic acid signaling by interacting with the ABRE abscisic acid response element [39]. There are ABRE elements in the promoters of VvRCHC3, VvRCHC16 and VvRCHC74, which may therefore be induced by abscisic acid regulation. ABA, GA, ethylene, trauma, drought, heat stress, and pathogen response elements are present in the promoter regions of the OsRING genes of rice, for which pathogen infection, SA, ABA, JA, and ethephon (ET) treatments could induce target gene expression to different degrees [40]. A similar analysis of the RING gene ZmRHCP1 was recently done in maize [41]. Similarly, there are at least seven types of hormone regulatory binding elements in the promoters of VvRCHC39, VvRCHC65, VvRCHC126 and VvRCHC129, which may thus respond to a variety of hormone regulation. According to the analysis of the RNA-Seq dataset (Fig. 5a), more than 60 % of the VvRCHCs were significantly up-regulated or down-regulated under drought stress, indicating that those genes may play a key role in how grapevine responds to drought. Studies have revealed the molecular mechanisms of many RING genes involved in the drought stress response. For example, in Arabidopsis, XERICO, SDIR1, AtAIRP1, AtAIRP2, AtAIRP3 and AtAIRP4 have been found to play a key role in the drought response of plants. In addition, mutation of the E3 ubiquitin ligase gene atrzf1 increased the proline content of Arabidopsis and improved drought tolerance [42]. GpDSR7 encodes an E3 ubiquitin ligase which, when overexpressed in Arabidopsis, increased its tolerance to drought stress [43]. In our study, we focused on genes that were significantly up-regulated or down-regulated at four time periods during the drought treatment according to previous screening methods [44], as they are more likely to play a key role in grapevine's drought stress response (Fig. 5b and c). Comparing the RNA-Seq dataset with the RT-qPCR data (Fig. 6), it was found that VyRCHC114 was consistently down-regulated. Hence, we postulated that VyRCHC114 may possess E3 ubiquitin ligase activity and act as a negative regulatory factor in the response to drought stress. To verify this, we confirmed that VyRCHC114 has E3 ubiquitin ligase activity (Fig. 7) and overexpressed the VyRCHC114 gene in Arabidopsis (Fig. 8). The results showed that overexpression of VyRCHC114 conferred drought resistance on Arabidopsis, which was not in line with our expectations. We use the two reference models in Fig. S5 to explain these experimental results. Pattern 1: after VyRCHC114 degrades target protein A, protein B, which is functionally redundant with A, is strongly activated, which gives plants stronger drought resistance.
Genes with redundant functions often come from the same gene family; they generally have similar conserved domains and participate in various life activities together [45]. For example, the five MIR172 genes act redundantly in the regulation of Arabidopsis meristem size, stem elongation, and flowering [46], and AtUBP12 and AtUBP13 are functionally redundant in plant immunity, the circadian clock, and photoperiodic flowering regulation [47,48]. Pattern 2: after VyRCHC114 degrades target protein A, protein B, which competes with A, is not inhibited and continues to regulate downstream gene expression. This kind of competitive relationship is often accompanied by a complicated regulatory network. Studies have described a similar situation in which ERF4 and MYB52 regulate downstream gene expression in opposite manners by antagonizing each other's DNA-binding ability through a physical interaction [49]. Fig. S5 shows these interesting and complex networks. The results, which were not in line with expectations, have prompted more attention to the target protein of VyRCHC114, and the preliminary pattern diagram (Fig. S5) gives us confidence; this promises to be an interesting story. In addition, overexpression of VyRCHC114 caused changes in many related indexes, including antioxidant enzyme activity, photosynthetic rate, active oxygen metabolism, and drought resistance gene expression. These results also indirectly suggest that the targeted degradation substrate of this gene may be a key regulatory factor in the drought stress process, and that its degradation strongly activates other elements to resist drought stress, thus giving plants stronger drought resistance. In future work, we will identify the substrate protein of VyRCHC114, further study its regulatory pathway, sequence the transcriptome of Vitis yeshanensis under drought stress, construct the co-expression networks under drought stress, and compare the differences in drought resistance among other cultivated species. Drought stress greatly impacts the photosynthesis of plants by affecting their photosynthetic rates and carbon metabolic pathways [50]. A lowered rate of photosynthesis can lead to excessive accumulation of reactive oxygen species (ROS), leading to cytotoxicity, membrane lipid peroxidation, and even cell death, which can be countered by antioxidant enzymes as a form of plant defense [51,52]. The overexpression of a maize E3 ubiquitin ligase gene in transgenic tobacco can reportedly improve the drought resistance of tobacco [53]. Not only that, responses to other abiotic stresses may also be regulated through photosynthesis, thus enabling plants to adapt to stress conditions [54]. According to our results, VyRCHC114-overexpressing plants maintained a strong photosynthetic rate and energy storage capacity while under drought stress. The reason for this may be an increase in their chlorophyll content, pointing to VyRCHC114's possible involvement in the regulation of the chlorophyll biosynthesis pathway as an E3 ubiquitin ligase. Nonetheless, we also examined the expression of genes known to play a major role in plant drought stress responses [55][56][57]. Our results revealed that the expression levels of these genes were significantly higher in VyRCHC114-overexpressing than in EV-transformed Arabidopsis plants. Moreover, the antioxidant system may be involved in plant abiotic stress tolerance mediated by this E3 ubiquitin ligase.
Here, we provide physiological evidence that heterologous expression of VyRCHC114 enhances drought resistance by increasing the activity of antioxidant enzymes, which can scavenge and eliminate ROS to indirectly reduce membrane damage.

Conclusions

VyRCHC114 may act as an E3 ligase that mediates substrate degradation through the ubiquitin-proteasome mechanism. This interaction may cause the target protein to be labeled by ubiquitin signaling, which leads to proteasomal degradation. VyRCHC114 likely represents a new class of positive/negative regulatory factors of the drought signaling pathway; whether it acts positively or negatively depends on the regulatory characteristics of the target protein. We think the degraded protein is itself a regulator of drought signaling, so that its removal may activate drought signaling. Therefore, VyRCHC114 may improve the water retention ability and antioxidant defense of plants by regulating their chlorophyll content and antioxidant system, thus participating in the drought stress response. So far, however, the target protein of the plant VyRCHC114 gene has not been determined, nor is the mechanism of the augmented SOD, POD, APX and CAT activities clearly understood. In future work, we will focus on the identification of VyRCHC114 target proteins under drought stress and on the activation mechanisms of the antioxidant system in VyRCHC114-transgenic plants.

Methods

Plant materials

The grapevine species Vitis yeshanensis was sampled from the field, in the grape germplasm resource garden of Northwest A&F University. One-year-old plants were selected for treatment, using the same treatment method as previously described [58]. Root samples were taken at 0, 2, 4, 6, 8 and 10 days for the subsequent experiments. Transgenic and wild-type (WT) Arabidopsis thaliana ecotype Columbia (Col-0) plants were grown in a vermiculite:perlite (1:1, v/v) mix in plastic pots in a growth chamber. Arabidopsis plants were grown in a soil mix of peat moss, perlite, and vermiculite (3:1:1, v/v/v) under a 12-h/12-h day/night cycle at 25 °C with 60 % relative humidity. For the drought stress treatment, plants were transformed with an empty vector (EV) or to overexpress VyRCHC114 (OE#2, OE#5, or OE#13 lines) [59], all of which were grown on individual MS medium plates for 7 days before transplantation into soil; five strains in total were used, the controls and the three transgenic lines. This experiment mainly followed previous research methods, albeit slightly modified [60]. After 3 weeks, these plants received a 12-day drought stress treatment (no water provided), after which they were re-watered and their survival recorded 6 days later. All experiments were repeated three times.

Identification of RING-type C3H2C3 genes in the grapevine genome

To identify the C3H2C3 type of RING genes, the most recent grapevine genome file in the Ensembl Plants Database (http://plants.ensembl.org/index.html) was downloaded and used. The grapevine RING C3H2C3 candidates were identified based on the HMM profiles PF13639 and PF12678 with an e-value cutoff of 0.01, as sketched below. The screened proteins were then submitted to Pfam (http://pfam.xfam.org/search/) and SMART (http://smart.embl-heidelberg.de/), again with an e-value cutoff of 0.01. Based on the protein domain identification results of the Pfam and SMART databases, the conserved RCHC domain sequences were extracted from the VvRCHC protein sequences, and CLUSTALX 2.0 was used to perform a multiple-sequence alignment to check for the conserved Cys-Cys-Cys-His-His-Cys-Cys-Cys pattern. Finally, 143 proteins met these conditions.
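A minimal sketch of the HMM screening step: it assumes HMMER's hmmsearch is installed and that the Pfam profiles (PF13639, PF12678) and the grapevine proteome FASTA have been downloaded locally (file names below are placeholders). It runs hmmsearch with the same 0.01 e-value cutoff and collects the hit identifiers from the tabular output.

```python
import subprocess

# Placeholder file names; substitute the real local paths.
PROFILES = ["PF13639.hmm", "PF12678.hmm"]
PROTEOME = "vitis_vinifera_proteins.fasta"

def hmm_hits(profile: str, proteome: str, evalue: float = 0.01) -> set:
    """Run hmmsearch and return the set of target sequence names that pass."""
    tblout = profile + ".tbl"
    subprocess.run(
        ["hmmsearch", "-E", str(evalue), "--tblout", tblout, profile, proteome],
        check=True, capture_output=True,
    )
    hits = set()
    with open(tblout) as fh:
        for line in fh:
            if not line.startswith("#"):
                hits.add(line.split()[0])  # first column: target sequence name
    return hits

candidates = set()
for prof in PROFILES:
    candidates |= hmm_hits(prof, PROTEOME)
print(f"{len(candidates)} candidate RING C3H2C3 proteins")
```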
The physicochemical properties of each RING-type C3H2C3 protein were predicted using the ProtParam online tool (https://web.expasy.org/protparam/). The 143 VvRCHCs were named according to their positional information on the chromosomes.

Bioinformatics analysis of the VvRCHC family

CLUSTALX 2.0 software was used to perform a multiple-sequence alignment of the 143 grapevine genes and the 18 tomato and 19 Arabidopsis RCHC protein sequences; it was also used to manually remove any untrusted gaps at both sequence ends. A phylogenetic tree was generated in MEGA 7.0 using the ML (maximum likelihood) method and bootstrapping with n = 1000 replicates, with all other settings at their default values; the online EVOLVIEW tool (https://www.evolgenius.info/evolview/#login/) was used for the tree's visualization. The online program Gene Structure Display Server 2.0 (http://gsds.cbi.pku.edu.cn/) was used to identify the gene structure of the VvRCHCs. Using the MEME online program (http://MEME.nbcr.net/meme/introduction.html), the VvRCHC protein sequences were analyzed under these parameters: an optimal motif width of 6~35 and a maximum number of motifs of 12. According to the annotated positions in the grapevine genome data, the 143 grapevine VvRCHCs were located on 20 chromosomes. By reference to previous studies, BLASTN was used to compare the CDS sequences of the VvRCHCs in grapevine and tomato (e-value = 1 × 10⁻¹⁰, homology > 75 %). The tandem repeat gene pairs and segmental repeat gene pairs of the VvRCHCs were also identified [61,62]. Further, the Ka/Ks ratio between repetitive gene pairs can be used to infer the selection pressure in the process of genome evolution. Next, the MCScanX program (e-value: 1 × 10⁻¹⁰, num alignments: 5) was used to detect the collinear regions between the VvRCHCs in grapevine and tomato/Arabidopsis; collinear gene pairs of VvRCHCs were marked with red and green lines. The cis-elements were identified from the upstream 2-kb promoter sequences of the VvRCHCs after submitting them to PlantCARE (http://bioinformatics.psb.ugent.be/webtools/plantcare/html) [63]; to obtain their image display, the resulting XML file was uploaded to TBtools [64].

Expression analysis of VvRCHCs in grapevine under drought stress

To analyze the grapevine RCHC genes' expression levels under drought stress, we obtained from the NCBI database (accession number: SRA110531) an RNA-Seq dataset of two genotypes with different drought resistance (101.14 and M4), whose roots were compared under two different treatments, WS (Water Stress) and WW (Well-Watered), over different periods (T1-T4: 2, 4, 7 and 10 days) [24]. Based on the expression values of the RING C3H2C3 genes in the roots of the two genotypes, we calculated the log2(WS/WW) values (fold change) in each time period (Table S4). The R package 'pheatmap' was used to produce a heatmap of these data.

RNA extraction and quantitative real-time PCR (qRT-PCR)

The qRT-PCR primers were designed using Primer Premier software (version 5.0). RNA from Arabidopsis and grapevine (Vitis yeshanensis) leaves was extracted using the Spectrum Plant Total RNA Kit (Sigma-Aldrich, Beijing, China), after which reverse transcription of RNA into cDNA was done using the Prime Script RT Reagent Kit (Takara, Dalian, China). The qRT-PCR was performed in an IQ 5 real-time PCR detection system (Bio-Rad Laboratories, Hercules, CA, USA) with SYBR Premium EX Taq II (Takara, Dalian, China). The reaction volume was 25 µl. Relative expression levels, normalized to β-TUB4 and ubiquitin1, were calculated using the 2^-ΔΔCt method [65], as illustrated below.
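For reference, the 2^-ΔΔCt calculation reduces to: ΔCt = Ct(target) − Ct(reference gene), ΔΔCt = ΔCt(treated) − ΔCt(control), relative expression = 2^-ΔΔCt. A minimal worked sketch with hypothetical Ct values:

```python
def ddct(ct_target_treated: float, ct_ref_treated: float,
         ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method (Livak & Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: target gene vs. a reference gene (e.g. beta-TUB4),
# in drought-treated vs. control samples.
rel = ddct(ct_target_treated=24.1, ct_ref_treated=18.0,
           ct_target_control=22.0, ct_ref_control=18.2)
print(f"relative expression: {rel:.2f}")  # < 1 means down-regulation
```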
Each reaction was prepared in triplicate and repeated three times. The primer sequence information is given in Table S1.

E3 ubiquitin ligase activity assay

The open reading frame (ORF) of VyRCHC114 and the different site mutants C320S, H341A, C328S, and N355A were separately cloned into the SalI/KpnI site of the pMAL-c5X vector (New England Biolabs UK Ltd, Hitchin, UK). According to the manufacturer's instructions, the pMAL protein fusion and purification system (New England Biolabs) was used to purify the fusion proteins. Ubiquitination activity was then measured according to the method described previously [66], albeit with the following modification: 250 ng of purified E3 (MBP-VyRCHC114, C320S, H341A, C328S, or N355A) was used in the ubiquitination buffer (50 mM Tris-HCl, pH 7.5), while the other reagents and steps were the same. The primer sequence information is given in Table S1.

Physiological analysis of the drought stress response of transgenic Arabidopsis

To determine the water loss rate, 10 leaves were detached from 3-week-old transgenic and WT plants and immediately weighed. The samples were then placed on dry filter paper at a relative humidity of 40-45 % at room temperature and weighed over a time course. Leaves were sampled after dehydration to detect cell death, electrolyte leakage, malondialdehyde content, and antioxidant enzyme activity. The leaves collected before dehydration were used as a negative control. For chlorophyll content measurements, approximately 0.05 g of fresh leaf material was placed in 5 ml of 96 % ethanol and incubated at 4 °C in the dark overnight. The absorbance of the extracted pigments was measured at 665 and 649 nm using a spectrophotometer (Hitachi Limited, Tokyo, Japan) and the chlorophyll content was calculated as previously described [58]. Relative electrolyte leakage was measured as previously described [67], as was the MDA content [66]. In addition, the superoxide dismutase (SOD), peroxidase (POD), catalase (CAT), and ascorbate peroxidase (APX) enzymes were extracted from 0.5 g of leaves of abiotic stress-treated plants as well as of control plants, and their activities were measured as described by [68].
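Relative electrolyte leakage in such assays is commonly computed from two conductivity readings of the bathing solution, one before and one after boiling the tissue to release all electrolytes. The sketch below follows that common convention and is not necessarily the exact protocol of [67]:

```python
def relative_electrolyte_leakage(c1: float, c2: float) -> float:
    """Relative electrolyte leakage (%) from conductivity before (c1)
    and after (c2) boiling the leaf tissue."""
    if c2 <= 0:
        raise ValueError("post-boiling conductivity must be positive")
    return 100.0 * c1 / c2

# Hypothetical conductivity readings (microsiemens/cm).
print(relative_electrolyte_leakage(c1=120.0, c2=480.0))  # -> 25.0 %
```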
Statistical analysis

For all the above experiments, SPSS software (version 21.0) was employed to analyze the statistically significant differences in gene expression levels by ANOVA with Duncan's multiple range test. All experiments were repeated three times as independent analyses.

Availability of data and materials

All data generated and analyzed during this study are included in this published article. To identify the C3H2C3 type of RING genes, the most recent grapevine genome file in the Ensembl Plants Database (http://plants.ensembl.org/index.html) was downloaded and used. The expression data of the C3H2C3-type RING genes in grapevine used in this study can be accessed via the NCBI SRA database under accession number SRA110531.

Figure S1 Schematic diagram of the C3H2C3 conserved protein sequence alignment of the VvRCHCs.
Figure S2 The original tree of Fig. 2.
Figure S3 Number of introns in the VvRCHCs.
Figure S4 Analysis of cis-acting elements in the VvRCHCs.
Figure S5 Two model diagrams (VyRCHC114 is involved in drought resistance).
Figure S6 VyRCHC114 in vitro ubiquitination western blot (uncropped).
Table S1 The sequences of the primers used in these experiments.
Table S2 The distances between conserved metal ligands in the C3H2C3 domain of the 143 VvRCHCs.
Table S3 Functions of the cis-acting elements found in the promoter region of each of the VvRCHCs.
New Species of Rotundomys (Cricetinae) from the Late Miocene of Spain and Its Bearing on the Phylogeny of Cricetulodon and Rotundomys

The material of Rotundomys (Rodentia, Cricetinae) from the Late Miocene fossiliferous complex of Cerro de los Batallones (Madrid, Spain) is described and compared with all species currently placed in the genera Rotundomys and Cricetulodon. Both the morphology and the size variation encompassed by the collection of specimens from Batallones suggest that they belong to a single taxon different from the other known species of these genera. A new species, Rotundomys intimus sp. nov., is therefore named for it. A cladistic analysis, the first ever published concerning these taxa, has been conducted to clear up the phylogenetic position of the new species. Our results suggest that Rotundomys intimus sp. nov. inserts between R. mundi and R. sabatieri as a relatively primitive taxon inside the clade Rotundomys. The new taxon is more derived than R. mundi in having a transversal connection between the metalophulid and the anterolophulid on some m1, but more primitive than R. sabatieri and the most evolved species of Rotundomys (R. montisrotundi + R. bressanus) in its less developed lophodonty, showing distinct cusps, shallower valleys, and the presence of a subdivided anteroloph on the M1. The species of Cricetulodon do not form a monophyletic group. As a member of Rotundomys, Rotundomys intimus sp. nov. is more derived than all of these taxa in its greater lophodonty and the complete loss of the anterior protolophule, mesolophs, and mesolophids.

Introduction

The Cerro de los Batallones fossiliferous complex (CBFC) comprises a set of nine sites that have yielded vertebrate remains of Late Miocene age. It is situated in Torrejón de Velasco, south of the city of Madrid (Spain) (Figure 1). The CBFC consists of cavities filled in with clays that are interpreted as having acted as traps for large vertebrates [1]. Besides fishes, amphibians, reptiles, and birds, the sites of Cerro de los Batallones deserve the attention they have received because of both the abundance and the pristine conservation of the fossil carnivores and herbivores they yield [2]. However, the micromammals (insectivores, lagomorphs, and rodents) are also represented by numerous and well-preserved remains, although only the cricetodontine Hispanomys has been studied in detail so far [3]. The aim of this article is to provide a description and a systematic assessment of the specimens from Batallones assigned to the genus Rotundomys and to conduct the first cladistic analysis involving not only all the species currently recognized as pertaining to Rotundomys but also those of the closely related genus Cricetulodon. Rotundomys is only known with certainty from the Vallesian (late MN9-MN10) of France, Spain, and Portugal. It is characterized by the development of lophodonty and moderate hypsodonty in its cheek teeth. The morphological similarities in the molar crown pattern between the most advanced species of Rotundomys and early arvicolids have led to the idea that Rotundomys could have been the taxon from which the arvicolids were eventually derived [4]. However, a number of other cricetids show arvicoline features, so that the exact relationships of Rotundomys with respect to arvicolines remain to be determined [5].

Material and Methods

The material studied herein was collected thanks to numerous summer field and washing campaigns, in which the authors took part.
The excavations in the CBFC were carried out according to the authorization issued by the Dirección General de Patrimonio Histórico de la Comunidad de Madrid. All necessary permits were obtained for the described study, which complied with all relevant regulations. We have received permission from the Université Claude Bernard-Lyon 1 (Villeurbanne, France) for the loan of the material studied. The systematic revision presented below is based on the examination of specimens and casts of the MNCN and FSL collections and data from the literature. We examined dental material of the following taxa:

- Rotundomys sp. nov. from Batallones (see below);
- Rotundomys montisrotundi from Montredon (Hérault, France) and Rotundomys cf. montisrotundi from Douvre (Ain, France) (unnumbered specimens);
- Rotundomys bressanus from Soblay (Ain, France), Ambérieu 2c, and Ambérieu 1 (Ain, France) (unnumbered specimens);
- casts of Rotundomys cf. mundi from Terrasa (Barcelona, Spain) (unnumbered specimens) and R. freiriensis from Freiria do Rio Maior (Santarém, Portugal) (unnumbered specimens).

The new specimens have been described and compared with the equivalent teeth of all the species of Rotundomys known to date and some Cricetulodon. First, second, and third lower molars are designated as m1, m2, and m3, respectively, and first, second, and third upper molars as M1, M2, and M3. The terminology used in the tooth descriptions follows the rodent dental terminology of Freudenthal et al. [6] with some adjustments (see Figure 2). The occlusal measurements (greatest length and greatest width; Table 1) of the teeth of Rotundomys from Batallones have been obtained with a Nikon digital counter CM-6S measuring device. The calculations of the statistical descriptives and the Analyses of Variance (ANOVA) have been carried out with standard software (SPSS Statistics version 18.0, SPSS Inc., Chicago, IL, USA). Tests of normality and homogeneity of variance were performed with this software before the Analyses of Variance. The relative reduction in the length of the third molars was calculated using the (mean Length of M1)/(mean Length of M3) and (mean Length of m1)/(mean Length of m3) ratios, which is a classic method for evaluating the degree of reduction of the third molars [3]. For the Rotundomys samples whose variance was known, the standard error of the ratio (SER) was calculated using the Delta approximation (sensu the Ratio technique in SPSS) [3]. The formula used is:

SER = R × sqrt[ Var(L1)/mean(L1)^2 + Var(L3)/mean(L3)^2 − 2 × r × sd(L1) × sd(L3)/(mean(L1) × mean(L3)) ]

(where R = mean(L1)/mean(L3) is the ratio of the mean lengths of the first and third molars, Var is the variance, r the coefficient of correlation between the length of the first and third molars, and sd is the standard deviation). The coefficient of correlation for all the Rotundomys and Cricetulodon samples included in Table 2 is 0.503 for the upper molars and 0.527 for the lower ones.
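A minimal numerical sketch of the ratio and its standard error under the first-order Delta approximation written out above (the sample statistics are hypothetical, not values from Table 1):

```python
from math import sqrt

def ser_delta(mean1: float, var1: float, mean3: float, var3: float, r: float) -> float:
    """Standard error of the ratio mean1/mean3 by the first-order Delta approximation."""
    ratio = mean1 / mean3
    sd1, sd3 = sqrt(var1), sqrt(var3)
    rel_var = (var1 / mean1**2 + var3 / mean3**2
               - 2 * r * sd1 * sd3 / (mean1 * mean3))
    return ratio * sqrt(rel_var)

# Hypothetical sample statistics for M1 and M3 lengths (in mm).
mean_m1, var_m1 = 2.10, 0.010
mean_m3, var_m3 = 1.40, 0.008
r = 0.503  # correlation used for the upper molars

print("L(M1)/L(M3) =", round(mean_m1 / mean_m3, 3))
print("SER =", round(ser_delta(mean_m1, var_m1, mean_m3, var_m3, r), 4))
```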
The cladistic analysis carried out in this work treated as ingroup all known species of the genera Cricetulodon and Rotundomys. The taxonomic units are therefore: Cricetulodon hartenbergeri, C. sabadellensis, C. bugesiensis, C. meini, C. lucentensis, Rotundomys montisrotundi, R. bressanus, R. mundi, R. sabatieri, R. freiriensis, and Rotundomys sp. nov. from Batallones. Democricetodon franconicus has been selected as the outgroup; it is a well-known species of Democricetodon, a genus from which Cricetulodon is supposed to have been derived (see e.g., [7]). A total of 42 phylogenetically informative characters (mainly of dental morphology) have been coded (Text S1); 31 characters are binary, whereas 11 are multistate. Owing to the lack of a priori information, all characters were unordered and equally weighted (Fitch optimality criterion). As some species are known so far from only a few specimens, the influence of intraspecific variation on the scoring of the characters could not be assessed. The data matrix (Text S2) was built using Mesquite version 2.6 (Maddison WP & Maddison DR, Mesquite Project, Vancouver, Canada) and processed with TNT [8] using the "implicit enumeration" option. Branch support was estimated through two complementary indices: Bremer support [9] and relative Bremer support [10].

Nomenclatural acts

The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new name contained herein is available under that Code from the electronic edition of this article. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix "http://zoobank.org/". The LSID for this publication is: urn:lsid:zoobank.org:pub:308BFD06-6024-4BF5-9F0C-F9C6415F8201. The electronic edition of this work was published in a journal with an ISSN, and has been archived and is available from the following digital repositories: PubMed Central, LOCKSS.

Results

Order RODENTIA Bowdich, 1821 [11]
Family CRICETIDAE
Rotundomys intimus sp. nov.

Etymology: From the Latin intimus, the most interior, in reference to the fact that the locus typicus is situated in the innermost position with respect to the other Spanish sites, which are much closer to the shore.

Diagnosis: Cricetinae with lophodont cheek teeth; protoconid connected to the hypolophulid in a regularly curved crest on the first lower molars; wide valleys usually closed by thin, low cingula; weak anterior connections; anterior protolophule and mesolophs/ids absent and anterior metalophule usually absent; strongly forwardly-directed anterior metalophulid and strongly backwardly-directed posterior metalophule. Well-developed metacone on the M3.

Differential diagnosis: Differing from the species of Cricetulodon in being more lophodont, in lacking the anterior protolophule and usually the anterior metalophule and the mesolophs/ids, in having the protoconid connected to the hypolophulid in a regularly curved crest on the m1, and in having the M3 much less reduced. Differing from Rotundomys bressanus in being smaller, less lophodont, with distinct cusps/ids and better developed cingula surrounding the valleys, and in having a low, weak, and interrupted metalophulid and anterolophulid on the m1. Differing from R. mundi in being larger, in lacking the anterior metalophule on the M2, and in having the M3 less reduced and without a strong connection between the paracone and the labial anteroloph. Differing from R. montisrotundi and R. sabatieri in being smaller, less lophodont, and in having more distinct cusps/ids and shallower valleys. Differing from R. freiriensis in having the anterolophulid, a strongly forwardly-directed metalophulid on the m1, a lingual anterolophid on the m2, and the M3 much less reduced.

Description

Material from the type locality (Batallones 5). m1: The teeth are elongated, being widest at the level of the hypoconid. The anterior part of the teeth is fairly broad.
The anterolophid, which is as high as the main cusps, is usually divided into two or three cuspids, but it may consist of a single ridge. The labial anterolophid descends and nearly joins the protoconid, enclosing a wide valley (the protosinusid). The poorly developed metalophulid points strongly forwards, being almost longitudinal. It does not usually connect to the anterolophid and, when it does, this connection is thin. The mesolophid is absent. The protoconid joins the hypolophulid in a regularly curved crest. The protoconid and the hypoconid have about the same size. The mesosinusid is large, curved, and partially closed in most specimens by a thin and low lingual cingulum ridge. The posterolophid bulges as a posteroconid; from it, a lingual crest descends but does not usually reach the entoconid. Thus, the posterosinusid is not completely closed. The sinusid is nearly transverse and is closed by a low labial cingulum ridge. Three out of 11 specimens (BAT5'10-07, BAT5'2006-I16d-02 and BAT5'11-01; Figure 3A-C) have the metaconid isolated.

m2: The maximal width of the tooth is at the level of the hypoconid. The anteroconid is distinct and centrally located; from it, a strong labial anterolophid runs down, reaches the protoconid, and closes the protosinusid. The lingual anterolophid is absent in the entire sample. The metalophulid runs obliquely forwards and the mesolophid is absent. As in the m1, the protoconid and the entoconid form a continuous arch. The labial cusps have nearly the same size. The mesosinusid is large and curved; it is closed by a strong and low lingual cingulum ridge. The posterolophid is bulged into a posteroconid; from it runs a lingual crest that joins the entoconid and closes the posterolingual sinusid. The nearly transverse sinusid is closed by a low and strong labial cingulum ridge. These teeth have two roots.

m3: Except for BAT5'10-05, BAT5'10-07, BAT5'10-15, BAT5'10-06 (Figures 3A, 4C-E) and possibly a worn specimen (BAT5'10-12; Figure 4F), which have the posterolophid connected to the entoconid, closing the posterolingual sinusid, the remaining m3 from Batallones 5 show a short posterolophid that does not join the entoconid. All m3 are somewhat posteriorly reduced and, therefore, their hypoconid is reduced as well. The anteroconid is large and slightly lingually located. They show a low and strong labial anterolophid that connects to the anterior wall of the protoconid, closing the protosinusid. The lingual anterolophid is lacking. Owing to the very anterior position of the metalophulid, the lingual anterior cingulum is absent. The posterior arm of the protoconid is very long and connects to the hypolophulid, but there is no longer the regularly curved crest that characterizes the m1 and m2 of this taxon. The mesosinusid is large and is closed by a thin, low lingual cingulum ridge. The sinusid is closed by a strong labial cingulum ridge, with a cuspule on the posterior wall of the protoconid in some specimens (BAT5'06-I15-5; Figure 4B). Some specimens (BAT5'10-05; Figure 4E) show a short labial posterolophid that closes the small labial posterosinusid. These teeth are two-rooted.

M1: The prelobe (all structures anterior to the protocone and paracone) is long and the posterior side of the protocone is located at about the midpoint of the tooth. The anterolophule connects the protocone lingually with the anteroloph, which is usually divided into two anterocones.
Two out of four specimens (BAT5'11-02 and BAT5'10-14; Figure 5A-B) show a labial spur on it that connects to the anteroloph. Another specimen (BAT5'06-I15-28; Figure 5C) shows at this level a slight inflation that may correspond to this spur, and BAT5'06-01 (Figure 5D) lacks all trace of it. None of the teeth have a true mesoloph, but they have the anterior arm of the hypocone somewhat inflated. All the teeth but BAT5'11-02 (Figure 5A), which has a low and thin anterior metalophule, lack this structure. The anterior protolophule is absent in all specimens. The posterior protolophule and metalophule are posterolabially directed. The metacone is located on the posterolabial corner of the tooth and is connected to the posteroloph through the posterior metalophule. Two specimens (BAT5'06-01 and BAT5'10-14; Figure 5B, D) show a thin labial ridge emerging from the end of the posteroloph, enclosing a small labial posterosinus. The sinus is transverse. Thin and low labial and lingual cingula enclose the valleys. These teeth are three-rooted (the lingual root is the largest).

M2: The M2 from Batallones 5 are widest at the level of the paracone. They have a large anterocone, slightly lingually located. The labial and lingual anterolophs are low but well developed. The lingual anteroloph, placed much lower than the labial one, joins the protocone and closes the protosinus. The labial anteroloph does not usually reach the paracone. A strong labial cingulum ridge closes the mesosinus. Except for specimen BAT5'06-I15-16 (Figure 5E), in which a very thin and short mesoloph is noticeable, a true mesoloph is absent in the entire sample. However, the hypocone, which forms a wider V than the protocone, has its anterior arm slightly inflated at the level of the mesoloph. The metacone is located on the posterolabial edge of the teeth. These teeth lack the anterior protolophule and metalophule. The posterior protolophule is slightly oblique, whereas the posterior metalophule points strongly backwards, joining the posteroloph. In some specimens (Figure 5D-E), the end of the posteroloph extends as a thin labial ridge that runs posterolabially and reaches the posterior wall of the metacone, enclosing a very small posterosinus. In the remaining specimens, the posterosinus is lacking. All specimens have a transverse sinus, which is closed by a low and distinct lingual cingulum. The labial valleys are closed by thin but distinct low cingula. These teeth have four roots.

M3: The posterior portion is somewhat reduced, so the hypocone, even though it is well developed, is smaller than the protocone. The anterocone is large and located slightly lingually. The labial anteroloph is higher and better developed than the lingual one, which is very low but distinctly noticeable. They join the protocone and paracone, respectively, closing the anterior valleys. All specimens show the anterior metalophule, which is generally long and reaches the metacone, but it can also be of medium length and free (as in BAT5'10-01 and BAT5'06-I15-16; Figure 5E-F). The posterior metalophule is fused with the posteroloph. All M3 from Batallones 5 have the posteroloph connected to the posterior wall of the large metacone. The lingual cingulum is thin and low and closes the transverse sinus. The labial one is much less developed than on the M1 and M2 of this taxon. These teeth are three-rooted (a single lingual and a double labial root, or three separate roots).

Material from the other localities.
Batallones 3: The material from this locality is composed of 5 hemimandibles, one of them without the m1, and a cranium. The morphology of the material from Batallones 3 is similar to that described for the population of Batallones 5. Nevertheless, there are several morphometrical differences. Except for specimen BAT3-12, the m1 are longer and narrower than those of Batallones 5. The scarcity of the material from Batallones 3 does not allow for precise statistical testing of the metrical differences between these assemblages for most dental elements. The tests carried out on samples larger than 4 specimens reveal no significant differences between Batallones 3 and 5, except for the greater length of the m1 in Batallones 3 (Student's t = 3.77, two-tailed significance = 0.002). Table 2 shows that the ratio between the M1 and M3 lengths is higher in Batallones 3 than in Batallones 5, implying a possible trend towards relatively smaller M3 that is not observed in theoretically more advanced forms of Rotundomys. The morphotypes of the m1 observed in Batallones 3 correspond to some found at Batallones 5. For instance, BAT3'07-234 (Figure 6A) has the metaconid connected to the lingual anterolophid through a weak metalophulid that points strongly forwards, and the protoconid connected to the labial anterolophid through a long anterolophulid. BAT3'08-1 (Figure 6B) has the metaconid isolated and the protoconid joined to the labial or central part of the anterolophid through a long, longitudinal anterolophulid. Besides, BAT3-09 (Figure 6C) and BAT3'07-234 (Figure 6A) have the metaconid connected to the protoconid by a thin transverse ridge; the former has the metaconid isolated from the anterolophid. The m2 and m3 from Batallones 3 do not differ from those of Batallones 5. The particular morphology shown by specimens BAT3-12 (Figure 6D) and BAT3-2007-234 (Figure 6A), which have a small labial posterolophid, is also found in some specimens from Batallones 5 (BAT5'10-05; Figure 4E). With regard to the upper molars, their morphology is also similar to that of the specimens from Batallones 5 that lack the spur of the anterolophule. As the teeth of the single specimen (Figures 6E, 7A-B) we have from this locality are worn, it is not possible to discern whether they had an additional ridge arising from the end of the posteroloph.

Batallones 10 locality: From this locality only two maxillary fragments with M1-M3 belonging to the same individual have been recorded (Figure 7C-F). The morphology of these teeth is similar to that found in the population of Batallones 5. In particular, the M1 match well those from Batallones 5 that lack the spur of the anterolophule.

Batallones 1 locality: This locality has yielded three isolated teeth: 1 M2, 1 M3, and 1 m3 (Figure 7G-I). The M2 and M3 are similar in morphology to those of the population of Batallones 5 and fall within its size range. The morphology of the m3 is also similar, but it is slightly larger than those of the type population.

Comparison with Cricetulodon hartenbergeri. The cheek teeth of this species are smaller than the equivalent teeth of Rotundomys intimus sp. nov. from Batallones. In addition, this species is characterized by the presence of a mesoloph and an anterior protolophule on the upper molars. Moreover, the M3 of Cricetulodon hartenbergeri are much more reduced than those of R. intimus sp. nov. and their morphology is very different. For instance, the metacone of the former taxon is very small and may even disappear as a distinct cusp by fusion with the posteroloph. In contrast, the upper molars of R.
intimus sp. nov. lack the mesoloph and the anterior protolophule, and their M3 are less reduced and characterized by a well-developed metacone. With regard to the lower molars, most m1 of C. hartenbergeri have a mesolophid, which is absent in R. intimus sp. nov. In addition, most m1 of C. hartenbergeri have the anterolophulid connected to the lingual cusp of the anterolophid, whereas only 20 % of the m1 of R. intimus sp. nov. show this kind of connection. Finally, in the m1 and m2 of C. hartenbergeri, the protoconid does not join the hypolophulid in a regularly curved crest, as is the case in R. intimus sp. nov.

Comparison with Cricetulodon sabadellensis. Most of the M1 and M2 of Cricetulodon sabadellensis have a short mesoloph and a labial spur of the anterolophule directed towards the paracone (anterior protolophule). These structures are lacking in almost all the M1 and M2 of Rotundomys intimus sp. nov. The M3 of C. sabadellensis also have a double protolophule and a short anterior metalophule. In contrast, the M3 of R. intimus sp. nov. lack the anterior protolophule and have a longer anterior metalophule. The morphology of the m1 of C. sabadellensis is very different from that of R. intimus sp. nov.: the latter species has the metalophulid nearly longitudinal, whereas it is transverse in C. sabadellensis.

Comparison with Cricetulodon bugesiensis. About half the M1 of Cricetulodon bugesiensis have a double protolophule and an anterior metalophule, and almost all of them have the mesoloph, usually of medium length or long. In contrast, the M1 of Rotundomys intimus sp. nov. lack a true mesoloph and the anterior protolophule and metalophule. All the M2 of C. bugesiensis have a double protolophule, most of them a true mesoloph, and about half the specimens show an anterior metalophule that can be formed by the mesoloph or be independent from it. In contrast, all the M2 of R. intimus sp. nov. lack the anterior protolophule and metalophule, and only one specimen shows a very short mesoloph. The M3 of C. bugesiensis also have a double protolophule and a number of them have a mesoloph, which is absent in R. intimus sp. nov. With regard to the lower molars, over half of the m1 and some m2 and m3 have a mesolophid, which is absent on all lower molars of R. intimus sp. nov.

Comparison with Cricetulodon meini (Agustí, 1986) [31]. This species was originally created by Agustí [31] as belonging to the genus Kowalskia. Later, this taxon was reallocated to the genus Cricetulodon [7] on the basis of the lingual anterolophulid on the m1, a reduced M3, and reduced mesolophs and mesolophids. The holotype (FCA-237), an isolated right M1 from the MN12 locality of Casa del Acero (Murcia, Spain), is housed in the IPS. The upper molars of Cricetulodon meini have a double protolophule and can bear a mesoloph. In contrast, those of Rotundomys intimus sp. nov. lack the anterior protolophule and the mesoloph. Moreover, the M3 of C. meini are much more reduced than those of R. intimus sp. nov. and lack the lingual anteroloph, which is well developed in the latter species. The m1 of C. meini have a metalophulid that does not point strongly forwards, whereas that of R. intimus sp. nov. is nearly longitudinal. In addition, the lower molars of the former taxon usually have the mesolophid, which is lacking on those of R. intimus sp. nov.

Comparison with Cricetulodon lucentensis (Freudenthal, Lacomba et Martín-Suarez, 1991) [32]. This species was originally attributed to the genus Neocricetodon by Freudenthal et al. [32]. Subsequently, Freudenthal et al.
[7] transferred it to the genus Cricetulodon on the basis of the clearly lingual anterolophulid of some m1 and the strong reduction of the third molars. The holotype of this taxon (RGM 404 677) is a right m1 from the MN12 locality of Crevillente 17 (Alicante, Spain) [32] that is housed in the RGM. Additional material of this species has been recovered from Crevillente 5 and Crevillente 8 (Alicante, Spain) [32]. Some m1 of Cricetulodon lucentensis have a long mesolophid, which is absent on the equivalent teeth of Rotundomys intimus sp. nov. In addition, the m1 of the former species have the metalophulid directed much less forwards than what can be observed in the latter. The m3 of C. lucentensis are much more reduced than those belonging to R. intimus sp. nov. In addition, the m3 of the former taxon have a lingual anterolophid that is absent in the latter species. Most of the M1 and M2 of C. lucentensis have a double protolophule, a double or anterior metalophule, and a mesoloph. In contrast, the M1 and M2 of R. intimus sp. nov. lack the anterior protolophule, the anterior metalophule, and the mesoloph. In addition, the M2 of C. lucentensis have a well-developed labial anteroloph that closes a large anterosinus. The M3 of C. lucentensis are morphologically very different from, and much more reduced than, those of R. intimus sp. nov.

Comparison with Rotundomys bressanus Mein, 1975 [15]. This species was erected on the basis of 43 isolated cheek teeth from the late MN10 locality of Soblay (Ain, France) [15]. After the study of additional material from Montredon (Hérault, France), Aguilar [33] considered this taxon a synonym of Rotundomys montisrotundi. However, Freudenthal et al. [7] argued that there are enough characters to distinguish the two taxa (for instance, the overall size, the wear surface of the protoconid and protocone, the degree of reduction of both, the labial anterolophid on the m1, and the posterolophid-entoconid connection on the m3) and, therefore, they considered R. bressanus a valid species, an opinion with which we concur. The holotype (FSL 65443) of this species is an isolated left M1 housed in the FSL. The cheek teeth of Rotundomys bressanus are larger, more lophodont, and have deeper valleys than those of R. intimus sp. nov. The upper molars of R. bressanus have the posteroloph completely fused with the metalophule, whereas there are various specimens in the sample from Batallones in which the posteroloph extends beyond its junction with the metalophule. Besides, most of the M1 of R. bressanus have a spur of the anterolophule, which connects the protocone to the labial anterocone, whereas most of the M1 of R. intimus sp. nov. lack it. The M2 of R. bressanus show a strong mesocone and usually a short but distinct mesoloph, which are absent on most of the equivalent teeth from Batallones. In addition, the m1 of R. bressanus have a well-marked, high anterolophulid and a well-developed metalophulid, whereas the anterolophulid and metalophulid are low, weak, and interrupted (or even absent) in most of the m1 of R. intimus sp. nov. Moreover, R. bressanus is characterized by the reduction of the labial anterolophid on the m1, which is present in R. intimus sp. nov.

Comparison with Rotundomys mundi Calvo, Elizaga, López-Martínez, Robles et Usera, 1978 [16]. This species was created on the basis of some isolated cheek teeth from the MN10 locality of Hijar-1 (Albacete, Spain). The holotype (H-7) is a right M2.
We were unable to locate any of the specimens mentioned in [16] despite our efforts, so their present whereabouts must be considered unknown. Agustí ([22]: 136) described Rotundomys cf. mundi from the late MN10 localities of Trinxera Nord Autopista, Trinxera Nord Autopista II, Trinxera Sud Autopista, and Can Perellada (Barcelona, Spain). Later, he changed this assignment to Rotundomys sp. [36]. However, according to Freudenthal et al. [7] this material would in fact correspond to R. mundi, the first interpretation of Agustí [22] being accurate. R. mundi is based on a very small number of specimens. As far as we can judge, R. sabatieri, which is known from many more specimens, is not fundamentally different from it. However, the scoring of these two species differs (e.g., characters 29 and 34), so we provisionally accept them as distinct pending further investigations. The morphology of the single recorded M2 of Rotundomys mundi is very different from that of the equivalent teeth of R. intimus sp. nov. The former has a complete anterior metalophule that is absent on the M2 of Rotundomys from Batallones, in which the metalophule is short and posterior. In addition, the M3 of R. mundi are much more reduced than those of R. intimus sp. nov. and they have a strong connection between the paracone and the labial anteroloph, which is unknown on the M3 of R. intimus sp. nov. On the basis of the rich sample of Rotundomys montisrotundi from Montredon (Herault, France), Aguilar [33] described various morphotypes that he found on the cheek teeth of this species. We found some of these morphotypes on the cheek teeth of R. intimus sp. nov., but in different percentages in the Spanish and French populations. For instance, 28% of the m1 show morphotype i (specimens with the metaconid isolated and a longitudinal anterolophulid connected to the anterolophid) in Batallones 5, whereas it has been found in only 9% of the specimens from Montredon. The connections between the metalophulid and the anterolophid or between the anterolophulid and the anterolophid are weaker in the Batallones sample than in R. montisrotundi. Moreover, some m1 of R. intimus sp. nov. have a weak but distinct posterior metalophulid, which is absent in R. montisrotundi. On the whole, the molars of R. intimus sp. nov. are less lophodont than those of R. montisrotundi. In fact, in the former species all cusps/ids are very distinct. In particular, the metacone is large and the cusps/ids of the anteroloph and anterolophid are distinct in the first molars. The cingula of R. intimus sp. nov. are less strong and the valleys shallower than in R. montisrotundi. R. intimus sp. nov. is also more robust than R. montisrotundi. The cheek teeth of Rotundomys sabatieri are more lophodont, with less distinct cusps, and are less robust than those of R. intimus sp. nov. Moreover, all M1 of R. sabatieri have the metalophule completely fused with the posteroloph and lack the labial posterosinus (morphotype c according to Aguilar [33]). On the contrary, on the M1 of R. intimus sp. nov. the metalophule is not completely fused with the posteroloph, leaving a small labial posterosinus, which disappears with wear. In addition, some of the M1 of R. sabatieri show a short but distinct mesoloph (or incomplete anterior metalophule) directed towards the metacone, whereas a true mesoloph is never present on the M1 of R. intimus sp. nov. (instead, there is a thickening of the anterior arm of the hypocone). Some of the M2 of R.
sabatieri show a double metalophule (morphotypes d and e according to Aguilar [33]), whereas none of R. intimus sp. nov. show it. With respect to the lower molars, the m1 of R. sabatieri have strong metalophulid-anterolophid and anterolophulid-anterolophid connections, which are weak, interrupted, or even absent on the equivalent teeth of the Batallones sample.

Comparison with Rotundomys freiriensis Antunes et Mein, 1979 [17]. This species was erected on the basis of 21 teeth recovered from the lower MN10 site of Freiria do Rio Maior (Santarém, Portugal) [17]. Its holotype, an isolated left m1, is housed at the Stratigraphical and Palaeobiological center of UNL. Additional material of this species has not been found to date. Rotundomys freiriensis is smaller than R. intimus sp. nov. The m1 of R. freiriensis lack the anterolophulid and have a transverse metalophulid connected to the protoconid instead of the anteroconid. All m1 of R. intimus sp. nov. have a distinct anterolophulid, and the metalophulid always points strongly forwards, being nearly longitudinal. Furthermore, the m2 of R. freiriensis have a distinct lingual anterolophid that is absent on the m2 of R. intimus sp. nov. The M3 of R. freiriensis are much more reduced than those belonging to the Batallones sample.

Discussion

The general morphological pattern of Rotundomys intimus sp. nov. recalls that of R. montisrotundi and R. sabatieri. However, the detailed comparison described above between the type material of the latter species and R. intimus sp. nov. reveals the existence of important differences between these taxa that justify the erection of the new species. R. intimus sp. nov. is characterized by being less lophodont and by having higher cusps/ids, weaker connections, shallower valleys, and thinner cingula than R. montisrotundi and R. sabatieri. There are also size differences between the type material of these taxa and the samples from Batallones. Figure 8 shows the scatter plot of the maximum length and width of the dental elements of all species belonging to the genus Rotundomys. It clearly shows the differences in size between R. intimus sp. nov. and the remaining species of the genus. The specimens from Batallones 5 are distributed in the lower range of R. montisrotundi, R. bressanus, and R. sabatieri and in the upper range of R. freiriensis and R. mundi. Several analyses of variance (ANOVA) were performed on the length and width of each dental element to assess the differences among the samples (Text S3). The results show significant differences for most dental elements between the population from Batallones 5 and Rotundomys montisrotundi on the one hand and R. sabatieri on the other. In fact, with the exception of the M3, the width of the m1, and the length of the M2, Tukey's Honest Significant Differences post-hoc test indicates that the length and width of the molars of R. montisrotundi are significantly larger than those of R. intimus sp. nov. from Batallones 5. The size differences found in these samples are particularly meaningful considering that the population of Montredon presents a very wide range of size dispersion (see Figure 8). With regard to R. sabatieri, this test shows that the lengths of the M1, m2, and m3 are significantly larger than in R. intimus sp. nov. from Batallones 5.
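For readers who wish to reproduce this kind of size comparison, the minimal Python sketch below runs a one-way ANOVA followed by Tukey's HSD post-hoc test. The measurements are randomly generated stand-ins, not the data of Text S3; the population names only echo the samples discussed, and the sample sizes, means, and dispersions are invented for illustration.

```python
# One-way ANOVA plus Tukey HSD on made-up m1 lengths for three populations.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical m1 lengths (mm); Montredon is given a wider dispersion on purpose
batallones = rng.normal(2.10, 0.05, 30)
montredon = rng.normal(2.25, 0.09, 40)
soblay = rng.normal(2.30, 0.06, 15)

f_stat, p_value = stats.f_oneway(batallones, montredon, soblay)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons with family-wise error control
values = np.concatenate([batallones, montredon, soblay])
groups = ["Batallones"] * 30 + ["Montredon"] * 40 + ["Soblay"] * 15
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```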
Phylogeny

In order to elucidate the relationships between the species pertaining to the genera Cricetulodon and Rotundomys and the position of the new species from Batallones within Rotundomys, the first cladistic analysis involving all species of these genera was conducted. A single most parsimonious tree was generated, with a length of 67 and a low degree of homoplasy (CI = 0.746 and RI = 0.825). Branch support was estimated through two complementary indices: Bremer Support [9] and Relative Bremer Support [10]. These indices are indicated for each node on the cladogram in Figure 9. It should be stressed that some of them are as low as 1, including that of the node from which Rotundomys intimus arises. On a side note, no difference in topology (only very slight CI and RI deviations) occurs when serial homologues (characters 14, 16, 17, 18, and 37) are run as single characters. The tree shows a completely resolved topology. Cricetulodon hartenbergeri and C. sabadellensis are positioned as sister species on the most basal branch. C. bugesiensis and C. lucentensis split off as sister species at the base of a clade that is one node less inclusive. C. meini inserts between them and the remaining species of the ingroup, which all belong to Rotundomys. These species are henceforth fully asymmetrically distributed along the crown of the cladogram, in an arrangement that lines up from R. freiriensis to R. bressanus plus R. montisrotundi. R. intimus sp. nov. is located in the middle of this sequence, flanked basally by R. mundi and apically by R. sabatieri. However, as explained in the comparisons above, the available material of R. mundi is so scarce that only about 60% of the characters could be scored. Thus, this taxon is prone to shift its phylogenetic position should new information become available. However, if we prune Rotundomys mundi from the ingroup before running the analysis, the topology of the tree obtained is not altered, which means that this species is not currently affecting the results of our analysis. The transformations supporting the topology of this tree (under the ACCTRAN and DELTRAN optimizations) are listed in Table 3. Each internal node is discussed below, beginning from the most basal (whenever both unambiguous and ambiguous synapomorphies support a given node, only the former are mentioned). Node 22 (Ingroup). This clade is supported by three exclusive and unambiguous synapomorphies: LM1/LM3 ratio between 1.78 and 1.58; anterolophulid mostly joined with the lingual cusp of the anterolophid (this character is lost at node 19; all taxa arising from this node have the anterolophulid mostly connected to the labial cusp of the anterolophid, except for R. freiriensis, which lacks the anterolophulid); and absence of the mesolophid on the m2. Node 13 (Cricetulodon hartenbergeri + Cricetulodon sabadellensis). Three exclusive and unambiguous synapomorphies support this clade: M2 with nearly transverse posterior metalophule; presence of a mesolophid; and metalophulid connected to the anterolophulid behind the anteroconid on the m3. This node is supported by an additional unambiguous and non-exclusive synapomorphy: the frequent absence of the anterior metalophule on the M1 (a parallelism with node 18 under ACCTRAN and node 17 under DELTRAN). Node 14 (Cricetulodon bugesiensis + Cricetulodon lucentensis). This clade is supported by an unambiguous and exclusive synapomorphy: absence of the anterior metalophule on the M1.
Two non-exclusive synapomorphies are also present at this node: presence of a forked anterolophule in some M1 (a parallelism with node 18 under ACCTRAN and with node 17 under DELTRAN); and metalophulid connected to the anteroconid on the m2 (a parallelism with node 18). Node 21 ((Cricetulodon bugesiensis + Cricetulodon lucentensis) + more derived species). Three unambiguous and exclusive synapomorphies support this node: m1 longer than 1.95 mm (this character is lost in Cricetulodon bugesiensis and Rotundomys freiriensis, which have a length of the m1 between 1.70 and 1.95 mm); M1 with posterior metalophule fused with the posteroloph or very oblique backwards; and absence of the labial posterosinus. Node 20 (Cricetulodon meini + more derived species). A single unambiguous and exclusive synapomorphy supports this node: absence of the anterior metalophule on the M2 (this character is reversed in Rotundomys mundi, in which this structure is present in some specimens, and in the taxa arising from node 16). Node 19 (Rotundomys freiriensis + more derived species). The clade Rotundomys is sustained by four exclusive and unambiguous synapomorphies: loss of the anterior protolophule on the M1-M3, and protoconid on the m1 connected to the hypolophulid in a regularly curved crest. In addition, three non-exclusive and unambiguous synapomorphies support this node, namely the absences of: the mesoloph on the M2 (a parallelism with some specimens belonging to Cricetulodon sabadellensis, C. lucentensis, and C. bugesiensis); the labial posterosinus on the M3 (a parallelism with C. sabadellensis); and the mesolophid on the m1 (a parallelism with some specimens of several species of Cricetulodon).

Table 3. Synapomorphies plotted in the most parsimonious tree under ACCTRAN and DELTRAN optimizations.

[...] stage. We, therefore, continue using Rotundomys in the narrower sense used by Freudenthal et al. [7] and others. The evolution of Rotundomys is marked by the development of lophodonty on the cheek teeth, the loss of the anterior protolophule on the upper molars, the connection between the protoconid and the hypolophulid in a regularly curved crest, and the complete loss of mesolophs and mesolophids. However, the absence of the mesoloph on the M1 and M2 and of the mesolophid on the m1 is also found in some specimens of several species of Cricetulodon. This suggests that these characters were not stable in populations of Cricetulodon, but quickly became so in the course of Rotundomys evolution. The mesolophs and the mesolophids are lost on the M3, m2, and m3 earlier than on the remaining teeth. The evolution from a nearly transverse to a strongly backwards-oblique posterior metalophule, commonly fused with the posteroloph on the upper molars, begins before the establishment of the Rotundomys clade. The same holds true for the loss of the anterior metalophule on the M1 and M2, which occurs in most specimens of Rotundomys spp. Nevertheless, this structure is also lost in most or all M1 of some species of Cricetulodon (C. sabadellensis, C. hartenbergeri, and C. meini) and on the M2 of C. bugesiensis and C. meini. Rotundomys intimus sp. nov. shares one exclusive and unambiguous synapomorphy with the more derived species of the genus: an occasional transverse connection between the metalophulid and the anterolophulid on the m1 (character 29, state 1). These species also present the unambiguous synapomorphy of having lost the lingual anterolophid on the m2 (character 34, state 1), which is not exclusive as it is also found homoplastically (as a parallelism) in Cricetulodon bugesiensis.
Two other possible synapomorphies are the anterolophule that is forked in at least part of the M1 (character 5, state 1) and the anterior metalophule that is absent in most, but not all, of the M1 (character 8, state 1). They are, however, equivocal (the M1 is unknown in R. mundi, which basally flanks R. intimus sp. nov.) and, in any event, non-exclusive, as the former character-state has been acquired independently in (C. bugesiensis + C. lucentensis) and the latter in (C. hartenbergeri + C. sabadellensis). On the other hand, R. intimus sp. nov. is set apart from the clade formed by the most evolved species of Rotundomys by its archaic lophodonty showing distinct cusps (character 1, state 1), the shallow depth of the valleys of the occlusal surface (character 2, state 0), and the subdivided anterocone on the M1 (character 4, state 0). Indeed, in R. sabatieri, R. montisrotundi, and R. bressanus the lophodonty is perfected, the valleys are deep, and the anterocone on the M1 is crest-shaped (all are exclusive and unambiguous synapomorphies). In addition, in these species the anterior metalophule is mostly absent on the M2 (an unambiguous synapomorphy), as occurs homoplastically in C. bugesiensis, whereas it is always absent in R. intimus sp. nov. (a character-state acquired in the ancestor of C. meini and more derived species, but reversed in R. mundi). The moderately developed lophodonty of R. intimus sp. nov. can be optimised as either a reversion to the state that appears at node 19 (ACCTRAN) or as a parallelism with R. freiriensis (DELTRAN). However, the hypothesis of a reversion is not very plausible because once lophodonty is acquired within a lineage, it is retained. The fact that the phylogenetic position of Rotundomys mundi is uncertain due to the amount of missing data makes it possible for Rotundomys intimus to actually be the second most basal taxon within Rotundomys, i.e., not as distant from R. freiriensis as reflected in our analysis.

Supporting Information

Text S1. List of characters and character states used for the phylogenetic analysis.
2016-05-12T22:15:10.714Z
2014-11-12T00:00:00.000
{ "year": 2014, "sha1": "fc830b5d49eb020eef96c13385d8bc5626aedf0a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0112704", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fc830b5d49eb020eef96c13385d8bc5626aedf0a", "s2fieldsofstudy": [ "Biology", "Geology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
243480521
pes2o/s2orc
v3-fos-license
Study on Enhanced Methods for Calculating NH3 Emissions from Fertilizer Application in Agriculture Sector

Ammonia is a representative precursor of secondary PM-2.5, and the need for its management is growing as the health and quality-of-life damage caused by fine particulate matter worsens. The main source of ammonia is the agricultural sector, and in Korea 79% of total ammonia emissions come from this sector. Among its sources, the method for calculating emissions of ammonia released by fertilizer use carries high uncertainty, and because the inventories of the U.S. and Europe are borrowed, the inventory needs to be adapted to the situation in Korea. In this study, the ammonia inventories for the agricultural sector in Korea and abroad were examined, and additional activity data that can be used were reviewed. In addition, in order to improve the emission calculation method, emissions were calculated in three ways using different factors. As a result, it was confirmed that emissions vary depending on the type of land use and on whether cultivated crops are considered, and the possibility of excessive fertilizer top-dressing by farmers was confirmed. To calculate emissions at a more detailed level based on this study, basic data such as fertilizer application methods and the regional distribution of crops should be collected systematically, and related follow-up studies should be conducted.

Introduction

Various studies worldwide have reported damage to human health caused by particulate matter, which has become a critical social issue [1][2][3]. Particles smaller than or equal to 2.5 µm in diameter are defined as fine particulate matter (PM-2.5). Since the diameter of PM-2.5 is less than 1/20 the diameter of a human hair, these particles can reach the pulmonary alveoli or bronchial tubes without being filtered by the nose and cause respiratory diseases. Additionally, they may affect various other organs and cause cardiovascular disease, lung cancer, cell aging, and infections [4][5][6][7]. Hence, the International Agency for Research on Cancer, an affiliated organization of the World Health Organization, classified particulate matter as a carcinogen in 2013 [8]. Depending on their formation, PM-2.5 are categorized as either primary or secondary PM-2.5. Primary PM-2.5 are generated at the source and are directly emitted as solid fine dust. Secondary PM-2.5 refers to emitted SOx and NOx that form ammonium sulfate or ammonium nitrate by reacting with substances such as ammonia (NH3) in the air [9][10][11]. Secondary PM-2.5 accounts for over 72% of the PM-2.5 generated in South Korea [12], and its importance is increasing due to an increase in the number of high PM-2.5 concentration days. NH3 concentrations are closely related to changes in PM-2.5 concentration, thus requiring accurate emission calculations and the identification of emission sources. However, in comparison to SOx and NOx, NH3 is under less strict control in South Korea. A management system is enforced for the total SOx and NOx load, and these pollutants are measured in real time at large-scale workplaces; currently, there is no such management scheme for NH3. According to South Korea's national emission statistics for 2017, total NH3 emissions were 308,298 tons, of which the majority (79.3%) was generated by the agricultural sector [13]. Chemical fertilizers are excessively applied on South Korean farmlands to increase productivity.
The aggregate amount of chemical fertilizer application in South Korea ranks the highest among Organisation for Economic Co-operation and Development countries [14]. Therefore, regulation is necessary for NH3 emissions generated by volatilization from soils due to excessive chemical fertilizer application. Currently, the list of emission sources and most emission factors used for calculating NH3 emissions of fertilized farmland in South Korea are adopted from overseas systems. In addition, an accurate emission calculation method based on the agricultural environment in South Korea is required, because the calculated emissions are distributed evenly over each month and fail to reflect the circumstances of the actual environment. In this study, global NH3 emission calculation methods were reviewed to devise potential enhancements for an accurate and reliable method of NH3 emission calculation in the South Korean agricultural sector.

Review of Ammonia Measurement Methods in the Agriculture Sector

Each country uses different methods to control and account for NH3 emission sources in the agricultural sector, reflecting country-specific factors. The United States (US) and the European Union (EU) provide detailed guidelines for calculating NH3 emissions in the agricultural sector. These guidelines are reviewed in Table 1. The calculation method is divided into Tiers 1, 2, and 3, applied according to the circumstances of each country:
- Tier 1: utilization of a basic emission factor and the annual fertilizer supply
- Tier 2: consideration of soil pH, climatic conditions, and nitrogen loss by fertilizer
- Tier 3: use of a model considering soil pH, fertilizer amount, rainfall, temperature, etc.
For spatial-temporal resolution, emissions are allocated in consideration of the spatial distribution by crop type and average nitrogen input data by crop. In the US, agricultural emissions are classified into four categories: crop cultivation, dust from livestock, fertilizer application, and livestock feces management. NH3 emission sources from fertilizer application are classified into 14 nitrogen-based fertilizers (including anhydrous ammonia, aqueous ammonia, and urea). The Fertilizer Emission Scenario Tool for CMAQ is used in the emission calculations, and the soil profile, climate variables, volume of fertilizer supply, and the status of the cultivation area are used as activity data. Information on cultivation management (fertilizer application period, fertilizer components, application methods, and application quantities) is obtained through surveys, and the soil nutrient content and crop nutrient demand are estimated for use in the NH3 emission calculation. The emission factors are calculated monthly for each administrative region, considering the percentage of total nitrogen fertilizer use emitted as NH3. The calculated emissions thus reflect the crop cultivation circumstances in the US [15]. The EU controls NH3 emissions through country-specific methods stated in the EMEP/EEA Air Pollutant Emission Inventory Guidebook. The guidebook classifies the agricultural sector into four emission sources according to the NFR (Nomenclature For Reporting) code: manure management, crop production, agricultural soils, and other agriculture, which includes the use of pesticides and the burning of agricultural waste in the field. NH3 emission sources from fertilizer applications are restricted to those from nitrogen-based fertilizers.
There are three emission calculation methods: Tier 1, Tier 2, and Tier 3. In Tier 1, the basic emission factors are multiplied by the annual fertilizer supply volume. In Tier 2, the climatic zone and soil pH are considered based on the 2006 Intergovernmental Panel on Climate Change Guidelines for National Greenhouse Gas Inventories, and unique emission factors for 11 nitrogen-based fertilizers are used. Finally, specific data such as crop growth processes, land type, climate conditions, the local distribution by crop type, and the average nitrogen demand by crop type are utilized in Tier 3 for greater precision [16]. In South Korea, the agricultural sector is classified into two emission sources: fertilized farmland and feces management. The emission sources within fertilized farmland are further classified into 10 types of nitrogen-based fertilizers (such as urea, compound fertilizers, and ammonium sulfate). The emissions are calculated by multiplying the regional fertilizer supply by the fertilizer nitrogen content and the emission factor. Then, the emission quantities are allocated according to the farmland area. The volume of fertilizer supplied regionally by the National Agricultural Cooperative Federation is used, since accurate figures for the applied fertilizer quantities are difficult to obtain. In addition, the agriculturally active period is assumed to be eight months (from March to October), considering the average temperatures. During this period, NH3 emissions are evenly distributed. However, the exact fertilizer type and application location are unknown, and the assumption that the annual supply of fertilizer is completely consumed in the supplied region differs from reality. Lastly, the even distribution of NH3 emissions from March to October does not reflect actual agricultural activities, and emissions from greenhouses are not considered [17]. The US and most EU member countries use models to calculate national emissions. This allows the exact location and specific management information (cultivation plans, fertilizer application periods, crop nutrient demands, etc.) to be taken into account. Currently, obtaining specific information related to NH3 emissions and applying such models are difficult in South Korea. Nevertheless, accuracy in emission control can be improved using the available data on farmland area, cultivated crops, and nationally recommended cultivation methods by crop type.

Study Site Selection

Jeolla Province was selected as the pilot study area for the development of an enhanced NH3 emission calculation method for fertilized farmland (Figure 1). This region emits the largest amount of NH3 from the largest area of arable land in South Korea. National emission statistics for 2017 revealed that fertilized farmland emitted 17,754 tons of NH3 out of 244,335 tons of total NH3 emissions, and 6189 tons (34.9%) originated from Jeolla Province. South Korea has a total farmland area of 1,320,795 ha, of which 493,059 ha (37.3%) is distributed in Jeolla Province [18].
Fertilized Farmland NH3 Emission Calculation Method

In South Korea, NH3 emissions are calculated based only on the regional fertilizer supply volume and the nitrogen content. They are allocated to administrative districts based on the farmland area during the agriculturally active period (March-October), which does not reflect the actual duration or location of fertilization. The spatial and temporal resolution of the NH3 emission calculation method was enhanced and compared to the existing method, as shown in Table 2. In this study, 2017 data were used for comparison with the latest national emissions (2017). In the enhanced calculation method, the most common land use types in South Korea (rice paddies, fields, greenhouses, and pomiculture) were considered according to the current emission source classification system, together with the current fertilization types, the crops cultivated by region and farmland type, the standard fertilizer application amount for each crop type, and the farming schedules. Method I (NH3 emission calculation considering the volume of supply for each fertilizer type) considers the volume of supply for each fertilizer type to calculate the NH3 emission. This method is currently used in NH3 emission calculations for fertilizer application in South Korea. The emission sources' classification system is based on fertilizer types (including urea, compound fertilizers, and ammonium sulfate), as shown in Table 3. The NH3 emission is calculated by multiplying the supply volume of each fertilizer by its nitrogen content and emission factor, as shown in Equation (1). The EPA emission factors were adopted for ammonium sulfate and other nitrogen-based fertilizers, while emission factors developed in South Korea are used for urea and compound fertilizers, as shown in Table 4.

E = A × (N / 100) × EF (1)

where E is the NH3 emission (kg-NH3/yr); A is the supply volume of fertilizer (ton-fertilizer/yr); N is the nitrogen content of the product (%); 100 is the nitrogen content unit conversion factor; EF is the emission factor (kg-NH3/ton-fertilizer).
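The Method I computation reduces to summing Equation (1) over fertilizer types. The minimal Python sketch below illustrates this; the fertilizer names, nitrogen contents, and emission factors are illustrative placeholders, not the official Korean values of Tables 3 and 4.

```python
# Method I: annual NH3 emission from regional fertilizer supply, Equation (1).
# All numbers are hypothetical placeholders for illustration only.

supply = {"urea": 1200.0, "compound": 800.0, "ammonium_sulfate": 150.0}   # ton/yr
nitrogen_pct = {"urea": 46.0, "compound": 21.0, "ammonium_sulfate": 21.0} # N (%)
ef = {"urea": 120.0, "compound": 40.0, "ammonium_sulfate": 80.0}          # kg-NH3/ton

def method1_emission(supply, nitrogen_pct, ef):
    """E = A * (N / 100) * EF, summed over fertilizer types (kg-NH3/yr)."""
    return sum(a * (nitrogen_pct[f] / 100.0) * ef[f] for f, a in supply.items())

print(f"Annual NH3 emission: {method1_emission(supply, nitrogen_pct, ef):,.0f} kg")
```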
Method II (NH3 emission calculation considering the representative crops and the standard fertilizer application amount) considers the volume of fertilizer supply and the farming schedule to spatially and temporally enhance the resolution of the NH3 emission calculation. The emission sources' classification system is based on land use types (rice paddies, fields, greenhouses, and pomiculture), as shown in Table 5. The order of the emissions calculated using Method II is shown in Figure 2. In this study, the area of four farmland types (rice paddies, fields, greenhouses, and pomiculture) and the area cultivated for every crop type were collected [19]. Representative crops of regional farmlands were selected based on a land occupation rate of approximately 80%. The farming schedules and standard fertilization application quantities of the selected crops were determined according to the standard cultivation method [20]. Then, weighted values for each month were calculated, and the NH3 emission quantities for each farmland type were calculated using Equation (2). The fertilizer supply used in Method I was assumed to have been allocated by area to each farmland type. The same emission factor values as in Table 4 were applied. Previous studies confirmed that over 80% of NH3 generally volatilizes within 1 week [21][22][23][24], although volatilization is affected by temperature and season [25]. Therefore, the weighted values were calculated based on the assumption that volatilization and fertilizer application occur simultaneously.

E = A × Weight × (N / 100) × EF (2)

where E is the NH3 emission (kg-NH3/month); A is the supply volume of fertilizer (ton-fertilizer/month); Weight is the representative crops' weighted value for each month; N is the nitrogen content of the product (%); 100 is the nitrogen content unit conversion factor; EF is the emission factor (kg-NH3/ton-fertilizer).
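A compact sketch of the Method II monthly allocation of Equation (2) follows. The monthly weights here are invented (a rice-like schedule peaking in May, July, and August); in the study they derive from the standard farming schedules of the representative crops, and the other inputs are placeholders as well.

```python
# Method II: monthly NH3 emissions for one farmland type, Equation (2).

annual_supply = 1200.0  # ton-fertilizer/yr allocated to this farmland type
n_content = 46.0        # nitrogen content of the product (%), urea-like
ef = 120.0              # emission factor (kg-NH3/ton-fertilizer), placeholder

# Fraction of the year's fertilization falling in each month (sums to 1);
# an invented rice-like schedule peaking in May, July, and August.
weights = {"May": 0.5, "Jul": 0.3, "Aug": 0.2}

monthly_e = {month: annual_supply * w * (n_content / 100.0) * ef
             for month, w in weights.items()}
for month, e in monthly_e.items():
    print(f"{month}: {e:,.0f} kg-NH3")
```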
NH3 Emission Calculation Method with Spatial and Temporal Resolution Enhancement Considering Nitrogen Application

Method III (NH3 emission calculation considering crop-specific nitrogen application) considers the nitrogen quantities in the farming schedules to spatially and temporally enhance the resolution of the NH3 emission calculation. The emission sources' classification system is based on land use types (rice paddies, fields, greenhouses, and pomiculture), as shown in Table 6. The order of the emissions calculated using Method III is shown in Figure 3. Similar to Method II, Method III enhanced the spatial and temporal resolution of the emission calculation by taking into account the standard fertilizer application amount for the farmland area based on the nitrogen content. Based on the farmland area for each selected crop type, the standard fertilizer application amount was multiplied to generate the monthly nitrogen application amount, as shown in Equation (3).

E = Cultivation area × Standard fertilization application quantity × 10^4 × 10^-3 (3)

where E is the total amount of nitrogen application (kg/month); cultivation area is the area cultivated for every crop type (ha); standard fertilization application quantity is the standard fertilization application quantity of the selected crops (g-N/m^2); 10^4 is the area unit conversion factor (ha to m^2); 10^-3 is the fertilization unit conversion factor (g to kg).
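Equation (3) is a straightforward unit conversion, as the sketch below shows. The crops, areas, and application rates are hypothetical; the factors 10^4 (ha to m^2) and 10^-3 (g to kg) follow the unit definitions above.

```python
# Method III: total monthly nitrogen application, Equation (3).

crops = {
    # crop: (cultivation area in ha, standard application in g-N/m^2 this month)
    "rice":   (5000.0, 9.0),
    "barley": (1200.0, 0.0),  # not fertilized this month in this example
}

def nitrogen_applied_kg(crops):
    """Sum of area * rate, converted from ha and g to m^2 and kg."""
    return sum(area * rate * 10**4 * 10**-3 for area, rate in crops.values())

print(f"Nitrogen applied this month: {nitrogen_applied_kg(crops):,.0f} kg")
```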
Results of NH3 Emission Calculation Considering the Volume of Fertilizer Supply

Using Method I (NH3 emission calculation considering the volume of supply for each fertilizer type), the total NH3 emissions in 2017 in Jeolla Province were calculated to be 2,420,292 kg, as shown in Table 7. Among the 14 administrative districts in Jeolla Province, Gimje city generated the highest NH3 emission at 337,283 kg, followed by Iksan city (316,961 kg) and Gunsan city (313,683 kg). Together, four cities (Gimje, Iksan, Gunsan, and Jeongeup) accounted for 51% (1,231,034 kg) of the total emissions. The monthly emission in Jeolla Province calculated using Method I was the same from March to October, as shown in Figure 4. Urea was the most frequently applied fertilizer, followed by compound fertilizers and ammonium sulfate. Currently, South Korea's classification system identifies "farmland" as an emission source, which is not further classified into different land use types. Therefore, the spatial and temporal emission characteristics are unidentifiable.

Results of NH3 Emission Calculation Considering the Regional Representative Crops and Their Standard Fertilizer Application Amount

Using Method II (NH3 emission calculation considering the representative crops and standard fertilizer application amount), the total NH3 emission in Jeolla Province was calculated to be 2,439,895 kg, as shown in Table 8. Gimje city generated the largest NH3 emission at 339,241 kg, followed by Iksan and Gunsan city. The monthly emissions varied when using Method II for the calculation, as shown in Figure 5. The monthly emission varied according to the fertilizer type when the representative crops were equal, while the overall pattern remained similar. Urea application generated the highest amount of NH3 emissions, followed by compound fertilizers and ammonium sulfate. Ammonium sulfate generated the lowest amount of NH3 emissions in all farmlands. Rice paddies fertilized by urea generated the largest emissions in May (531,000 kg), followed by fields (urea, 153,000 kg in February and October, lowest from December to January), greenhouses (highest emissions in March, even throughout the year), and pomiculture (highest emissions in February and March). In more detail, rice is cultivated on 98% or more of the rice paddy area and, under the farming schedules, is fertilized in May, July, and August, so emissions peak in those months. In fields, 35% of the total area is cultivated with barley, wheat, and onions, and these crops are fertilized in February and October, so emissions are high in those months. In greenhouses, watermelon occupies 25% of the cultivation area, the largest proportion, and the largest emissions occur in March because additional fertilization is given in that month. In pomiculture, 39% of the total area is cultivated for apples and persimmons, and these crops receive basal fertilization in February and March. Based on South Korea's current NH3 emission calculation method, spatial and temporal emission characteristics are unidentifiable. Therefore, the representative crop and farmland type identifications used in Method II will enhance NH3 emission controls.

Table 9 shows the total amount of nitrogen application (15,488,710 kg) and the NH3 emissions in Jeolla Province calculated by Method III (NH3 emission calculation considering crop-specific nitrogen application). The regional nitrogen application amount was converted to NH3 emissions by multiplying it by the existing fertilizer type-dependent emission factors, which were additionally weighted based on the regional supply volumes of the different fertilizer types. A total of 1,505,393 kg of NH3 was emitted. Gimje city generated the largest amount of NH3 emissions at 246,627 kg, followed by Iksan and Jeongeup city. The regional emission values derived from Method III differed from those derived from Method II, which is likely caused by the difference in the manner of fertilizer application between Jeongeup and Gunsan city. As shown in Figure 6, monthly NH3 emissions varied in Method III, since the farming schedule for each crop was considered. The fertilizer application schedules were the same as in Method II, which resulted in a similar monthly pattern; however, the results showed different emission levels.
In May, July, and August, when emissions are highest, the NH3 emissions calculated by Method III (1,050,661 kg) were lower than those calculated with Method II (1,063,984 kg) by 13,322 kg. Method III yielded lower emissions for the other farmland types as well. This suggests that actual farmlands are fertilized beyond the standard amounts set by South Korea.

Comparison of the Fertilized Farmland NH3 Emission Calculation Methods

A comparison between the previous and the enhanced NH3 emission calculation methods is shown in Figure 7. Method I is based on the volume of fertilizer supply, Method II adds the consideration of regional representative crops and the standard fertilization application amount for each crop, and Method III incorporates the nitrogen application amounts, resulting in 2420, 2439, and 1505 tons of calculated NH3 emissions, respectively. Methods I and II, which used data based on current activities, resulted in similar emission values. Method II has the advantage that fertilizer supply data are available from the official national emission calculation. For actual compound fertilizers, the nitrogen content varies by manufacturer. In this study, an average value was used instead of the varying nitrogen contents, which may have affected the results. Method III generated the lowest emission value, which is attributable to excessive fertilization. Although chemical fertilizers increase agricultural productivity, their excessive application may impoverish the soil and negatively affect crop physiology and quality [26][27][28]. The standard fertilizer application amount for each crop is defined in South Korea; however, additional amounts are optionally applied to increase productivity. Previous studies showed that nitrogen-based fertilizers are applied at 1.5-2.4 times the standard amount [29]. Studies that evaluated changes in soil properties by farmland type showed that in rice paddies chemical fertilizers are mostly applied within the optimal range, whereas in greenhouses and pomiculture chemical fertilizer applications often exceed the optimal range [30,31].
The results from Method III differ from those from Method I by 915 tons, demonstrating the excessive application of fertilizer on actual farmlands. The specific regional fertilizer application excess can be ascertained using Method III combined with the annual data on soil quality and the state of the soil provided by the National Institute of Agricultural Sciences in South Korea. These data may be useful for the development of soil and NH3 emission control policies [32]. However, this combination requires complementary measures to take into account fertilizer-specific emission factors and to convert the calculated total nitrogen application amount to NH3. Monthly emissions based on fertilizer application are shown in Figure 8. The regional crop characteristics reflected in Methods II and III are expected to improve on the evenly distributed emissions of Method I. In Methods II and III, the largest emissions were generated in May, July, and August. This was likely caused by rice paddy NH3 emissions, which exceeded those of the other farmland types. In fact, rice paddies account for the largest proportion of farmland area in Jeolla Province, with rice paddies at 68.1%, fields at 22.9%, greenhouses at 3.1%, and pomiculture at 5.9%. Rice is cultivated on more than 98% of the rice paddy area and, according to the standard cultivation method, is fertilized in May, July, and August, which explains the monthly pattern shown in Figure 8.

Conclusions

With the increasing number of days with high fine dust concentrations, and as NH3 is the main source of secondary PM-2.5 formation, controlling NH3 emissions is crucial. The US and the EU members monitor their country-specific farmland types, climate zones, precipitation values, temperatures, soil pH, and cation exchange capacities (CECs), given that the main source of NH3 emissions is the agricultural sector. In contrast, South Korea's NH3 emission calculation method only considers the regional fertilizer supply volume and the nitrogen content of each product.
The calculated amount of NH3 emissions is distributed evenly over each month, which fails to reflect the spatial and temporal characteristics of crop cultivation periods, which vary with latitude, and of greenhouse crops cultivated in winter. In this study, the current NH3 emission calculation method was reviewed and two enhanced methods were derived to increase the accuracy and reliability of the results. Consequently, Methods II and III resulted in varied monthly emissions based on land use and cultivated crop types. Additionally, actual farmlands may be applying fertilizers in excess, as determined by comparing Methods I and III. Methods II and III distinguish farmlands by land use type and consider crop-specific farming schedules by selecting representative regional crops, allowing for the identification of monthly and farmland-specific emission characteristics. In addition, the theoretical crop-specific nitrogen demand can be calculated using Method III, which may increase the precision of the NH3 emission calculation and the accuracy of emissions control when combined with regional data on soil quality and the state of the soil. This calculation method may be used for precise emissions control, as is done in the US and the EU. Systematic data collection on future crop management plans, fertilizer application methods, and regional distributions of crops will enable precise emissions control in South Korea. NH3 emission prediction models can be developed in the future with the continuous accumulation of data on soil quality and the state of the soil, climate zones, and climatic conditions.
2021-11-05T15:11:52.715Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "f9b624bd1a0576d0518da82ae3114636652a5095", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/21/11551/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "474496cfa689a333b2d6d50a1e3745c9fcb361d0", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
108901059
pes2o/s2orc
v3-fos-license
A Framework of Prototype Compressor Impeller Auto-Test System

This paper presents a framework for a prototype compressor impeller auto-test system. Two types of performance tests can be conducted in a single pipeline layout. Work conditions are accurately controlled, status data are acquired synchronously and precisely, and performance evaluation is accomplished efficiently. This framework proves highly effective in improving on the traditional manual test method.

I. INTRODUCTION

Compressors are critical equipment in many industries. The performance of the impeller, which is the core component of a compressor, directly affects the performance of the entire compressor. Therefore, testing a prototype impeller plays an important role in the compressor industry. The traditional manual test method is time-consuming, and the accuracy of the test results depends on the skill and experience of the operating crew. In this paper, we present a framework for building an automatic test system to accomplish the test. The structure of the auto-test system is introduced in Part II. Part III shows a test flow and its differences from the traditional manual test flow. As a conclusion, Part IV summarizes the advantages of employing the auto-test system and presents further plans to improve this system.

II. SYSTEM STRUCTURE

The basic theory of the prototype impeller performance test is as follows: the impeller is installed in a single-stage compressor. When the test begins, the rotation speed of the compressor is set to the designed rotation speed of the impeller. By changing the area of the compressor outlet section, the work condition of the compressor is brought under control. The test system records status data of the compressor under different work conditions. By calculating the characteristic curve from that data and comparing the curve with the designed parameters, the performance of the impeller can be effectively evaluated [1][2]. Fig.1 shows the general structure of the system. A field data acquisition (DAQ) device collects status data of the compressor. A field controller handles the mechanical part of the system. A control room host processes status data, manages the test flow, and calculates the characteristic parameters of the under-test impeller. A main network handles the communication between the devices presented above.

Figure 1. The general structure of the system.

A. Pipeline layout

Conduit pipes are built surrounding the compressor, as shown in Fig.2. In order to measure the flow of the gas running into the compressor, a throttle device is set before the inlet of the compressor. Valve #2 is installed after the outlet of the compressor. By adjusting the opening of this valve, the pressure of the gas running out from the compressor can be changed, as a result of which the work condition of the compressor is completely under control. Two types of tests are involved: the open loop test and the closed loop test. Some of the valves are used to organize the pipeline for each type. By opening valves #1 and #3 and shutting valves #4 and #6, the pipeline is set to an open loop. On the contrary, opening valves #4 and #6 as well as shutting valves #1 and #3 forms a closed loop pipeline. When the impeller is under a closed loop test, in order to ensure the stability of the compressor in every work condition, the pressure and temperature at the inlet of the compressor should be maintained at proper values. Valve #5 in the pipes is used to control the admission of gas from the gas bottle, and valve #3 is used to release gas from the pipeline. By controlling these two valves, the stability of the pressure at the inlet is guaranteed.
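The loop-selection logic above can be summarized as a simple interlock table, as in the sketch below. The valve numbers (#1, #3, #4, #6) follow the text, but the dict-based control interface is an assumption made for illustration, not the actual field controller API.

```python
# Valve states that select the pipeline type, per the layout described above.
OPEN_LOOP = {1: "open", 3: "open", 4: "shut", 6: "shut"}
CLOSED_LOOP = {1: "shut", 3: "shut", 4: "open", 6: "open"}

def configure_pipeline(mode):
    """Return the valve states that set the pipeline to the requested loop."""
    if mode == "open":
        return OPEN_LOOP
    if mode == "closed":
        return CLOSED_LOOP
    raise ValueError(f"unknown pipeline mode: {mode!r}")

for valve, state in configure_pipeline("closed").items():
    print(f"valve #{valve}: {state}")
```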
In the closed loop pipeline, a cooler is used to cool down the gas that runs from the outlet of the compressor. Valves #7 and #8 control the flow of the water in the cooler, which means the temperature of the gas in the pipeline is also under control. All pressure and temperature data necessary to calculate the performance parameters of the prototype impeller are acquired from the inlet and outlet of the compressor, together with the differential pressure between the gas running into and out of the throttling device, as shown in Fig.3. By inserting probes into the pipe and connecting them to pressure transmitters with thin tubes, the pressure data are output as electric signals. Temperature data are acquired by thermocouples. An NI PXI-SCXI combination chassis [3] with a real-time controller and DAQ devices is employed to collect signals from all instruments and sensors. Signals from the pressure transmitters are directly connected to the PXI DAQ devices. Signals from the thermocouples are gathered by the PXI DAQ devices as well, passing through a cold-end compensator, which is used to avoid ambient interference, and an SCXI signal conditioning module.

B. Field data acquisition

In order to reduce the load on the main network, data from the field are preprocessed before being transferred to the control room host.

C. Wireless data acquisition

Several kinds of problems arise when installing sensors and acquiring data; wiring is one of them. When it is hard to connect the sensors to the DAQ device, wireless transmission through the WIA-PA protocol is employed. WIA-PA is an industrial wireless communication standard that is data-stable and suitable for real-time applications [4]. Sensors are connected to a wireless node, which provides pre-processing functions. Data from the wireless nodes are then received by a WIA-PA gateway. The gateway communicates directly with the control room host on the main network.

D. Field controller

An NI CompactRIO device [5] is employed as the field controller in the auto-test system. The field controller handles all valves. Specifically, it first switches the pipeline between the open loop and the closed loop by opening and shutting the proper valves presented above. Secondly, it uses valve #2 to change the work condition. Last but not least, in the closed loop test two PID processes are implemented in the controller to maintain the temperature and pressure of the gas running into the compressor by controlling the related valves introduced in Part II-A. The other task of the field controller is to adjust the rotation speed of the motor that drives the compressor to ensure safety and stability, especially during the start procedure. A ProfiBus master module is installed in the controller to read statuses from, and send commands to, the inverter that drives the motor as a ProfiBus slave. During the start procedure, the rise of the rotation speed strictly follows a speed-time curve that is pre-set in the controller.
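As a rough illustration of the field controller's two regulation tasks, the sketch below pairs a textbook PID step (for holding the inlet pressure via the gas admission valve) with linear interpolation over a pre-set speed-time curve. The gains, the setpoint, and the curve points are invented; the actual CompactRIO implementation is not published in the paper.

```python
# One PID iteration plus start-up speed interpolation, all values hypothetical.

def pid_step(setpoint, measured, state, kp=0.8, ki=0.1, kd=0.05, dt=0.1):
    """One PID iteration; `state` carries the integral and the previous error."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Hypothetical pre-set start-up curve: (time in s, rotation speed in rpm)
speed_curve = [(0, 0), (30, 3000), (90, 8000), (150, 12000)]

def target_speed(t):
    """Linearly interpolate the commanded speed from the pre-set curve."""
    for (t0, s0), (t1, s1) in zip(speed_curve, speed_curve[1:]):
        if t0 <= t <= t1:
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
    return speed_curve[-1][1]

state = {"integral": 0.0, "prev_error": 0.0}
correction = pid_step(setpoint=101.3, measured=99.8, state=state)  # kPa
print(f"inlet valve correction: {correction:+.3f}")
print(f"commanded speed at t = 60 s: {target_speed(60):.0f} rpm")
```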
E. Control room host

Fig.4 presents the architecture of the software in the control room host. It is a modularized software system in which every module is an independent process. A kernel module that manages a data tunnel and an event queue takes charge of all internal behavior of the system, including starting and shutting down the other modules, exchanging data between modules through the data tunnel, and scheduling modules through the event queue. This architecture guarantees that the system can easily be modified and extended. The communication management module receives data from the field DAQ device and posts them to the data tunnel. At the same time, this module sends commands and set values to the field controller. The data processing module processes the status data of the compressor; it sends the processed data to the data tunnel and posts monitor events to the event queue to inform the operator. The test flow management module schedules the entire test. Before a test, this module sets the pipeline to the required type and downloads the rotation speed adjustment curve to the field controller. During the test, it monitors changes in the temperature at the outlet section to determine whether the compressor is in a stable work condition. After confirming that the condition is stable, it records the data and sends a command to the field controller to change the work condition, repeating until the end of the test. The data analyzing module calculates the performance parameters for every work condition from the recorded data and presents the characteristic curve of the prototype impeller. By comparing the curve with the design parameters, this module evaluates the design and generates a final test report. The database management module provides functions for saving and searching information about sensor specifications, impeller design parameters, and historical test data.

III. AUTO-TEST FLOW

An auto-test flow is shown in Fig.5. As shown in Fig.6, the auto-test system has increased the efficiency of the prototype impeller test by recording more work conditions while reducing the total test time. Furthermore, before this system was employed, accomplishing a test took four people: one to control the valves and the motor, two to record the field data, and one to analyze the data. Now a single operator in the control room can accomplish the same test alone.

Figure 6. Time comparison between using the traditional method to record 6 work conditions and using the auto-test system to record 8.

IV. CONCLUSION

This research presents a framework for a prototype compressor impeller auto-test system. Compared with the traditional manual test method, the advantages of this system are obvious:
• Accuracy of the prototype impeller test is improved. The high-performance DAQ devices guarantee that all signals are acquired precisely and synchronously.
• Efficiency of the prototype impeller test is increased. The auto-test system can record more work conditions in less time.
• Cost of the prototype impeller test is reduced.
Currently, the pressure probes and thermocouples are all fixed; as a result, the flow field information of a section is not detailed enough. A servo-drive system is planned that will move the sensors to any position in a section. By capturing this flow field information, the test results can be made more accurate.
2019-04-12T13:58:26.504Z
2013-10-01T00:00:00.000
{ "year": 2013, "sha1": "e493b528ddff88f3f4d639b3fdb66f9ba93178fb", "oa_license": "CCBYNCSA", "oa_url": "http://ir.sia.cn/bitstream/173321/13863/2/A%20framewrok%20of%20prototype%20compressor%20impeller%20auto-test%20system.pdf", "oa_status": "GREEN", "pdf_src": "ScientificNet", "pdf_hash": "26db7f6c8b5913057c78c75efe156ecb162b7a2b", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
6086266
pes2o/s2orc
v3-fos-license
To Compare PubMed Clinical Queries and UpToDate in Teaching Information Mastery to Clinical Residents: A Crossover Randomized Controlled Trial

Purpose: To compare PubMed Clinical Queries and UpToDate regarding the amount and speed of information retrieval and users' satisfaction.

Method: A crossover randomized trial was conducted in February 2009 at Tehran University of Medical Sciences and included 44 year-one or year-two residents who participated in an information mastery workshop. A one-hour lecture on the principles of information mastery was given, followed by self-learning slide shows before using each database. Subsequently, participants were randomly assigned to answer 2 clinical scenarios using either UpToDate or PubMed Clinical Queries, then crossed over to use the other database to answer 2 different clinical scenarios. The proportion of relevantly answered clinical scenarios, the time to answer retrieval, and users' satisfaction were measured for each database.

Results: Based on intention-to-treat analysis, participants retrieved the answers to 67 (76%) questions using UpToDate and 38 (43%) questions using PubMed Clinical Queries (P<0.001). The median time to answer retrieval was 17 min (95% CI: 16 to 18) using UpToDate compared to 29 min (95% CI: 26 to 32) using PubMed Clinical Queries (P<0.001). Satisfaction with the accuracy of the retrieved answers, interaction with the database, and overall satisfaction were all higher among UpToDate users than among PubMed Clinical Queries users (P<0.001).

Conclusions: For first-time users, UpToDate, compared to PubMed Clinical Queries, leads not only to a higher proportion of relevant answers retrieved within a shorter time, but also to higher user satisfaction. Thus, adding tutoring on pre-appraised sources such as UpToDate to information mastery curricula seems to be highly efficient.

Introduction

With the growth of the medical literature, learning information management is crucial to make clinicians competent to find the best evidence in a short time [1]. In this context, the important issue for clinicians is identifying sources that can provide them with reliable, relevant, and readable information [2]. Many evidence-based medicine workshops and courses have been conducted all over the world to teach clinicians and medical students information management. Most of them focus on principles of searching resources such as PubMed, and especially PubMed Clinical Queries [3][4], which is not available at the bedside; users also need critical appraisal skills to decide on applying the retrieved information in daily practice. Some other workshops focus on the 5S model as a reliable and efficient approach for seeking evidence-based information in systems, summaries, synopses, syntheses, and studies, arranged from the highest- to the lowest-level resources, respectively [5][6]. More recently, the ''6S'' model has been introduced (systems, summaries, synopses of syntheses, syntheses, synopses of studies, and studies) [7]. Both models suggest looking for the needed information at the highest level and proceeding to lower levels in case of failure to find the relevant evidence [6]. Therefore, it seems that learning to search within the higher-level resources is at least as important as learning to search within the lower-level resources, since it may change the inefficient information-seeking behavior of physicians [2]. Some studies have compared different medical information resources to suggest the best resources fulfilling trainees' needs in practice.
Although some of these studies have compared searching PubMed with UpToDate [8], and searching MEDLINE before pre-appraised sources with the reverse protocol [9], it remains unclear which information source should be emphasized more in evidence-based medicine workshops. Since a) computerized decision support systems are not yet well developed, b) using Clinical Queries is reported to facilitate timely retrieval of results in MEDLINE [10], and c) UpToDate has been reported to be the best ''summary'' source in previous studies [11][12][13], the investigators of this study aimed to compare the proportion of relevantly answered clinical questions, the time spent finding the answers, and users' satisfaction using PubMed Clinical Queries and UpToDate during a workshop.

Participants and Setting

After obtaining ethical approval from the Medical Education and Development Centre (MEDC) affiliated with Tehran University of Medical Sciences (TUMS), this crossover randomized trial was conducted in February 2009 at TUMS. The MEDC ethics committee agreed with verbal consent. Participants were postgraduate year-one or year-two residents at TUMS studying in 10 different residency programs: cardiology, pediatrics, emergency medicine, psychiatry, pathology, anesthesiology, radiology, obstetrics and gynecology, internal medicine, and urology. They were recruited to participate in a one-day information mastery workshop. The investigators explained the design and purpose of the study to the participants, and verbal consent was obtained.

Interventions

Through a one-hour lecture, participants were taught the principles of information mastery, including the ''5S'' approach to information resources (Table 1). The consenting participants were randomly assigned to two groups of equal size using UpToDate or PubMed Clinical Queries as the first resource; they were then asked to repeat the exercise using the alternative. In each database they were asked to answer 2 clinical questions. Questions were randomly assigned to participants in such a way that each participant received one question of diagnosis and one question of therapy; no participant searched the same questions in both databases, yet all questions were searched in both resources. Before beginning to search, each participant viewed a self-learning slide show in PowerPoint format demonstrating how to use the resource (Table 1). They were then given 10 minutes to become familiar with it. Sixteen clinical scenarios with definite answers, each followed by a question formulated in the PICO (Patient, Intervention, Comparison, Outcomes) format, were selected from the website of the Center of Evidence Based Medicine of the University of Toronto [14]. The questions were focused on eight clinical fields: child health, critical care, gastroenterology, general practice, general surgery, geriatrics, neonatology, and physiotherapy. From each field, one question of diagnosis and one question of therapy were selected. Software written in Microsoft Excel Visual Basic for Applications was used to present the questions to participants. The randomization sequence was generated by Random Allocation Software version 1.0.0 using the simple random method. Sequentially numbered, sealed, opaque envelopes were used to conceal the allocation. Each participant received one envelope containing the randomization code (Figure 1). Each code indicated the first allocated resource followed by the number of the randomly assigned software subtype, then the second resource and its randomly allocated software subtype (e.g., U3CQ8); the sketch below illustrates how such codes can be generated. Participants were not allowed to open their envelopes until everyone had received one. Blinding was not applicable to the users and outcome assessors, because they could recognize the layout of the resources.
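As an illustration only, the following sketch builds allocation codes of the form U3CQ8 (first resource plus software subtype, then the crossed-over resource plus subtype) with simple randomization. The subtype count and the function names are assumptions made for the example, not details taken from the Random Allocation Software actually used in the study.

```python
# Hedged sketch of crossover allocation codes like "U3CQ8".
import random

RESOURCES = ("U", "CQ")  # U = UpToDate, CQ = PubMed Clinical Queries
N_SUBTYPES = 8           # assumed number of question-software subtypes

def allocation_code(rng: random.Random) -> str:
    """First resource + subtype number, then second resource + subtype."""
    first, second = rng.sample(RESOURCES, 2)  # random order, both used once
    return (f"{first}{rng.randint(1, N_SUBTYPES)}"
            f"{second}{rng.randint(1, N_SUBTYPES)}")

rng = random.Random(2009)  # fixed seed so the sequence is reproducible
codes = [allocation_code(rng) for _ in range(44)]
print(codes[:4])  # e.g. ['U3CQ8', 'CQ1U5', ...]
```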
Measurements

The primary outcome measures of the study were a) answer retrieval and b) time to answer retrieval. The secondary outcome measures were a) user satisfaction and b) user interaction with PubMed Clinical Queries. Participants' baseline characteristics, including age, gender, type and year of specialty or subspecialty, and prior use of the allocated resources, were recorded using a checklist. Basic computer skills and prior familiarity with the resources were measured on a five-point Likert scale. The answers and the time taken to retrieve them were saved by the software. Because of the time limitation of the workshop and the importance of time-effective answer retrieval at the bedside, the software allotted a maximum of 20 minutes to each scenario; if participants asked for more time, it was provided. Participants were also able to stop the program whenever they found the answer, and the software calculated the elapsed time. Finally, the investigators assessed the relevancy of the information retrieved by participants against the answer given on the website of the Center of Evidence Based Medicine of the University of Toronto, and also checked that the layout of the saved information matched the layout of the information source used by the participant [14]. The measures of users' satisfaction, including interaction with the resource, amount and accuracy of the retrieved information, and overall satisfaction, were recorded using a questionnaire [12]. The measures of user interaction with PubMed Clinical Queries were recorded using a self-administered checklist (Table 2).

Statistical Analysis

In this study, the proportion of retrieved answers, the time to answer retrieval, and the measures of users' satisfaction were compared by the McNemar test, log-rank survival analysis, and the Wilcoxon test, respectively; the sketch below shows how these three tests can be run. Each analysis was performed on all data, on the questions of diagnosis, and on the questions of therapy. For the intention-to-treat analysis, we assigned the outcomes to the resource that participants were originally allocated to use via the randomization sequence. Whenever there was a failure to record the answer or the time to answer (mostly due to technical errors), data imputation was used to substitute the missing values; the substituted values were calculated from the other participants' outcomes. Finally, the results of the intention-to-treat and per-protocol analyses were compared using sensitivity analysis. SPSS V.16 was used for the whole analysis, and P<0.05 was considered significant.
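For readers who want to reproduce this style of analysis outside SPSS, a minimal sketch follows using statsmodels' mcnemar, lifelines' logrank_test, and SciPy's wilcoxon. The 2x2 cell counts are invented so that their margins match the reported totals (67 vs. 38 answered out of 88 question-searches); the time and satisfaction vectors are toy data, not the study's records.

```python
# Hedged sketch of the three comparisons, with invented toy data.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar
from lifelines.statistics import logrank_test

# McNemar test on paired answer retrieval per question:
# rows = answered with UpToDate (yes/no), cols = with Clinical Queries.
table = np.array([[30, 37],   # cell counts invented; margins give 67 vs. 38
                  [8, 13]])
print(mcnemar(table, exact=True).pvalue)

# Log-rank test on time to answer retrieval (minutes); event = 0 marks a
# search censored at the time limit without a retrieved answer.
t_utd = [12, 17, 18, 20, 16]; e_utd = [1, 1, 1, 0, 1]
t_pcq = [20, 29, 20, 31, 26]; e_pcq = [0, 1, 0, 1, 1]
print(logrank_test(t_utd, t_pcq,
                   event_observed_A=e_utd, event_observed_B=e_pcq).p_value)

# Wilcoxon signed-rank test on paired five-point satisfaction scores.
sat_utd = [5, 4, 5, 4, 3, 5]
sat_pcq = [3, 3, 4, 2, 3, 4]
print(wilcoxon(sat_utd, sat_pcq).pvalue)
```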
Characteristics of the participants

Forty-four participants were recruited to the study (Figure 2). Twenty-six (63%) were male. Thirty-seven (90%) were in the first year of the residency program. The mean age of participants was 32 years (SD = 3). The median of their basic computer skills was medium (3 out of 5 on a five-point Likert scale). Baseline characteristics, including prior use of and familiarity with the two resources, were comparable between the two groups.

Answer retrieval

Participants retrieved relevant answers to 67 (76%) questions using UpToDate compared to 38 (43%) questions using PubMed Clinical Queries (P<0.001). The answers to the questions of diagnosis were retrieved in 38 (86%) cases by UpToDate users compared to 25 (57%) by PubMed Clinical Queries users (P = 0.004).

Time to answer retrieval

Survival analysis showed that the median time to answer retrieval was 17 min (95% CI: 16 to 18) among UpToDate users compared to 29 min (95% CI: 26 to 32) among PubMed Clinical Queries users (P<0.001). The median time to answer retrieval for the questions of diagnosis was estimated at 16 min (95% CI: 15 to 16) using UpToDate versus 25 min (95% CI: 21 to 29) using PubMed Clinical Queries (P<0.001). For the questions of therapy, the median time to answer retrieval was 18 min (95% CI: 16 to 20) for UpToDate users and 43 min (95% CI: 42 to 43) for PubMed Clinical Queries users (P = 0.011).

Users' satisfaction

The results of the users' satisfaction survey are summarized in Table 3. Users were significantly more satisfied with the accuracy of answers retrieved from UpToDate than from PubMed Clinical Queries. They also reported significantly easier interaction with UpToDate compared to PubMed Clinical Queries. Similarly, overall satisfaction was higher among UpToDate users.

User interaction with PubMed Clinical Queries

PubMed Clinical Queries users reported that they started searching 46 (65%) out of 88 questions in the ''Clinical Study Category'' box and 25 (35%) questions in the ''Find Systematic Review'' box. Out of 34 answered questions, the users found the answers to 24 (83%) in the ''Clinical Study Category'' box compared to 5 (17%) in the ''Find Systematic Review'' box. The abstracts of the articles were used for 24 (77%) out of 34 retrieved answers in PubMed Clinical Queries, and users did not need the full text to find the answers. Relevancy was the most frequent criterion for selecting the article, in 24 (77%) out of 34 retrieved answers.

Sensitivity analysis

Per-protocol analysis showed an answer retrieval rate of 74% in UpToDate compared to 41% in PubMed Clinical Queries (P<0.001). In addition, per-protocol survival analysis estimated a median time to answer retrieval of 15 min for UpToDate compared to 30 min for PubMed Clinical Queries (P<0.001). Per-protocol comparison of satisfaction factors between UpToDate and PubMed Clinical Queries showed significant differences regarding interaction with the database (P<0.001), accuracy of content (P = 0.001), and overall satisfaction (P<0.001). Comparing the results of the per-protocol and intention-to-treat analyses showed that no test yielded a different result; the outcomes were similar.

Discussion

The results of this study indicate that first-time users of UpToDate could answer a higher proportion of questions within a shorter time than users of PubMed Clinical Queries. In addition, UpToDate users reported higher satisfaction regarding interaction with the system, accuracy of the content, and overall satisfaction. In a previous study, Patel and colleagues showed that when searching MEDLINE preceded pre-appraised sources (including UpToDate, ACP Journal Club and the Cochrane Library), most of the questions (80%) were answered with MEDLINE and few further questions (5%) with the pre-appraised sources, while with the reverse search protocol, a lower proportion of questions (64%) were answered with pre-appraised sources and a considerable proportion (23%) with MEDLINE. In contrast, considering the time factor, a higher proportion of questions were answered in less than 5 minutes when pre-appraised sources were searched prior to MEDLINE (26% vs. 55%) [9]. These results may show that the content coverage of MEDLINE is more comprehensive, but that in limited time, pre-appraised sources are more rewarding.
In another study, Hoogendam and colleagues reported a higher answer retrieval rate for UpToDate compared to PubMed (83% vs. 63%) and also a shorter time to answer retrieval (241 vs. 291 seconds) [8]. Similarly, Thiele and colleagues showed that users of UpToDate were not only more likely than users of PubMed to answer the questions correctly but also faster in answer retrieval; indeed, subjects had the most confidence in UpToDate [15]. Most of the results of these studies support our findings. However, in both of these studies, Clinical Queries was not emphasized in searching MEDLINE. Since Demner-Fushman and colleagues showed that using Clinical Queries facilitates timely retrieval of results in MEDLINE [10], not focusing on Clinical Queries might be the reason for the low timely retrieval rate in MEDLINE in those studies. PubMed Clinical Queries is a set of search filters for separating valid and relevant articles out of the repository of PubMed citations. This limits its clinical efficiency, because: a) searching for one question may yield multiple high-quality articles that present different answers, which the clinician does not have time to evaluate comprehensively; and b) few articles compare all management options for a given health problem, so if clinicians intend to decide between all possible options, they have to review several studies systematically to inform their decision making, which is time-consuming and also requires expertise. On the other hand, UpToDate is highly efficient, because: a) the information is organized in entries rather than articles, each discussing a complaint (e.g., chest pain), a disease (e.g., acute coronary syndrome) or a category (e.g., diagnosis) of a disease, and if a special issue needs further discussion, another entry is devoted to it (e.g., cholesterol lowering after an acute coronary syndrome), so the clinician is guided to the alternatives and is not overwhelmed with information; and b) the information is provided by experts integrating the best available evidence to address all management options for a given health problem, and most of the recommendations are graded on the basis of their level of evidence, so clinicians can use the recommendations knowing that all options have been considered and the best one is recommended. The study limitations include: a) while the native language of the participants was Persian (Farsi), the databases were in English, which may have increased the time to retrieve answers; b) participants' unfamiliarity with information management skills and their lower competency in searching PubMed Clinical Queries compared to UpToDate, in spite of equal prior training, which might explain the low answer retrieval rate in this source; c) the limited time for learning, practicing, and searching for the answer to each question; d) the limited number of questions compared to previous studies; e) the limited range of clinical categories questioned, with other important categories (e.g., prognosis) not included; and f) a technical problem with the internet speed in the second workshop, which led to a long median time to answer retrieval for both databases compared to similar studies.
However, this study has the following strengths: a) conducting a randomized crossover rather than a self-controlled trial during the workshop; b) providing training in using both PubMed Clinical Queries and UpToDate through the self-learning slide shows; c) providing participants with clinical scenarios and formulated foreground questions; d) measuring the times to answer retrieval accurately using specially designed software; and e) verifying all answers for relevance. Based on the findings of this study, we recommend adding tutoring on pre-appraised resources such as UpToDate to information mastery workshops, because they seem to be more rewarding and faster, and thus more applicable in daily practice; furthermore, they can enhance lifelong learning competencies among physicians. This study can be a signal to conduct studies comparing two different EBM workshop curricula regarding participants' satisfaction, effects on clinically important outcomes, medical errors, and costs. The results of such studies may lead to refinements in EBM workshop curricula.
2014-10-01T00:00:00.000Z
2011-08-12T00:00:00.000
{ "year": 2011, "sha1": "ee1269fc9c8e383d6d50d6a8ad2ef8b44e82c545", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0023487&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee1269fc9c8e383d6d50d6a8ad2ef8b44e82c545", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
236662456
pes2o/s2orc
v3-fos-license
Joint selection of the MCS's and power allocation coefficients in the two-user downlink PD-NOMA system

Power Domain Non-Orthogonal Multiple Access (PD-NOMA) is a promising multiplexing technique for future cellular networks. Nevertheless, it is poorly studied and not applied in existing systems due to the complexity of PD-NOMA signal processing, resource scheduling, and power allocation. The issue is that modulation and coding scheme (MCS) selection, including power allocation, is a cooperative procedure that must consider the channel state information of each multiplexed user. It can be solved by enumerating all possible multiplexing combinations, but at the expense of high computational complexity. In our work, we propose a precomputed table of joint MCS's from which the base station (BS) can choose when multiplexing users in a downlink PD-NOMA system, based on their signal-to-noise ratios (SNRs). It allows selecting two MCS's with two power allocation coefficients for both users and guarantees the 10% block error rate (BLER) performance in the additive white Gaussian noise (AWGN) channel. The joint MCS selection method is based on a max-rate scheduling strategy and provides system capacity maximization while ignoring fairness between users. The proposed table is given in the Appendix.

Introduction

PD-NOMA is one of the promising techniques for user multiplexing in future networks. It has been shown [1][2][3][4][5] that PD-NOMA allows increasing the system capacity in multiuser communication systems such as cellular (3GPP Long Term Evolution, LTE) and wireless local area networks (IEEE 802.11). The basic PD-NOMA idea is the non-orthogonal multiplexing of several users in the power domain of the same time-frequency resource segment by superposing their signals with different power weights. This creates a controlled co-channel interference that is cancelled at the receiver side by the successive interference cancellation (SIC) method [6]. The issue is that the user signals depend on each other, and this must be considered during resource scheduling, cooperative MCS selection, and multiplexing. To achieve the maximum system capacity, it is necessary to use the MCS that has the highest spectral efficiency (SE) while guaranteeing the required error rate performance under the given channel conditions. For this purpose, adaptive modulation and coding (AMC) has been proposed [7,8]. It is a simple approach that selects the most efficient MCS according to the propagation channel quality. AMC is widely used in existing wireless networks with orthogonal multiple access (OMA). In conventional OMA systems, the MCS's for different users are selected independently, because the users are orthogonal to each other. Unlike OMA, in PD-NOMA systems the users are non-orthogonal in the power domain, which is why the MCS's must be selected jointly. Besides, the power allocation coefficients should be calculated taking the channel quality of each user into account. This can be carried out by enumerating all possible multiplexing configurations and selecting the most effective of them (joint MCS's and power allocation coefficients), at the expense of high computational complexity. Many researchers have developed power allocation schemes for PD-NOMA under the Shannon system model, which assumes an infinite number of MCS levels.
However, in our literature search we found only one paper devoted to joint MCS selection in PD-NOMA systems that uses a finite number of MCS levels from a practical system and considers the efficiency of the error-correcting methods used. In [9], the authors propose a method for joint power allocation and MCS selection that considers max-min user fairness and determines the power allocation coefficient according to the MCS level of the user with the poor channel. It allocates the maximum power share to the user with better channel conditions while guaranteeing the decoding of the second user. However, their algorithm is iterative, so it has high computational complexity. In our work, we propose a table that makes it possible to select the pair of MCS levels depending on the users' SNR values under the max C/I scheduling strategy. It maximizes the instantaneous system throughput by selecting the two MCS's that have the highest sum SE. This approach has very low computational complexity and may serve as a temporary solution for AMC in a two-user PD-NOMA system until an optimal approach is found.

Downlink PD-NOMA system

We use a PD-NOMA system model with multiplexing of two users in the power domain. According to its principles, the group signal is a superposition of two user signals with different power weights. The user with a weak propagation channel gets the larger power share compared to the user with a strong propagation channel. We sort the users in ascending order of their channel quality and identify the weak user (UE1), with power allocation coefficient p1, and the strong user (UE2), with power allocation coefficient p2 = 1 - p1.

Proposed joint MCS Table

In this section, we describe the design process of the joint MCS table. Our goal is to find the most efficient joint MCS combination that guarantees the 10% BLER performance and maximizes the system capacity given the users' SNR values. Our search enumerates all possible combinations for each pair of user SNR values and verifies them with the help of a simulation. The combinations that do not guarantee 10% BLER are eliminated from the valid search range. Then the combination with the highest SE is selected from those remaining.

Used MCS levels

The downlink system capacity depends significantly on the user non-orthogonal multiplexing technique at the BS side, which includes the joint MCS selection and the power allocation between users. Commonly, all possible MCS's are given in a special table in a technical specification. In our system model, we use a bank with a finite number of MCS levels given by [10]. Their main characteristics are presented in Table 1.

Table 1. MCS table in LTE downlink.

Each MCS level is indexed by m and based on a square Q-QAM constellation and Turbo encoding with code rate R. The SNR threshold with 10% BLER is used for switching m as the channel state changes, which allows increasing the SE of the transmission as the channel quality improves.

Simulation model

The structure of the simulation model is shown in Fig. 1. We enumerate all possible power allocation coefficients, controlled by p1, in the range from 0.5 to 1 with a step of 0.05. Next, we run the simulation of the data transmission process and estimate the BLER performance. Then the (m1, m2, p1) combination that guarantees the 10% BLER performance and maximizes the system capacity is selected and added to the table for the SNR thresholds W1, W2. In case several combinations give an identical sum SE, priority is given to the weak user, and the combination that simultaneously maximizes the throughput and the SE of UE1 is added to the table. A sketch of this exhaustive search is given below.
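The following minimal sketch illustrates the exhaustive search just described. The BLER check is replaced by a crude SINR-threshold model standing in for the paper's link-level AWGN simulation, and the MCS bank with its SE values and required SINRs is invented for the example rather than taken from the LTE table.

```python
# Hedged sketch of the exhaustive joint MCS / power-allocation search.
# The MCS bank and the threshold-based "BLER <= 10%" test are illustrative
# stand-ins for the paper's simulated LTE MCS levels.
import itertools

# (spectral efficiency in bit/s/Hz, required SINR in linear scale)
MCS = [(0.2, 0.5), (0.5, 1.0), (1.0, 2.5), (1.5, 5.0),
       (2.0, 10.0), (3.0, 30.0), (4.0, 100.0)]

def sinr_ue1(snr1: float, p1: float) -> float:
    # Weak user decodes its signal treating the strong user's as noise.
    return p1 * snr1 / ((1 - p1) * snr1 + 1)

def sinr_ue2(snr2: float, p1: float) -> float:
    # Strong user removes UE1's signal by SIC, then decodes its own.
    return (1 - p1) * snr2

def meets_bler_target(m: int, sinr: float) -> bool:
    # Toy stand-in for "BLER <= 10%": the MCS SINR threshold must be met.
    return sinr >= MCS[m][1]

def best_combination(w1: float, w2: float):
    """Return (m1, m2, p1) with the maximum sum SE at SNR thresholds
    (W1, W2); ties are broken in favor of the weak user's SE."""
    best, best_key = None, None
    for p1 in [x / 100 for x in range(50, 101, 5)]:  # 0.50 .. 1.00, step 0.05
        for m1, m2 in itertools.product(range(len(MCS)), repeat=2):
            if (meets_bler_target(m1, sinr_ue1(w1, p1))
                    and meets_bler_target(m2, sinr_ue2(w2, p1))):
                key = (MCS[m1][0] + MCS[m2][0], MCS[m1][0])
                if best_key is None or key > best_key:
                    best, best_key = (m1, m2, p1), key
    return best

print(best_combination(w1=20.0, w2=100.0))  # illustrative linear SNRs
```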
Fig. 1. The structure of the simulation model.

The table also stores the selected power allocation coefficient p1, so only five additional bits are required to transfer the control information about the power shared between the users.

Joint MCS's selection

Exactly as for AMC in OMA, in PD-NOMA the BS should select the MCS combination with the highest sum SE for the given users' SNR values, considering the thresholds W1, W2. These methods are well described in [10]. For example, if UE1 and UE2 possess 1
2021-08-03T00:06:05.858Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "274527411c94273b61e5c4b5a55c8b17d5bcc634", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/46/e3sconf_wfces2021_01031.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "185afe227d68b449ec140e1b43d70d3b5a7b2aa7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
260792154
pes2o/s2orc
v3-fos-license
Chemistry of Dimer Acid Production from Fatty Acids and the Structure–Property Relationships of Polyamides Made from These Dimer Acids

While there is abundant literature on using a wide range of biomaterials to make polymers for various adhesive applications, most researchers have generally overlooked developing new adhesives from commercially available bio-based dimerized fatty acids. Some of the literature on the chemistry taking place during the clay-catalyzed dimerization of unsaturated fatty acids is misleading, in that the proposed mechanisms are not consistent with the structures of these dimers and of the by-product isostearic acid. A selective acid-catalyzed interlayer model is much more logical than the widely accepted model of clay-catalyzed Diels–Alder reactions. The resulting dimers have a variety of linkages that limit large crystal formation, whether as oligomeric amides or polyamides. These highly aliphatic fatty acid dimers are used to make a wide range of hot melt polyamide adhesives. The specific structures and amounts of the diacids and diamines and their relative ratios have a large effect on the mechanical properties of the bio-based polyamides, but analysis of the structure–property relationships has seldom been attempted, since the data are mainly in the patent literature. The diacids derived from plant oils are valuable for making polyamides because of their very high bio-based content and highly tunable properties.

Introduction

The interest in making adhesives more bio-based has increased in recent years, but unfortunately much of the recent literature has been directed at materials that have limited commercial viability due to the small volumes available and the complexity of isolating the valuable bio-based components. Many of the recent reviews [1][2][3][4][5], except for one [6], have given little attention to the wide use of fatty acid (FA) bio-based materials for industrial applications, except as bio-fuels and epoxidized fatty oils; the fascinating chemistry, involving technology developed in the mid-20th century for making polyamide adhesives, has received only limited mention. The early work on the dimerization process was thoroughly reviewed many years ago [7][8][9][10], and with most of the polyamide literature in patents, the public-domain knowledge is mainly related to performance properties. This paper concentrates on the interesting chemistry for converting monomeric FA into dimer and oligomeric acids using clay catalysts, and then on the structure–property relationships in using these dimers to make a wide variety of commercial oligomeric products and polyamide hot melt adhesives. While the main source of fatty esters is agricultural crops (e.g., oilseeds), fatty esters are present in all plants. Most of these esters are consumed as foods, but there is now growing interest in trans-esterifying them for bio-based liquid fuels and in modifying them as partial replacements for petroleum-based monomers used in making polymers. Fatty esters and their hydrolysis products, FA, have long been used in many industrial applications, including coatings, surfactants, and adhesives [11][12][13][14][15]. The interest in evaluating the polymerization of fatty esters derived from research on improving drying oil coatings, which involves the free radical oxidation of polyunsaturated fatty esters on air exposure after incorporation of certain metal salts [16,17].
Thermal polymerization of polyunsaturated fatty esters in the absence of air was shown to provide non-oxidative intermolecular reactions at the olefinic sites, and after hydrolysis of these dimer esters, the resulting dicarboxylic acids were used to synthesize polyamides [18]. However, investigation into converting polyunsaturated FA into polymeric materials by a thermal process was hampered by some loss of the carboxylic acid functionality and an increase in color. This problem was solved by Goebel, who added water to the thermal heating step [19]. Later on, a process using a montmorillonite clay catalyst to lower the reaction temperature proved to be the best route for commercial production of a product with a high diacid content and lighter colors, not only from polyunsaturated FA but also from the monounsaturated FA oleic acid [8,20,21]. The literature on the function of the clay catalyst is not always clear, which has led to confusion in various publications on the reaction mechanism and the products of the process [10,[22][23][24]. Thus, the first objective of this paper is to clarify the role of the clay and how it leads to unique monomeric, dimeric, and polymeric acids. The second objective is to provide an understanding of how these unusual dimers lead to the observed structure–property relationships of associative oligomers and hot melt polyamides. For clarity, instead of calling the initial fraction of dimer acids and higher polymers "dimer acids", this paper uses the common term polymerized FA, to keep it distinct from the purified dimer fraction. Because of the interest in adhesives with a high bio-based content, the lack of recent reviews on the dimerization process and on the polyamide adhesives produced from these dimer acids, with over 90% bio-based content, is surprising. Hopefully, an understanding of the dimerization process and the properties of the dimer polyamides will lead others to do further research in these areas.

Clay Structure and Property

Natural and modified clays have long been used in the chemical industry [25]. A very important use in the fatty oils industry is as bleaching earths, involving clays such as kaolin, bentonite, sepiolite, or attapulgite. These clays can absorb all kinds of substances, such as oils, insecticides, and hydrocarbons [25,26]. Activated bleaching earths are often used for bleaching vegetable oils to remove colored and distasteful components for food uses. Most clays are layered alumino-silicates composed mainly of fused layers of tetrahedrally arranged silicate and octahedrally arranged aluminate groups [25][26][27] (see Figure 1), and many are known to be acidic catalysts. Clays are mainly stacked sheets, with each sheet composed of an aluminate layer between two silicate layers [25,26]. All clays have a negative surface charge due to the interruption of the sheet structure at the edges of the particles. These negative lattice surface charges are balanced by adsorbed metal cations, as shown in Figure 1. These sheet structures are interesting in that the kaolinites, with a formula of Al2Si2O5(OH)4, are virtually non-swelling, since water cannot overcome the affinity between the sheets and be absorbed between them, as illustrated in the left-side structure in Figure 1.
On the other hand, the swelling smectite group, including talc, vermiculite, montmorillonite, and bentonite, has negative charges within the lattices, with a general formula of (Ca,Na,H)(Al,Mg,Fe,Zn)2(Si,Al)4O10(OH)2·xH2O, where x represents a variable amount of water. Replacing trivalent aluminum with divalent atoms and replacing the tetravalent silicon with trivalent or divalent atoms leads to a lattice that is highly negatively charged, as illustrated on the right side of Figure 1; the charges are balanced by cations between the sheets as well as on the particle surface. This propping apart of the sheets allows water and fatty acids to flow in between them. The amount and type of cations influence this swelling character of the montmorillonites [27]. An important property of clays is their cation exchange capacity: 3-15 milliequivalents per 100 g for non-swelling kaolinites versus 80-120 for swelling smectites. This is also related to their surface areas, 5-40 square meters per gram for kaolinites versus 40-800 for smectites, and to their swelling abilities, with an intersheet spacing of 7 Å for kaolinites versus 9.6-20 Å for smectites [26]. Because of the charge separation between the modified aluminate-silicate lattice and the adsorbed cations, these cations give the clays their acidity [25][26][27]. Thus, the smectite clays not only have many active cations with a large surface area but also have expandable intersheet separations that allow the unsaturated FA to be adsorbed and converted to carbonium ions as the intermediate for dimerization.

Figure 1. Structure of clays, showing charge and sheet separation differences for the non-swelling kaolinite on the left, the swelling smectite on the right, and the exfoliated clay on the bottom left. SiO represents the silicon oxide layer; AlO represents the aluminum oxide layer; SiMO is the silicon oxide layer where some of the silicon atoms are replaced by lower-valent metals, creating a negative charge on the lattice; AlMO is the aluminum oxide layer where some of the aluminum atoms are replaced by lower-valent metals, creating a negative charge on the lattice. Minus signs show the negative charge on the lattices, M+ shows the counterbalancing cations, which can be mono-, di- or trivalent, and FA represents the adsorbed fatty acids.
Another point is that these clays still contain the stacked sheet structures and are not the exfoliated clays [29] which often are treated with bulky amine groups to separate the clay sheets, as illustrated by the bottom left structure on Figure 1 [30]. However, even without going to the extent of exfoliation, fatty acids (FA) can intercalate between the sheets of smectite clays, as illustrated in the structure on the right side of Figure 1 [8,31]. This good affinity of the clays for fatty acids leads to one drawback for the filtration process in that the To avoid confusion with highly activated clays, these dimerization clay catalysts do not have the replacement of the metal cations by hydrogen atoms and are not subjected to the very high temperature calcining step that provide very reactive Lewis acid clays with collapsed layers and that along with zeolites are used in petroleum cracking [28]. Another point is that these clays still contain the stacked sheet structures and are not the exfoliated clays [29] which often are treated with bulky amine groups to separate the clay sheets, as illustrated by the bottom left structure on Figure 1 [30]. However, even without going to the extent of exfoliation, fatty acids (FA) can intercalate between the sheets of smectite clays, as illustrated in the structure on the right side of Figure 1 [8,31]. This good affinity of the clays for fatty acids leads to one drawback for the filtration process in that the residual clay contains an equal weight of fatty acids and is a disposal challenge because oxygen exposure can lead to autoignition from uncontrolled oxidation of the residual FAs. The process of adding clay to the unsaturated FA and heating results in the FA penetrating the intersheet domains of the smectite particles [31,32] and being near the nu- [8,31,33]. Layered clays allow the reaction products (dimer and polymers) to escape from the reaction site, which may not be true for zeolites, and be replaced by fresh unsaturated fatty acids. Commercial production uses the montmorillonite/bentonite clay and lower temperatures of approximately 250 • C compared to the 300 • C of the thermal process without clays [10], along with lithium hydroxide addition to reduce color and increase the conversion to dimers [23,34]. Dimerization Products and Structures A typical procedure [35] uses TOFA (tall oil fatty acids, with a composition of 48.8% oleic acid, 34.3% linoleic acid, 6.4% conjugated linoleic acid, and 8.5% saturated C2-C20 acids), 4.3% montmorillonite clay, 1.1 mg of lithium salt per gram of clay, and 5% water stirred and heated in an autoclave at 260 • C. and 90 psi for 2.5 h. The crude product mixture is treated with 1.1% of 85% phosphoric acid at 130 • C, and then filtered to remove the catalyst. The yield of residual polymerized FA is commercially 63% recovered after vacuum distillation using a wiped film evaporator, which creates a hot thin film of fatty acid being distributed by a rotating blade wiping the interior of the heated cylinder under vacuum, allowing the evaporation of the more volatile monomeric FA ( Figure 2). The polymerized FA can then be redistilled at higher temperature and lower vacuum pressure to yield a purified dimer [8,11] along with the residual dimer, trimer, and higher-molecular-weight products. oxygen exposure can lead to autoignition from uncontrolled oxidation of the residual FAs. 
Figure 2. Schematic of the FA dimerization process and separation of products, with the thicker-bordered boxes representing commercial fatty acid products; often the hydrogenation step is skipped to yield a partially unsaturated and less expensive isostearic acid.

In the first distillation, the monomers are removed from the product mix.
Instead of being just unreacted linear unsaturated FA, the majority of the monomers are branched isoacids [11]. These isoacids are isolated, in some cases after hydrogenation and removal of the solid stearic acid, which melts at 69 °C. The resulting saturated isoacids are of interest because they are liquid, resistant to oxidation, and usable in cosmetics, lubricants, and biodiesel fuel [11,36], in contrast to the liquid unsaturated fatty acids, which are sensitive to oxidative degradation, and to the saturated FA, which are solids at room temperature. Some recent research has involved rearrangement of fatty esters or acids using acidic zeolites to produce isoacids [36]. The isostearic acids from the clay-catalyzed reaction arise from rearrangement of the fatty acid chain to create methyl side groups, mainly in the middle of the chain near the olefinic sites in the feedstock; this reaction is favored since it produces the more stable tertiary carbonium ion instead of the first-formed secondary one [34,37,38]. Thus, the liquid nature and oxidative resistance of the saturated isoacids are their main market advantage [39]. There are some natural isoacids, but they occur at low concentration in plant material. There have been times when the by-product isostearic acid has been more valuable than the dimer acids, which led to investigation of ways to improve its yield [37]. There are a variety of commercial polymerized fatty acids, since there are many different markets for these products, and the use of oleic acid from soybeans instead of TOFA from wood pulping provides different product ratios [8,11,40]. The polymerized FA contains dimer, trimer, and higher-molecular-weight products, with the latter two being useful to increase the number of functional groups per molecule, which is important for applications such as amidoamines used in epoxy curing agents. However, for many hot melt polyamide products, which are one focus of this paper, the dimer is the preferred product since it can be used to make higher-molecular-weight polymers without gelling. Although TOFA is the preferred feedstock for many companies making polyamides, for some hot melt polyamides oleic acid is preferred, to make polyamides where light color and stability after hydrogenation are important; hydrogenation is difficult for TOFA-derived dimer due to impurities that tend to deactivate the hydrogenation catalysts. Characterization of the reaction products has required several analytical methods. The data indicate that clay-catalyzed oleic dimer consists chiefly of unsaturated non-cyclic and monocyclic non-aromatic dimer structures; see Figure 3 and Table 1 [38]. However, the more unsaturated FA monomers produce more aromatic and polycyclic dimers. The literature can be confusing in that, in the earlier work, determining the structures in the complex reaction mixture was difficult, since sophisticated analytical methods were not available to those researchers. The analysis is also complicated by the formation of lactones, hydroxyl FA, and inter-esters through addition of the carboxylic acid across the olefinic bond, in addition to the branched monomers and the dimers, trimers, and higher-molecular-weight products with different structures [22,38,41]. Even though distillation is difficult because of the high molecular weights and viscosity of the components, it can separate the monomeric fraction from the dimeric and polymeric products, even using wiped-film evaporators.
The purity of these fractions can be assessed using gel permeation chromatography [38]. The structure of the dimers has been studied in detail, using various chromatographic separations, nuclear magnetic resonance, ultraviolet spectroscopy, and mass spectrometry for the molecular weight determination of the various components [38]. The molecular ion region of the mass spectrum was used to determine the number of ring systems and double bonds present in the dimer structures. The confounding effect of olefinic bonds was eliminated by hydrogenation under conditions that removed only the olefinic bonds and not the aromatic ones, and methyl esterification was used to make the molecules more volatile. Thus, the non-cyclic (linear), monocyclic, bicyclic, and aromatic components were measured. From the fragmentation pattern of the mass spectrum, the lengths of the side chains attached to the ring systems were determined. Because mass spectrometry is not quantitative, the aromatic content was determined using ultraviolet spectroscopy, while nuclear magnetic resonance spectroscopy of the non-hydrogenated and hydrogenated dimer was used to estimate the olefinic content of the dimer. From these experiments and other literature, McMahon and Crowell proposed the structures in Figure 3 as representative of the dimer components [38]. Because of the variety of unsaturated fatty acids from natural sources and the multiple olefinic bond locations, these structures can only be representative and are not the only isomers that exist, with the amount of unsaturation influencing the distribution (Table 1). The data clearly indicate that, as expected, the oleic dimer, which is high in monounsaturation, provides almost as much linear dimer as non-aromatic monocyclic dimer. The data show that den Otter's assumption of high hydrogen transfer and Diels-Alder cyclization as the main pathway for dimer formation with oleic acid [23,42] is not valid, because of the high percentage of non-cyclic (linear) dimer. TOFA, containing both mono- and poly-unsaturation, has values between those of oleic acid and the highly unsaturated linoleic acid, leading to a higher content of non-aromatic monocyclic dimer. The dimer and isoacid structures are important for understanding the dimerization mechanism, which is discussed in the next section, and for understanding the structure-property relationships of the polyamides. The variety of structures and the long flexible backbone make the dimer derivatives less likely to form large crystals compared to the common nylon structural polymers [43], but, as with structural nylons, the strong hydrogen bonds play an important role in the polymer strand associations [9]. A clear example of this is in making a resin for use in a hot melt ink jet printer, which needs to be low in viscosity to be jetted by a printer head, clear to have sufficient color intensity, and hard to be rub resistant, as well as bonding sufficiently to plastic and coated papers. The composition has a narrow window in which to meet the requirements of this application, with the target oligomeric structure illustrated in Figure 4. Reacting more than the stoichiometric amount of stearic acid leads to cloudiness due to stearic acid chain crystals, while less leads to higher molecular weight and higher viscosity [44,45]; the sketch after this paragraph illustrates the molecular-weight side of that trade-off.
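The effect of the monofunctional cap on molecular weight can be estimated with the classical Carothers relation for step-growth polymerization; this is standard polymer chemistry rather than the authors' own calculation, and the mole amounts below are invented for the example.

```python
# Textbook Carothers estimate for step-growth polymerization with a
# monofunctional capping acid; the mole amounts are invented examples.

def degree_of_polymerization(n_diacid: float, n_diamine: float,
                             n_monoacid: float, p: float = 1.0) -> float:
    """Number-average degree of polymerization X_n for a diacid/diamine
    polyamide capped with a monoacid (e.g., stearic acid) at conversion p.
    Uses the stoichiometric-imbalance form of the Carothers equation,
    with the monofunctional reagent counted twice in the ratio r."""
    r = (2 * n_diamine) / (2 * n_diacid + 2 * n_monoacid)
    return (1 + r) / (1 + r - 2 * r * p)

# Full stearic cap: 2 dimer + 3 diamine + 2 stearic (acid eq = amine eq).
print(degree_of_polymerization(2.0, 3.0, 2.0))   # ~7 units: a short oligomer

# Half the stearic acid, diamine trimmed to keep the groups balanced:
print(degree_of_polymerization(2.0, 2.5, 1.0))   # ~11 units: higher MW
```

Consistent with the text, cutting the amount of capping acid pushes the estimated chain length (and hence melt viscosity) up, while an excess of stearic acid leaves unreacted monoacid free to crystallize and cloud the resin.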
The successful associative oligomer probably has small crystallites, smaller than the wavelength of visible light, leading to clarity, while the insolubility and hardness of the product can be explained by a high degree of association of the oligomers through hydrogen bonding of the amide groups. This associative oligomer, along with a compatible viscosity reducer giving a single melt transition, allowed the combination to be used commercially in high-end inkjet printers that worked even with glossy surfaces. In contrast to the high hardness and very low solubility of the hot melt jet printing ink oligomer, other associative dimer-based oligomers have high compatibility with non-polar solvents such as mineral oil. These oligomers link two dimer units with a short-chain diamine, and the terminal carboxylic acids are then capped by reaction with a fatty alcohol, such as stearyl alcohol, to provide non-polar domains; see Figure 5. These molecules use the good intermolecular hydrogen bonding ability of the central amides to provide structure for the gels, while the low-polarity terminal esters provide compatibility with the mineral oil. After proving commercially successful in clear scented candles [46,47], other proprietary formulations were developed for a variety of consumer products, exploiting the presence of both polar and non-polar domains along the individual oligomeric chains.

Figure 5. Structure for ester-terminated amides using a dimer core for gel structure and terminal ester groups for compatibility with non-polar components. Image taken from [47], with n being the number of repeat units of the oligomer, R2 being the 34 carbons of the dimer, R3 being short-chain primary diamines, and R1 being hydrocarbon groups with long chains.

The two associative oligomer structures mentioned above show that the polar amide groups can provide strength, while the long terminal fatty chains and the dimer provide association of hydrophobic domains. Large crystalline domains do not form, because the various cyclic non-aromatic, cyclic aromatic, and acyclic segments inhibit orderly packing.

Dimerization Mechanism

From the earliest days of dimerization, an acidic process has been considered important, but the literature proposes other mechanisms. The most cited mechanism has been a Diels-Alder reaction forming six-membered rings [23,48], with den Otter proposing that this was the main route even for the mono-unsaturated oleic acid. The formation of six-membered rings was supported by early analysis of some of the dimer molecules [22], but the Diels-Alder cyclization mechanism is not valid for fatty acid dimerization.
The well-studied In contrast to the high hardness and very low solubility of the hot melt jet printing ink oligomer, other associative dimer-based oligomers have high compatibility with non-polar solvents such as mineral oil. These oligomers link two dimer units with a short chain diamine and then the terminal carboxylic acid is capped by reaction with a fatty alcohol, such as stearyl alcohol to provide non-polar domains; see Figure 5. These molecules use the good intermolecular hydrogen bonding ability of the center amides to provide structure for the gels while the low polarity, terminal esters provide compatibility with the mineral oil. After proving commercial success with clear scented candles [46,47], other proprietary formulations were developed for a variety of consumer products due to the presence of both polar and non-polar domains involving the individual oligomeric chains. The successful associative oligomer probably has small crystallites that are less than the wavelength of normal light leading to clarity, but the insolubility and hardness of the product can be explained by the product having a high association of the oligomers through hydrogen bonding of the amide groups. This associative oligomer along with a compatible viscosity reducer for a single melt transition allowed the combination to be used commercially for high-end inkjet printers that even worked with glossy surfaces. In contrast to the high hardness and very low solubility of the hot melt jet printing ink oligomer, other associative dimer-based oligomers have high compatibility with nonpolar solvents such as mineral oil. These oligomers link two dimer units with a short chain diamine and then the terminal carboxylic acid is capped by reaction with a fatty alcohol, such as stearyl alcohol to provide non-polar domains; see Figure 5. These molecules use the good intermolecular hydrogen bonding ability of the center amides to provide structure for the gels while the low polarity, terminal esters provide compatibility with the mineral oil. After proving commercial success with clear scented candles [46,47], other proprietary formulations were developed for a variety of consumer products due to the presence of both polar and non-polar domains involving the individual oligomeric chains. Figure 5. Structure for ester-terminated amides using a dimer core for gel structure and terminal ester groups for compatibility with non-polar components. Image taken from [47], with n being the number of repeat units of the oligomer, R 2 being the 34 carbons of the dimer, R 3 being short chain primary diamines and R 1 being hydrocarbon groups with long chains. The two associative oligomer structures mentioned above show that the polar amide groups can provide strength, while the long terminal fatty chain dimer provides association of hydrophobic domains. Large crystalline domains do not form due to the various cyclic non-aromatic, cyclical aromatic, and acyclic domains inhibiting an orderly packing. Dimerization Mechanism From the earliest days of dimerization, an acidic process has been considered important, but the literature proposes other mechanisms. The most recited mechanism has been a Diels-Alder reaction to form six member rings [23,48] with den Otter proposing that this was the main route even for the mono-unsaturated oleic acid. The formation of sixmember rings were supported by early analysis of some of the dimer molecules [22], but the Diels-Alder cyclization mechanism is not valid for fatty acid dimerization. 
The well-studied Figure 5. Structure for ester-terminated amides using a dimer core for gel structure and terminal ester groups for compatibility with non-polar components. Image taken from [47], with n being the number of repeat units of the oligomer, R 2 being the 34 carbons of the dimer, R 3 being short chain primary diamines and R 1 being hydrocarbon groups with long chains. The two associative oligomer structures mentioned above show that the polar amide groups can provide strength, while the long terminal fatty chain dimer provides association of hydrophobic domains. Large crystalline domains do not form due to the various cyclic non-aromatic, cyclical aromatic, and acyclic domains inhibiting an orderly packing. Dimerization Mechanism From the earliest days of dimerization, an acidic process has been considered important, but the literature proposes other mechanisms. The most recited mechanism has been a Diels-Alder reaction to form six member rings [23,48] with den Otter proposing that this was the main route even for the mono-unsaturated oleic acid. The formation of six-member rings were supported by early analysis of some of the dimer molecules [22], but the Diels-Alder cyclization mechanism is not valid for fatty acid dimerization. The well-studied and modelled Diels-Alder reaction involves a concerted cyclization through a single, cyclic transition state, without any intermediates being produced along the way [49,50]. The model requires appropriate molecular overlap between a conjugated diene and an olefin, called a dienophile, with the latter being not just any olefin, but one that needs to be bonded to an electron withdrawing group. The mid-chain olefinic bond in the unsaturated FAs does not fit this description since it is not directly attached to a strong electron withdrawing group. To fit his model for oleic acid dimerization, den Otter mentioned that the Diels-Alder reaction can be acid catalyzed and assumed that this is the reason for the effectiveness of the clay [48]. However, the literature on acid catalysis of Diels-Alder reactions indicates that the catalyst needs to be a Lewis acid type associating with the electron withdrawing group attached to the dienophile [50,51]. This clay-catalyzed dimerization involves Bronsted acids, not a Lewis acid since they are used under aqueous conditions. In contrast, carbonium ion reactions explain both the observed formation of isoacids and the acyclic dimer [51,52], which the den Otter Diels-Alder mechanism does not. A conjugation diene can still cyclize with an olefin in a multi-step reaction carbonium ion process, as opposed to the single stage Diels-Alder reaction. Thus, the extensive reference to a Diels-Alder cyclization in the literature is not correct. The den Otter mechanism is also incorrect since it requires extensive hydrogen transfer between molecules and does not account for a non-cyclic dimer being a major component of the product and for the formation of isoacids [38]. The metallic or hydrogen cations can generate carbonium ions with the olefinic portion of unsaturated acids, leading to coupling of olefinic bonds, olefinic bond isomerization and conjugation, as illustrated in Figure 6 [51,52]. The stability of tertiary carbonium ions explains forming methyl side chains of the isostearic acid [51]. The carbonium ion can extract a hydrogen adjacent to an olefinic bond on another FA to produce a more stable carbonium ion due to resonance stabilization of the charge. 
Loss of a proton from the carbon adjacent to the carbonium ion leads to a conjugated olefin and a more reactive FA for further reaction, and this hydrogen transfer can then lead to the observed aromatic dimers [38]. Although den Otter did extensive analysis of the reaction of the monounsaturated oleic acid, his modelling was predicated on the dimerization occurring exclusively through a rapid Diels-Alder reaction, for which approximately half of the oleic acid would first have to be converted to the diunsaturated linoleic acid in a slow hydrogen-transfer step to provide the diene needed to form exclusively cyclic dimers. Not only can the cyclic dimer be formed without a concerted Diels-Alder reaction [23,42], but the mass spectrometry work also shows that the predominant dimer is not a cyclic structure [38]. This unfortunate assumption renders all of his systematic analysis moot.

Among the by-products of the dimerization reaction is the unwanted formation of inter-esters, also referred to as estolides, which were discussed even in the first patents on the clay-catalyzed dimerization process [20,21,41]. This reaction involves the addition of the carboxylic acid of one FA across the olefinic bond of another FA molecule. The reaction has been studied in a simpler system using a strongly acidic ion-exchange resin [53], and the clay catalyst has been modified to enhance the desired production [41]. The process also occurs intra-molecularly to form a lactone, as well as inter-molecularly to form the inter-ester. The easiest way to identify these side reactions is to measure the difference between the acid and saponification numbers [19]. Both inter-esters and lactones are problematic for making hot-melt polyamides, since their mono-functionality makes them chain terminators that limit the polyamide molecular weight. Another issue is the potential for decarboxylation to reduce the functionality of dimers and trimers, based on literature reports [19,54]; however, other literature indicates that decarboxylation requires higher temperatures and specific catalysts [55]. The clay-catalyzed dimerization minimizes these side products compared to strong acid catalysts such as sulfuric acid, probably because the counter ions form salts of the fatty acids at the reaction site in the clay, and it is therefore the best way to make dimer acids.
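To make the acid/saponification comparison concrete, here is a small worked example with hypothetical values (not measured data): the saponification value (SV) titrates both free carboxylic acids and ester linkages, while the acid value (AV) titrates only free acids, so their difference estimates the combined inter-ester and lactone content.

```latex
% Hypothetical illustration only: the SV and AV below are invented numbers.
% SV counts free acid plus ester groups; AV counts free acid only (mg KOH/g).
\[
  \text{Ester value} = \text{SV} - \text{AV}
\]
\[
  \text{e.g.,}\quad 197 - 190 = 7\ \text{mg KOH/g attributable to inter-esters and lactones}
\]
```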
Dimer-Based Polyamides

Polyamide Composition and Properties

From the beginning of FA dimerization technology, converting these dimers to polyamides was recognized as an important outlet for FAs [19]. Dimer-based amides are used in applications such as adhesives, printing inks, and coatings because of their adhesion to both polar and non-polar surfaces and their good strength combined with flexibility [9,22,24,39]. While one broad use is as a curing agent for epoxy adhesives [10,19], this paper focuses on hot-melt adhesives, given the limited number of papers or patents relating structure-property relationships to end-use applications. This is especially true for how dimer-based amides differ from the more common nylons in being less crystalline and more hydrophobic, while resembling nylons in how hydrogen bonding between amides plays a large role in polymer properties.

Typical nylons have good resistance to most organics due to strong hydrogen bonding, but this comes at the cost of high moisture absorption relative to most thermoplastics. The amide groups made from primary amines provide very good reversible cross-links through hydrogen bonds, with the amide being both a proton donor and acceptor [43]. The regular structure of the homo-polymers leads to good crystallinity, although even-chain-length monomers have higher melting points than odd-chain-length monomers due to differences in crystal structure, and the melting point declines as the number of methylene groups between the amide groups increases; see Table 2. The density of amide groups and the even versus odd chain lengths establish the nature of the crystallinity, which has a profound effect on the strength and heat resistance of the dimer polyamides. Thus, it is not surprising that, with the dimer acid having 34 carbon atoms between carboxyl groups, the softening point is much lower than the melting points of structural nylons; for dimer polyamides the softening point is measured, since they do not have a distinct melting point. In addition, with the dimers having a variety of mono- and bi-cyclic along with non-cyclic structures, large, distinct crystalline structures are unlikely to form. The hydrogen-bonding structure alters properties other than the melting/softening point: as the length of the diacids and/or diamines increases, the stiffness drops, but the water resistance and elongation improve.

While the hydrogen bonds play a key role in the dimer-based polyamides, the dimer structure also has a large effect on the polyamide properties [9,22,24,57]. As expected, the 34 carbons between the amide groups make the product less hydrophilic and more hydrophobic than the 4 carbons in nylon 6,6. Even with the shortest diamine, the dimer polyamide has a low softening point (100 °C), which is undesirable for many hot-melt adhesive applications and is likely to give creep problems. This drawback of reduced hydrogen bonding becomes even more serious when the diamine portion is lengthened with hexamethylenediamine or dimer diamine; see Table 2. Consequently, replacing part of the dimer acid with shorter-chain diacids, such as the bio-based sebacic or azelaic acids, improves the softening point and tensile strength at only a modest reduction in elongation; see Table 3 [24,58]. Increasing the sebacic acid content improved the softening point of the product, with just 5% replacement resulting in a 35 °C increase in softening point and a 150 MPa increase in tensile strength, while reducing the elongation by 100%, emphasizing how important hydrogen bonding is to the overall properties of the polyamides.
Table 3. Effect of co-diacid on the softening point, tensile strength, and elongation of dimer polyamides made with ethylenediamine [24].

As mentioned, using sebacic and azelaic acids as co-diacids keeps these polyamides highly bio-based. The commercial natural product ricinoleic acid (12-hydroxy-9-cis-octadecenoic acid) is isolated from castor oil and then subjected to caustic fusion to produce sebacic acid [59,60]. Although this is a truly bio-based compound, the use of lead oxide in some cases to improve the yield of the oxidation reaction makes the material less green. On the other hand, azelaic acid is made by oxidative ozonolysis of oleic acid in a more environmentally friendly process that produces useful pelargonic acid as the main by-product [59,61]. Most diamines are not bio-based, but they are generally a small weight percentage of the various dimer polyamide formulations. The literature does not discuss whether azelaic acid improves properties differently than sebacic acid. However, using a 20- or 18-carbon linear dibasic acid as the co-diacid with a polymeric fatty acid and various diamines gives polyamide hot-melt adhesives with better tensile strength at ambient and elevated temperatures and better moisture resistance than with sebacic or azelaic acid [62]. It was proposed that these property improvements derive primarily from increased crystallinity: the long-chain, linear 20- or 18-carbon dibasic acids are closer in chain length to the dimer and can incorporate it into the co-diacid crystallites, unlike sebacic acid (10 carbons) or azelaic acid (9 carbons). These much more expensive co-diacids are commercially available and bio-based, but the improvement has not been cost effective because a wide range of properties can already be achieved with the conventional monomers [8,10].

The standard polyamides of dimer, co-diacid, and ethylenediamine or hexamethylenediamine are useful in many industrial applications [8,11,22,24,56,63] because of their good bonding to dissimilar substrates; their resistance to water, oils, and greases; and their limited softening until the softening point is reached. Most other hot-melt adhesives, such as polyethylene, polypropylene, ethylene-vinyl acetate, and polyurethane, generally cannot compete with dimer acid polyamides in higher-performance applications [24,64]; these other polymers depend on entanglement and van der Waals forces rather than reversible cross-linking through hydrogen bonds. The wide range of properties of dimer acid hot-melt polyamide adhesives is controlled by the diacid and diamine formulations, which are mainly industrial secrets, rather than by compounding with additives that can migrate over time and cause bond deterioration through loss of interface strength [24]. However, non-bio-based, moisture-cured hot-melt polyurethanes developed in recent years, with good high-temperature and moisture-resistance properties, have provided competition to the dimer polyamides in some markets [65]. Over the years, the use of other diamines has brought greater utility to the polyamides in other important markets. A key example is that the standard polyamides have little adhesion to plastics, including polyvinyl chloride; replacing some of the shorter-chain diamines with piperazine (Figure 7) results in adhesion to polyvinyl chloride and other plastics [66].
This improvement in plastic bonding was not explained until Frihart provided an explanation for the role of piperazine [67]. After showing that rheology and solubility-parameter models of piperazine-containing polyamides were inconsistent with their excellent performance compared to polyamides without piperazine, the acid-base interaction model was shown to be consistent with the data in the literature [58,64,67-70]. The amide made from piperazine, with its secondary amines, is only a proton acceptor, in contrast to amides made from the typical primary amines, despite the increased rigidity of the individual diamide bonds from the lower flexibility of the six-membered ring. Even polyamides made from dimer and dimer diamine do not bond vinyl, despite their limited number of hydrogen bonds. Increasing the amount of piperazine relative to ethylenediamine decreased rate- and time-dependent shear thinning in rheometry, indicating that more piperazine led to less interchain hydrogen bonding [64]. Thus, the piperazine-containing polyamides have sufficient acceptor sites not tied up in internal cohesion bonds, leaving many available for external bonding to the acidic protons of polyvinyl chloride, while most polyamides made from only primary amines have their acceptor sites mainly tied up in internal hydrogen bonds [67]. Another approach is to use diols or amino-alcohols to provide more flexibility and bonding to plastics [71,72], since the alcohol forms only hydrogen-accepting esters and not hydrogen-donating bonds.

Even though the amide groups typically make up only about 15% of the polyamide by weight, they play a strong role in the polyamide properties. The crucial role the diamines play in the polyamide properties is revealed by comparing 1,2-diaminopropane (DAP, Figure 7) with 1,3-diaminopropane and ethylenediamine (EDA) [68].
EDA is a standard diamine component used to obtain strong, fast-setting polyamides, most likely because of its good ability to form hydrogen bonds that lock up polymer segments through the very limited rotation of the diamide structures. However, seemingly small changes in the diamines make a big difference in the polyamide properties; see Table 4. DAP, which like ethylenediamine has an even chain length between amide groups, provides polyamides with reasonable tensile strength and modulus, while 1,3-diaminopropane, which has an odd chain length between amide groups, never develops a truly solid material and slowly cold-flows at ambient conditions; see Figure 4. The poor performance of odd-chain-length diamines is reinforced by the example of 2-methyl-1,5-pentanediamine, which gives a polyamide with cold-flow problems [68], while its isomer hexamethylenediamine does not. An interesting property is that the more hindered DAP gives a longer open time, allowing better positioning of the substrates before the adhesive hardens, in contrast to the fast-setting ethylenediamine, yet it still forms a polyamide with good strength despite the hindrance of the methyl group near one of the amine groups. Thus, the extra methyl group hinders bond rotation but does not prevent the formation of intermolecular hydrogen bonds, while the more flexible 1,3-diaminopropane in these formulations, which also contain piperazine, does not establish strong hydrogen-bonding domains, most likely preventing the formation of substantial crystallites. Another change to the diamine part of the polyamide formulation is to use diamines with polyether groups to improve impact resistance and low-temperature properties [70]; the polyether domains provide a low glass transition temperature due to their reduced hindrance to rotation compared to hydrocarbon domains.

Traditional markets for these adhesives are footwear, cabinet assembly, multiwall bag closure, vinyl-clad windows, heat-shrinkable telecommunication cable connectors, and other assembly applications in the packaging, automotive, and electrical industries [22,24]. The commercial advantage of the hot-melt polyamides is that the amide bonds retain their structure until near the softening point, followed by a rapid viscosity drop for ease of application and then fast strength development when the heat is removed. The combination of high aliphatic bio-based content and strong amide bonds leads to wide chemical and grease resistance, along with good heat resistance. Unfortunately, the literature lacks some data that could improve understanding of the effect of crystallinity on polyamide performance, a notable example being dynamic mechanical analysis sweeps with increasing temperature; these could provide the glass transition temperature, softening-point transitions, and loss of rigidity with increasing temperature. Related bio-based research areas that have been mainly ignored are naval stores, which include other TOFA adhesive uses [14], and TOFA-based amine hardeners for epoxies [15].

Conclusions

The dimerization of unsaturated fatty acids with a clay catalyst, and the conversion of these dimers to polyamide adhesives, have been commercialized for over half a century, but a critical examination of the chemistry involved in both processes has not been published. Some confusion over the structure of dimer products has been resolved by improved analytical methods [38].
Many places in the literature wrongly describe the dimerization as occurring by a Diels-Alder reaction, which requires a dienophile activated by an attached electron-withdrawing group that is not present in this case. The clays are known to be acidic catalysts, and the montmorillonites used in this process are swelling clays that provide a confined acidic reaction environment [8,10]. This type of reaction is also consistent with the branched monomers and the variety of dimer and higher-oligomer structures produced in dimerization. Although the original polymerized fatty acids find many uses, isolating the dimer fraction allows a variety of amides and polyamides to be produced that have unusual properties compared to other thermoplastic polymers. Depending upon the composition, associative oligomers can range from a hard, clear, low-solubility inkjet resin to molecules that gel mineral oil and other low-polarity compounds for candles and other consumer products. The amide bonds provide the oligomer association strength, while the variety of dimer structures limits crystal sizes to below those needed to scatter visible light, leading to clear products. For dimer polyamides, incorporating medium-chain bio-based diacids improves strength, while specific diamines provide a wide range of performance characteristics. The effect of the different diamines is explained by changes in hydrogen bonding between the different amide groups. These materials with a high bio-based content have been largely ignored by current researchers working on bio-based materials.
A Review of the Pattern of Malaria in Children above Neonatal Age at the University of Port Harcourt Teaching Hospital (2006-2011)

A retrospective study of children presenting with symptoms suggestive of malaria between 2006 and 2011 was carried out at the University of Port Harcourt Teaching Hospital. Sociodemographic data, clinical information, and laboratory investigations were retrieved from the laboratory records of the medical microbiology department. Chi-square analysis was used to assess the prevalence of malaria in the different age groups and sexes. The results showed a 70% prevalence of malaria among the 23,698 patients reviewed. Malaria was significantly more frequent (χ2 = 18.66, p < 0.0001) in male patients than in female patients. There was a significantly higher (χ2 = 6.76, p = 0.0093) prevalence (72.70%) of malaria among children under 5 years, and 593 (3.47%) patients had severe malaria (≥3+ parasitemia). Severe anaemia, fever, and bronchopneumonia were the conditions most often associated with severe malaria. The average prevalence from 2006 to 2011 was 70.61%, and the annual prevalence declined from 76.7% in 2009 to 60.6% in 2011. The study showed a high prevalence of malaria among the patients, with children under 5 years being the most significantly affected.

Introduction

Malaria is still a major cause of morbidity and mortality globally [1]. In 2010 it accounted for 655,000 deaths worldwide, with Nigeria responsible for 32% of these deaths [1]. Malaria is endemic in Nigeria and has remained a major cause of morbidity and mortality there [1]. Transmission occurs all year round, and Plasmodium falciparum is the predominant species, transmitted by Anopheles mosquitoes [1]. There was a 20% decline in under-five mortality due to malaria from 2010 to 2013 [1]. This decline can be attributed to the efforts of the National Malaria Control Strategic Plan (NMCSP), which include, among others, reducing malaria-related mortality, reducing malaria parasite prevalence in children under age 5, increasing possession and use of insecticide-treated nets (ITNs) and long-lasting insecticidal nets, introducing and scaling up indoor residual spraying (IRS), increasing the use of diagnostic tests for fever patients, improving appropriate and timely treatment of malaria, and increasing coverage of intermittent preventive treatment of malaria during pregnancy [1]. According to the Nigerian Demographic and Health Survey 2013, malaria accounts for 20% of under-five mortality [1]. About 3.3 billion people are at risk of malaria, with about 247 million people affected yearly, resulting in around a million deaths, mostly in children under the age of 5 years [2]. Malaria accounts for about 50% of out-patient hospital visits in endemic areas such as Nigeria [3]. Older children may serve as reservoirs for the transmission of malaria, leading to persistent asymptomatic infections [4]. There have been various reports of malaria among school-age children, with prevalences ranging from 11% to 26%, especially among children 8-16 years old in malaria-endemic countries [5-7]. The epidemiological patterns of malaria vary widely across the regions of Nigeria, with infant malaria posing a major public health challenge. This study is a retrospective review of the prevalence of malaria among children above neonatal age presenting at the University of Port Harcourt Teaching Hospital, South-South Nigeria, between 2006 and 2011.
Study Design

A retrospective analysis of the laboratory records of paediatric patients with suspected malaria presenting at the University of Port Harcourt Teaching Hospital between 2006 and 2011 was carried out.

Study Area

The study was carried out at the University of Port Harcourt Teaching Hospital, a tertiary health care institution located in Port Harcourt, Rivers State, Nigeria. The hospital has a 500-bed capacity and is a referral center for the Niger Delta area of Nigeria.

Study Population

The study population consisted of paediatric patients (1-192 months old) presenting to the hospital with symptoms suggestive of malaria (fever, malaise, loss of appetite, pain, etc.). Patients between 1 and 192 months old were included in the study, while patients less than 1 month old were excluded.

Data Collection

Data on age, sex, clinical diagnosis, associated conditions, laboratory diagnosis, and results were collected from the data register of the department of medical microbiology of the hospital.

Diagnosis of Malaria

Malaria was diagnosed by microscopic examination of Giemsa-stained thin-film blood smears from each patient according to the standard protocol of Ojurongbe et al. [5].

Data Analysis

The Epi Info statistical software v7 (CDC, USA) was used to analyze the data. Student's t-test was used to determine the difference in age groups between patients positive and negative for malaria. Chi-square analysis was used to determine the difference in the proportion of malaria in the patients according to sex. A p-value of < 0.05 was considered significant, and all tests were carried out at a 95% confidence interval.

Results

Table 2 shows a significant difference between patients positive for malaria and patients without malaria (χ2 = 18.66; p < 0.0001); there were 9619 male and 7497 female patients with malaria. Table 3 shows that 12,443 (72.70%) of the patients positive for malaria were below the age of 5 years, 3269 (19.10%) were between 5 and 10 years old, and 1404 (8.20%) were between 10 and 16 years old. The occurrence of malaria among children under 5 years was statistically significant (χ2 = 6.76, p = 0.0093), while its occurrence in the age groups above 5 years was not (p > 0.05). The microscopic grading of parasitemia showed 13,257 (77.45%) cases of malaria with 1+, 3266 (19.08%) with 2+, and 593 (3.47%) with ≥3+ parasitemia, as shown in Table 4. There was severe complicated malaria in 348 (3.62%) of the male patients and 245 (3.27%) of the female patients, while 9271 (96.38%) of the male patients and 7252 (96.73%) of the female patients had acute non-severe malaria (Table 5). The distribution of severe malaria by age group showed that severe malaria was significantly more frequent (p < 0.0001) among children under 5 years (67.8%), followed by children between 5 and 10 years (21.8%) and children between 11 and 16 years old (10.5%), as shown in Table 6. Figure 2 shows that the occurrence of malaria ranged from 63.5% to 65.7% between January and April and from 68.1% to 75.0% between May and August, with a decline from 73.5% to 58.0% from September to December.
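For readers who want to reproduce the sex-by-result comparison described under Data Analysis, a minimal sketch of the equivalent chi-square test of independence in R (rather than Epi Info) is shown below. The positive counts are those reported for Table 2; the sex-specific negative counts are hypothetical placeholders, since the full two-by-two table is not broken out above, so the output will only approximate the reported statistic.

```r
# Minimal sketch of the chi-square comparison described in Data Analysis.
# Positive counts are as reported above; the sex-specific negative counts
# are HYPOTHETICAL placeholders (they only need to sum to 23698 - 17116).
malaria_by_sex <- matrix(
  c(9619, 3400,    # males:   positive, negative (negative count assumed)
    7497, 3182),   # females: positive, negative (negative count assumed)
  nrow = 2, byrow = TRUE,
  dimnames = list(sex = c("male", "female"),
                  result = c("positive", "negative"))
)
chisq.test(malaria_by_sex, correct = FALSE)  # uncorrected Pearson X-squared, df = 1, p-value
```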
Discussion

There was a 70% prevalence of malaria among all the patients seen, with 72.7% of the patients with malaria being under the age of five. This is consistent with the 70% transmission rate reported by the WHO in 2015 [8] and with prevalence rates between 73% and 80% reported in other studies across Nigeria [9-11]. This high prevalence may be attributed to the endemic nature of malaria in Nigeria and the large tropical rainforest area of Rivers State. Consequences of this high prevalence may include anaemia [12], poor cognitive function due to cerebral malaria [13], and increased mortality among school-age children [14].

The occurrence of malaria was significantly higher in children under 5 years old (72.7%). Among the patients with severe malaria, children under the age of 5 years were also significantly affected (67.8%; p < 0.0001). This is similar to the findings of previous studies indicating that severe malaria occurs significantly more often in children below 5 years, especially in malaria-endemic regions; the Nigerian Demographic and Health Survey final report of 2013 likewise reports that children under age 5 and pregnant women are the groups most vulnerable to illness and death from malaria infection in Nigeria [1,10-16]. Severe anaemia, fever, and bronchopneumonia (26.2%, 18.4%, and 13.6%, respectively) were the conditions most often seen among patients with severe malaria; these conditions have been associated with severe malaria in different studies [6,13,15] and have been shown to impair the effective immune response to malaria, leading to progression of symptoms [16]. The endemic nature of malaria in a country like Nigeria also translates to continuous exposure to mosquito bites and transmission of the malaria parasite, which ultimately leads to the formation of high-affinity antibodies that inhibit parasite growth and disease progression [1,17].

The study showed that the occurrence of malaria peaked during the period of heavy rain (May-July) in the country. This agrees with the report of Olasehinde et al., who observed a peak in the occurrence of malaria during the period of heavy rainfall in Nigeria [10]; pools of stagnant water increase the mosquito population, which increases the rate of parasite transmission [18]. There was an average prevalence of 70.61% between 2006 and 2011, with a decline from 76.7% to 60.6% observed from 2009 to 2011. The decline may be attributed to the efforts of the NMCSP, which include, among others, reducing malaria-related mortality, reducing malaria parasite prevalence in children under age 5, increasing possession and use of insecticide-treated nets and long-lasting insecticidal nets, introducing and scaling up indoor residual spraying, increasing the use of diagnostic tests for fever patients, improving appropriate and timely treatment of malaria, and increasing coverage of intermittent preventive treatment of malaria during pregnancy (intermittent preventive treatment using sulphadoxine-pyrimethamine (IPT-SP) for malaria control in Africa) [1,18,19].

Conclusions

Malaria was significantly more prevalent in children under 5 years. Though there was a decline in prevalence from 2009 to 2011, the overall prevalence remained high.
Microscopic marine invertebrates are reservoirs for cryptic and diverse protists and fungi

Background: Microbial symbioses in marine invertebrates are commonplace. However, characterizations of invertebrate microbiomes are vastly outnumbered by those of vertebrates. Protists and fungi run the gamut of symbiosis, yet eukaryotic microbiome sequencing is rarely undertaken, with much of the focus on bacteria. To explore the importance of microscopic marine invertebrates as potential symbiont reservoirs, we used a phylogenetics-focused approach to analyze the host-associated eukaryotic microbiomes of 220 animal specimens spanning nine different animal phyla.

Results: Our data expanded the traditional host range of several microbial taxa and identified numerous undescribed lineages. A lack of comparable reference sequences resulted in several cryptic clades within the Apicomplexa and Ciliophora and emphasized the potential for microscopic invertebrates to harbor novel protistan and fungal diversity.

Conclusions: Microscopic marine invertebrates, spanning a wide range of animal phyla, host various protist and fungal sequences and may therefore serve as a useful resource in the detection and characterization of undescribed symbioses.

Supplementary Information: The online version contains supplementary material available at 10.1186/s40168-022-01363-3.

Background

The ubiquity of single-celled protists and fungi in various environments, alongside their ecological and metabolic diversity, has facilitated their capacity for niche exploitation. Some species support vast ecosystems as photosynthetic primary producers, while others utilize varying forms of heterotrophy, such as phagotrophy and parasitism, that link ecological networks and trophic scales [1]. Indeed, all eukaryotic groups contain parasites that have evolved to exploit a host [2,3]. The Apicomplexa, for example, form an exclusively symbiotic phylum (many members being harmful parasites), characterized by the presence of a morphological structure called the apical complex, which aids host cell penetration and the initiation of infection [4]. However, relationships between protists, fungi, and metazoan hosts span the entire range of symbiosis. At the other end of the spectrum, photosynthetic dinoflagellates of the Symbiodiniaceae are well-known mutualists in coral [5], lignocellulose-degrading metamonads have facilitated niche expansion and the subsequent success of termites [6], and leucocoprineous fungi form a (typically) vertically transmitted association with fungus-growing attine ants [7]. Nevertheless, microbiome research has most commonly focused exclusively on bacterial communities, despite the understanding that microbial eukaryotes, along with archaea and viruses, likely contribute to the pool of interactions that the term defines [3].

The lack of eukaryotic data in microbiome marker gene surveys is mostly due to methodological limitations rather than a genuine absence of protists. The co-amplification of host DNA when targeting symbiotic microbial eukaryotes can often dwarf non-metazoan reads and nullify attempts to fully characterize the eukaryome. However, various approaches have been developed to mitigate this problem, and sequence characterization of the eukaryotic microbiome is a possibility [8,9]. The majority of microbiome studies focus on vertebrate hosts, which ultimately represent a minute proportion of animal diversity [10,11]; the Arthropoda alone make up ~80% of all known animal species [11].
Microscopic marine invertebrates remain highly underrepresented even in the bacterial microbiome literature [12]. As a consequence, the eukaryotic microbiomes of these animals are almost completely unknown, and even for larger, commercially relevant marine invertebrates the data are scarce [13]. Most invertebrate phyla include species smaller than 1-2 mm belonging to either planktonic or meiofaunal communities [14]; their abundance and diversity raise the possibility that microscopic marine invertebrates interact with microbial eukaryotes in various ways. Many stable, symbiotic microeukaryote-invertebrate associations are well documented [15-17], but protists can also be found inside or associated with these animals simply because they are consumed in the diet [18]. Therefore, classifying microbial eukaryotes as true symbionts or as components of a host-associated microbiome may be difficult with marker gene analysis alone. Here, in an attempt to characterize protistan and fungal diversity in over 200 microscopic marine invertebrates, we rely on phylogenetic reconstruction to identify taxa that fall within typically host-associated clades, mitigating potential overemphasis on, and misidentification of, dietary microorganisms as symbionts. We expected that these minute animals could either be too small to host microbial eukaryotes, in which case we would not find sequence variants that could be reliably identified as symbiotic (i.e., falling within our target taxa), or simply be understudied as viable hosts, which would result in the detection of a large proportion of unidentified lineages.

Methods

Specimen collection

Microscopic invertebrate specimens were taken from a larger cohort of animals collected for bacterial microbiome analysis [12]. All specimens were isolated from one of three locations in British Columbia, Canada (Calvert Island, Quadra Island, and Vancouver) or from Curaçao in the Dutch Caribbean, from July 2017 to January 2019. The majority of specimens were collected from sediment, with a meiobenthic dredge (subtidal) or shovel (intertidal and subtidal); from the water column, via horizontal and vertical plankton tows with a 64-μm mesh; or from macroalgae picked from rock pools. A small number of animals were sampled extemporaneously from other habitats, referred to as "other" in Fig. 1. The samples were taken back to the laboratory and stored at 4 °C. Animals were extracted from sediment and macroalgal samples with MgCl2 treatment [19] or the "bubble and blot" protocol [20], and specimens were isolated with Irwin loops [21] under a dissecting microscope (Zeiss Stemi 508) within 24 h of arrival. All tools were sterilized with 10% bleach and 70% ethanol before use. Prior to preservation, specimens were transferred to droplets of sterile marine water and imaged on either a Zeiss Axioscope A1 or a Leica DMIL microscope (British Columbia and Curaçao locations, respectively) with Axiocam 503 color or Sony a6000 cameras. Specimens were then assigned taxonomic groups following the World Register of Marine Species (WoRMS, https://www.marinespecies.org/index.php) and given a unique alphanumeric code. Recorded specimens were then washed in at least three successive transfers of sterile water and immediately frozen in 20 μL of sterile water at −20 °C until DNA extraction.

Amplicon sequence variant (ASV) generation

Raw reads were initially trimmed with Cutadapt (v3.4) [25] to remove primers before being processed in R [26] using the DADA2 package (v1.14.1) [27]. Following the standard pipeline, trimmed reads were truncated based on quality profiles and filtered using the default parameters (maxN = 0 and maxEE = c(2,2)). Error rates were modeled on the first 100 million bases, and "pseudo" pooling was implemented during sample inference to allow singletons, provided they appeared in more than one library. Paired-end reads were then merged before chimera detection and taxonomic classification with the RDP Naïve Bayesian Classifier against the PR2 database (v4.12.0) [28]. Preprocessing and filtering were done with the phyloseq package [29]. Libraries were removed if their relative abundance of metazoan reads was greater than 70%. Subsequently, all metazoan reads were removed from the remaining samples, as were non-eukaryotic sequences. Libraries with fewer than 1000 reads were discarded, and any ASVs left with a read count of zero were also removed. To account for sequence variation across multiple copies of the 18S rRNA gene in individual protists, a phylogeny was reconstructed for all reads using IQ-TREE (v1.6.12) [30], and ASV sequences were subsequently grouped based on phyloseq's tip_glom function (using agnes hierarchical clustering and a tree height (h) of 0.05) [29]. The Sankey diagram detailing specimen metadata of the final library selection was produced with the ggalluvial package in R [31].
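The ASV workflow just described can be condensed into the R sketch below. This is not the authors' code: the file paths, truncation lengths, and PR2 reference file name are placeholders chosen for illustration, and package defaults may differ across versions.

```r
# Condensed sketch of the ASV pipeline described above (not the authors'
# code). Paths, truncation lengths, and the PR2 file name are placeholders.
library(dada2)

fnFs   <- sort(list.files("trimmed", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs   <- sort(list.files("trimmed", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Quality truncation and filtering with the defaults noted in the text
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     truncLen = c(250, 200),  # chosen from quality profiles
                     maxN = 0, maxEE = c(2, 2))

# Error models on the first ~100 million bases, then "pseudo" pooled inference
errF <- learnErrors(filtFs, nbases = 1e8)
errR <- learnErrors(filtRs, nbases = 1e8)
ddF  <- dada(filtFs, err = errF, pool = "pseudo")
ddR  <- dada(filtRs, err = errR, pool = "pseudo")

# Merge pairs, build the ASV table, drop chimeras, classify against PR2
merged <- mergePairs(ddF, filtFs, ddR, filtRs)
seqtab <- removeBimeraDenovo(makeSequenceTable(merged), method = "consensus")
taxa   <- assignTaxonomy(seqtab, "pr2_version_4.12.0_18S_dada2.fasta.gz")

# Downstream: phyloseq filtering, then grouping of near-identical ASVs on a
# tree with tip_glom(physeq, h = 0.05), as described in the text.
```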
Phylogenetics

To reconstruct potential symbiont phylogenies, ASVs above a minimum relative abundance threshold of 0.1% in any one library were selected according to the broad taxa assigned by the ASV pipeline (e.g., Apicomplexa, Ciliophora, Fungi, etc.). Unassigned sequences were also included if BLAST results showed similarities to the taxon in question. A selection of diverse, near full-length reference sequences was added to structure each phylogeny, including the top five BLAST hits for each ASV in the NCBI nt database (blastn, e-value threshold of 1e−25). All sequences were trimmed to a maximum length of 2000 bp prior to alignment. The multiple sequence alignment was produced using the MAFFT E-INS-i iterative alignment algorithm [32] and masked with trimAl to remove sites with gaps in more than 90% of sequences or with a similarity score of less than 0.001 [33]. A maximum-likelihood phylogeny was reconstructed with IQ-TREE, using a GTR+F+R7 substitution model and 1000 ultrafast bootstraps [34]. The unaligned fasta files were then pruned to remove sequences that were irrelevant and/or represented minor strain variations from the same studies, and the alignment, masking, and phylogenies were repeated. To visualize phylogenies, IQ-TREE output trees were imported into R, rooted with the treeio package [35], and plotted using ggtree [36] and ggtreeExtra [37]. Branch lengths were removed to improve visualization of topologies, and bar charts displaying ASV prevalence in specimens were produced with ggplot2 [38] and later added using Adobe Illustrator.
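The tree-building steps can be sketched from R by shelling out to the same command-line tools, as below. The file names are placeholders and the authors' exact invocations were not published; the flags simply mirror the parameters described above (E-INS-i alignment, the trimAl gap and similarity thresholds, and IQ-TREE with GTR+F+R7 and 1000 ultrafast bootstraps).

```r
# Sketch only: placeholder file names; flags mirror the stated parameters.
library(ape)     # read.tree
library(ggtree)  # tree visualization

system2("mafft", c("--einsi", "asvs_plus_refs.fasta"),
        stdout = "aligned.fasta")                        # E-INS-i alignment
system2("trimal", c("-in", "aligned.fasta", "-out", "masked.fasta",
                    "-gt", "0.1",                        # keep sites with <90% gaps
                    "-st", "0.001"))                     # similarity threshold
system2("iqtree", c("-s", "masked.fasta",
                    "-m", "GTR+F+R7",                    # substitution model
                    "-bb", "1000"))                      # ultrafast bootstraps

tree <- read.tree("masked.fasta.treefile")
# Plot the topology only (branch lengths suppressed, as in the figures)
ggtree(tree, branch.length = "none") + geom_tiplab(size = 2)
```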
Results and discussion

Apicomplexa

The Annelida contained the largest number of distinct apicomplexan sequences (Fig. 2b, STable 1); indeed, annelids are assumed to be the ancestral hosts of gregarines (a subgroup of the Apicomplexa), before these parasites spread to other marine invertebrates [39]. Forty of the 52 ASVs were found in association with only a single host phylum, suggesting that many apicomplexans may have a high degree of host specificity. Although no single ASV was detected in all host phyla, six ASVs were found in four or more phyla. The majority of ASVs (n = 19) were spread across known Eugregarinorida diversity, as in other amplicon surveys [40]. Our data also provide evidence for wider host ranges of many known apicomplexan clades. For instance, one cluster of ASVs (spread across all nine host phyla) was found within the insect-infecting Neogregarinorida (Fig. 2c, SFig. 1), branching sister to a clade containing Syncystis mirabilis, isolated from a "water scorpion" (Nepa cinerea, Insecta) but also found in dragonflies, and Quadruspinospora mexicana from the Mexican lubber grasshopper (Taeniopoda centurio) [41]. Typically, the Neogregarinorida are known for infecting terrestrial hosts [42-45], and they are often found in amplicon surveys of soils and marine sediment [40]. Our phylogeny does include BLAST hits of environmental sequences isolated from soil (and sediment) within this cluster; therefore, we cannot discount the idea that Neogregarinorida sequences found in our marine invertebrates derive from ingested cysts in terrestrial runoff. However, only 14 of the 46 occurrences of Neogregarinorida ASVs were from animals isolated from sediment. Gregarines, in general, are thought to be mostly monoxenous, meaning their life cycle involves just one host organism. Although two Neogregarinorida sequences were found in multiple host phyla (four and eight phyla, respectively), most gregarine ASVs (20/27) were phylum-specific, with the remaining five ASVs found in just two host phyla (Fig. 2). Notably, two of these dixenous gregarines were detected in nematodes; six apicomplexan ASVs were detected in eleven individual nematodes in total (Fig. 2a, STable 1), despite no prior record of nematode-infecting Apicomplexa in the literature.

In the Marosporida, two ASVs found in various host phyla branched with a group of mollusc parasites as sisters to the Rhytidocystidae. ASV_713, found in a single kinorhynch, mollusc, and chaetognath, is the sister group of the remainder of this mollusc-infecting clade, whereas ASV_168 (one of our most widespread apicomplexan lineages, found in all host phyla except Cnidaria and Xenacoelomorpha) was identical to Margolisiella islandica, a heart-infecting parasite of the Iceland scallop Chlamys islandica [45] (Fig. 2d, SFig. 1). We also found several ASVs in the Rhytidocystidae, some of which were isolated from molluscs, platyhelminthes, and arthropods and were therefore found outside their typically associated hosts (annelids) [46]. The most abundant ASV is a sister lineage of the blastogregarine Siedleckia cf. nematoides, a parasite of the bristle worm Scoloplos armiger, but shares only 87.32% sequence identity (Fig. 2e); this lineage was found in all phyla with the exception of Cnidaria. Another of the more abundant ASVs branches as a novel clade within the Coccidia (the subclass to which the Corallicolida, Adeleorina, and Eimeriorina belong; Fig. 2), although this position has low support (Fig. 2f, SFig. 1). Again, these cryptic sequences share low sequence similarity with GenBank accessions (<85% to environmental sequences), and there appears to be no specific host and/or environmental trait consistent across all associated specimens in our dataset. Finally, we detected lineages closely related to fish-infecting Goussia (Fig. 2g), supporting the hypothesis that small invertebrates may serve as paratenic hosts for some species. Three distinct sequences were found in two annelids, three chaetognaths, and two molluscs, respectively.
The sequence found in annelids shares over 96% identity with Goussia ameliae, which was isolated from the pyloric caecum of landlocked alewives (Alosa pseudoharengus) and is not known to infect other hosts [47]. The chaetognath isolate is slightly more dissimilar (94.0% sequence identity) to the highest-scoring reference sequence (Goussia washuti from wild bluegill, Lepomis macrochirus) [48] and likely represents an undescribed species. Finally, the molluscan sequence is closest to that of Goussia pannonica (99.2% sequence identity) from the blue bream (Abramis syn. Ballerus sapa) [49].

Ciliophora

Contrary to apicomplexans, which are entirely host-restricted, most described ciliates are free-living. Consequently, the distribution of ciliate ASVs found in association with microscopic invertebrates does not match the taxonomic diversity and relative abundance predicted by environmental surveys of the group as a whole, but it does reflect what is known about lineages that are predominantly parasitic. We detected relatively few Spirotrichea (mostly oligotrichs and choreotrichs) and Litostomatea, which alone typically make up 70-90% of free-living ciliates in the marine environment [50]. ASVs belonging to these common ciliate groups were almost exclusively found in arthropods (animals at the largest end of the size range investigated here) and are most likely food. Indeed, nearly all ciliate ASVs were detected in arthropod specimens: 41 were detected in arthropods alone, and 50% (41/82) of all arthropod specimens contained at least one ciliate ASV (Fig. 3a, STable 1), giving the largest number of distinct ciliate ASVs among the host phyla (Fig. 3b, STable 1). Half of all Cnidaria (4/8) also contained more than one ciliate ASV (although the total number of specimens analyzed was considerably lower). Conversely, ciliate ASVs were found in just 14.3% of Kinorhyncha and Nematoda (1/7 and 4/28, respectively) (Fig. 3a, STable 1).

The majority of ciliate ASVs in our dataset clearly belong to clades of known ecto- and endosymbionts, with a marked overrepresentation (relative to their lower known diversity) of taxa from the Suctoria and especially the Apostomatia (epibiotic and parasitic subgroups of the Phyllopharyngea and Oligohymenophorea, respectively). Although most of these taxa are already known symbionts of marine invertebrates, they are generally documented in much larger specimens: adult echinoderms [51,52], large cephalopods [53] and other molluscs [54], hydroids [55], and crustaceans [56]. Notably, associations between the suctorian genus Ephelota and small crustaceans (copepods) have also been reported [57,58]. ASV_013, found in 18 host specimens across six phyla, formed a small cluster with other ASVs appearing as a sister group to species of Ephelota (Fig. 3c, SFig. 2). ASV_016 appears to be a member of the genus Rhabdostyla (Fig. 3d, SFig. 2), a well-known invertebrate epibiont noted for its symbiotic relationship with annelids of the genus Salvatoria [59,60]. We observed the same ciliate genus on a specimen of the polychaete Syllis (Fig. 1d). These epibionts sometimes cause misidentification of annelid species, given their morphological similarity to papillae [59]. Many of our ciliate ASVs were notably dissimilar from known reference sequences and often formed uninformative clusters. The most prevalent lineage (ASV_072) appeared in 24 animal specimens across seven of the nine phyla investigated.
It branched in a weakly supported cluster with two more spurious ASVs and uncultured sequences from various marine environments, within the usually host-associated Oligohymenophorea (Fig. 3e, SFig. 2). The detection of two Colpoda-like ASVs is unusual (Fig. 3, SFig. 2), given that the genus Colpoda is quintessentially terrestrial. Despite an old report (based on morphology) of a Colpoda commensal of the sea urchin Toxopneustes variegatus [61] and the existence of marine species within the class Colpodea [62,63], these signals could also be soil-derived cysts ingested by the animals.

Fig. 3. Environmental and host-associated ciliate lineages. Maximum-likelihood phylogeny of all Ciliophora ASVs, reference sequences, and best BLAST hits using the GTR+F+R7 substitution model. Individual ASVs are indicated by white rectangles in a gray ring; accompanying dots reflect the presence in each host phylum (colored accordingly). Black bars in the outer ring reflect the number of specimens associated with each ASV (on a log scale). Nodes are labeled to show ultrafast bootstrap support, and taxonomic clades are annotated by color. Outer red clade labels show host-associated taxa (single line) and epibiotic symbionts (double line). a Percentage of individuals with at least one ASV in the tree. b Absolute number of distinct ASVs. c-e Highlighted lineages discussed in the text.

Fungi

We detected a large diversity of fungal ASVs associated with marine invertebrates (Fig. 4, SFig. 3) and observed fungal-like structures emanating from some specimens (Fig. 1e). Many putatively marine fungi are assigned to species that are also found in terrestrial habitats; this is particularly true of the Ascomycota and Basidiomycota [64,65], which make up the majority of species in our dataset (Fig. 4, SFig. 3). This may be indicative of terrestrial contamination, for instance if marine invertebrates ingested spores, but fungal phylogenies often show putatively marine fungi nested within clades of typically terrestrial lineages [65]. This led to the hypothesis that most marine fungi diversified before animals transitioned to a terrestrial lifestyle [65], but it has also been proposed that many truly marine isolates recently evolved from terrestrial ancestors [64,66]. Some fungi are capable of tolerating vastly different habitats [67] and so may inhabit both marine and terrestrial environments. Our data do, however, support the idea that habitat can influence species localization [65,68,69]. Eighty-two of the 121 unique ASVs were from specimens localized to a single habitat; 49 were from sediment. The Ascomycota and Basidiomycota represented 56 and 54 ASVs, respectively. By comparison, we found just ten ASVs belonging to the Chytridiomycota, which typically dominate other nearshore and sediment samples [64].

Fungal ASVs were found in the majority of specimens in all phyla except Cnidaria (where they were found in only 37.5% of specimens; Fig. 4a, STable 1). Furthermore, all Platyhelminthes, Chaetognatha, and Kinorhyncha contained at least one fungal ASV (Fig. 4a, STable 1). Despite this, there were relatively few unique fungal ASVs in both Chaetognatha and Kinorhyncha (Fig. 4b, STable 1). Of the 121 unique fungal ASVs, 74 were specific to a single host phylum; 25 of these host-phylum-specific sequences were found in arthropods and 19 in platyhelminthes. Although there is ample evidence of coevolution between fungal species and plant hosts [70], each host-phylum-specific lineage in our dataset only ever occurred in one or two specimens. In contrast, two fungal sequences were found in more than 50 specimens, and previous reports have shown how a single fungal species can engage with multiple ant genera [71]. ASV_117, found in six different host phyla, branches sister to a sequence from the Chaetothyriales (Fig. 4c, SFig. 3), a group often referred to as "black yeasts" and sometimes implicated as potentially pathogenic [72,73].
ASV_117, found in six different host phyla, branches sister to a sequence from the Chaetothyriales (Fig. 4c, SFig. 3), a group often referred to as "black yeasts" and sometimes implicated as potentially pathogenic [72,73]. ASV_019 is identical to several Aspergillus and Penicillium spp. (Fig. 4d, SFig. 3), which are often co-isolated from marine samples. Aspergillus spp. infect a wide range of vertebrate hosts, including cetaceans [74], and can produce metabolites detrimental to the photophysiological performance of the coral symbiont Symbiodinium [75]. Both fungal genera have been isolated from diseased coral and sponges [76,77]. Phylogenetic and microsatellite-based analyses have been unable to distinguish between aquatic and terrestrial strains of some species [78], but marine sequences are common [79], and ASV_019 was found in all host phyla except Xenacoelomorpha (Fig. 4d). Our most common fungal sequence was found in all phyla except the Cnidaria and appears to be related to Cladosporium spp., along with several uncultured sequences obtained from the marine environment (Fig. 4e, SFig. 3). Notably, Cladosporium produces an enzyme that digests phytoplankton-derived organic matter, and its abundance has been linked to diatoms in the ocean [80], which are likely ingested by our hosts. Some fungi, like the Cryptomycota (of which we detected one ASV), are indeed parasites of protists and other fungi [80].

Fig. 4 Evidence of fungal ASVs in marine invertebrates. Maximum-likelihood phylogeny of all Fungi ASVs, reference sequences, and best BLAST hits using the GTR+F+R7 substitution model. Individual ASVs are indicated by white rectangles in a gray ring. Accompanying dots reflect the presence in each host phylum (colored accordingly). Black bars in the outer ring reflect the number of specimens associated with each ASV (on a log scale). Nodes are labeled to show UltraFast bootstrap support, and taxonomic clades are annotated by color. a Percentage of individuals with at least one ASV in the tree. b Absolute number of distinct ASVs. c-e Highlighted lineages discussed in the text.

Other potential symbionts

Syndiniales

Marine alveolates (MALVs), or Syndiniales, are thought to be exclusively parasitic lineages that form a paraphyletic group outside of the core dinoflagellate clade [81]. Despite often being the most dominant microbial eukaryotes in environmental marker gene surveys [82,83], the vast majority of Syndiniales are still uncultured, their hosts are unknown, and they are represented only by environmental sequences [84]. There are currently only five characterized species spread across three of the five recognized SSU rRNA clades (Groups I, II, and IV); Groups III and V are inferred only from environmental sequencing and have yet to be observed. We found Syndiniales in all invertebrate phyla except Xenacoelomorpha, with most ASVs found in arthropods and molluscs (Fig. 5a, SFig. 4). This reflects our current understanding of these protists: Syndiniales are thought to be small flagellates that dominate seawater samples and would therefore be found in filter feeders like molluscs, and two of the five known Syndiniales genera typically infect arthropods. Group II, in which the genus Amoebophrya is described, represents eight of our ASVs.
Amoebophrya has been found in a wide range of dinoflagellate hosts and was recently estimated to represent eight different species [85]. Most of our Group II ASVs branched outside of the Amoebophrya clade. Of the further eight ASVs that fall within the Group IV Syndiniales, all but one were found in, but are not exclusive to, arthropods. Three of these sequences form orphan lineages that appear to have diverged prior to the clade containing both known Group IV genera: Syndinium (found in copepods and radiolarians [86]) and Hematodinium (found in crustaceans [87]). The most frequently detected Syndiniales sequence in our dataset (ASV_198) belongs to Group I (Fig. 5b, SFig. 4). However, it appears distinct from the two described genera within Group I: Ichthyodinium and Euduboscquella (syn. Duboscquella). Ichthyodinium spp. infect fish eggs [88], whereas Euduboscquella spp. are found in tintinnid ciliates [89]. ASV_198, found in the Annelida, Mollusca, Platyhelminthes, Kinorhyncha, Arthropoda, and Nematoda, branches sister to an environmental sequence from the Northwest Pacific Ocean. We did, however, find ASVs that share much more recent ancestors with both Group I type species. Furthermore, most of our Syndiniales likely fall in Group I, which is also noted to be the dominant group in other zooplanktonic hosts [90].

Perkinsea

Perkinsea are alveolate parasites that can cause mass mortality events in fish, molluscs, and amphibians [91]. We detected a single perkinsid ASV in seven microscopic invertebrate specimens (two annelids, two molluscs, and three arthropods) (Fig. 5c, SFig. 4). Notably, this sequence is nearly 96% identical to that of Perkinsus qugwadi, a species that has caused sporadic mass mortality events in Yesso scallop (Patinopecten yessoensis) stocks in British Columbia [92]. Given that P. qugwadi shares ~96% sequence identity with some other Perkinsus species, it is likely that we have detected a novel but related species. Notably, though, all of our associated specimens were isolated from the same location as previous P. qugwadi outbreaks (Quadra Island) [92,93].

Stramenopiles

We generally found lower proportions of specimens with potentially host-associated stramenopile ASVs, ranging from zero kinorhynchs and cnidarians to 26.7% (4/15) of molluscs (Fig. 5d, SFig. 4). The most prevalent sequence was found in just seven individual hosts and appears to be related to the genus Labyrinthula (Fig. 5e, SFig. 4), a well-known pathogen of various seagrass species also noted for its association with other algae and phytoplankton [94]. These seven specimens were, however, isolated not from macroalgae but from sediment. Several studies describe pathologies caused by thraustochytrids in large molluscs [95], and it has been suggested that a specific pathological association may exist. However, most of our sequences within the Thraustochytrida were isolated from host phyla other than Mollusca; ASV_430 was found in one molluscan specimen (a solenogaster), but not exclusively (Fig. 5f, SFig. 4). Sequences from both of these Labyrinthulomycetes orders (Labyrinthulida, to which the abovementioned Labyrinthula belongs, and Thraustochytrida) could represent saprotrophic organisms; however, several invertebrate associations exist [96,97]. The same can be said of the oomycetes, which are notable for their wide host and geographic range [98-100].
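Identity figures such as the ~96% reported for the perkinsid ASV are typically computed as pairwise percent identity over an alignment. A minimal sketch, assuming two pre-aligned 18S fragments (gapped strings) rather than the output of any particular aligner; note that conventions for counting gaps differ (e.g., BLAST-style identity is computed differently):

```python
def percent_identity(a: str, b: str) -> float:
    """Pairwise identity over aligned columns, ignoring double gaps.

    `a` and `b` are equal-length aligned sequences with '-' for gaps.
    This is one common convention among several.
    """
    assert len(a) == len(b)
    cols = [(x, y) for x, y in zip(a.upper(), b.upper())
            if not (x == "-" and y == "-")]
    matches = sum(x == y and x != "-" for x, y in cols)
    return 100.0 * matches / len(cols)

# Toy aligned fragments (illustrative, not real 18S data):
print(round(percent_identity("ACGT-ACGTAC", "ACGTAACGTAT"), 1))  # 81.8
```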
Conclusions

Our sampling uncovers a role for microscopic invertebrates in the ecology of microbial eukaryotes. We detected a wide range of diverse organisms, often expanding the host range of previously characterized microbes, as well as several clades that we could not identify using presently archived reference sequences. Our data thus support the hypothesis that, despite their size, microscopic marine invertebrates still harbor protist and fungal symbionts, many of which are currently uncharacterized. It should be noted that short regions of the 18S SSU gene alone limit our ability to distinguish unique species; even distinct protistan species can have almost identical 18S genes [101]. Utilizing the full length of the 18S gene would be the next step in improving the taxonomic resolution of our potential symbionts [102]. Although we acknowledge the potential for terrestrial run-off, this work supports the notion that protists and fungi should be included in analyses of invertebrate microbiomes, and it highlights host taxa that could warrant further exploration.
Discrete Isothermic Nets Based on Checkerboard Patterns

This paper studies the discrete differential geometry of the checkerboard pattern inscribed in a quadrilateral net by connecting edge midpoints. It turns out to be a versatile tool which allows us to consistently define principal nets, Koenigs nets and eventually isothermic nets as a combination of both. Principal nets are based on the notions of orthogonality and conjugacy and can be identified with sphere congruences that are entities of Möbius geometry. Discrete Koenigs nets are defined via the existence of the so-called conic of Koenigs. We find several interesting properties of Koenigs nets, including their being dualizable and having equal Laplace invariants. Isothermic nets can be defined as Koenigs nets that are also principal nets. We prove that the class of isothermic nets is invariant under both dualization and Möbius transformations. Among other things, this allows a natural construction of discrete minimal surfaces and their Goursat transformations.

Introduction

Discretizing principal curvature nets is of great interest not only from a differential geometric point of view, but also in geometry processing, computer graphics and even freeform architecture [14,10]. The most prominent versions of discrete principal nets are circular nets and conical nets [1,10]. A new discretization, suggested by [13], is based on the checkerboard pattern inscribed in a quadrilateral net, constructed by connecting edge midpoints. This approach has already proven to be useful in various applications [6,12,7]. Its effectiveness suggests that there is more to the concept than just the good numerical approximation qualities already hinted at by [13]. Indeed, we find that a rich discrete theory can be built upon these checkerboard patterns.

A checkerboard pattern is a quadrilateral net where every second face is a parallelogram. The edges of these parallelograms can be seen as discrete derivatives. If the faces in between the parallelograms are all planar, we speak of a conjugate checkerboard pattern. If additionally the parallelograms are all rectangles, we speak of a principal checkerboard pattern. As the concept of checkerboard patterns is Euclidean in nature, it is surprising that we can show principal nets to be Möbius invariant if they are seen as sphere congruences [15]. Lifting these sphere congruences to the projective model of Möbius geometry preserves principality and offers the appropriate environment to efficiently study these geometric objects. For a net with planar faces, the supporting lines of neighboring edges intersect, and every face can be associated with six such intersection points. In [4], discrete Koenigs nets have been characterized by the property that these six points lie on a common conic section, the so-called conic of Koenigs [8]. We apply this definition to a checkerboard pattern. The resulting discrete Koenigs nets enjoy several interesting properties, such as projective invariance and the existence of dual nets, similar to the approach in [2]. Usually, Koenigs nets have been known as nets with equal Laplace invariants. While this property has been lost with previous discretizations of Koenigs nets, we manage to retain it in a natural way.
We define discrete isothermic nets as discrete Koenigs nets that are also principal. Analogous to the classical smooth theory, the class of discrete isothermic nets is invariant under both dualization and Möbius transformations. This is not only interesting from a theoretical point of view, but also offers a practical way to define and construct discrete minimal surfaces as surfaces that are dual to their own Gauß image. Consequently, the dual of any isothermic net on the unit sphere can be seen as a minimal surface. All of these steps can now be easily discretized with our approach.

3. Checkerboard patterns

3.1. Preliminaries. In this paper we study two-dimensional nets f : D → R³. All our constructions are local, which is why we can always assume D = Z². To denote the one-ring or two-ring neighborhood of a vertex f(k, l) we use the shift notation as can be seen in Figure 1, left. The index i resp. ī indicates that the i-th coordinate is increased resp. decreased by one, with i ∈ {1, 2}. For instance, f_1 = f(k+1, l) and f_1̄ = f(k−1, l). We call the images of f the vertices and the pairs (f, f_1) or (f, f_2) the edges of the net. Further, we denote by Q_f the face (f, f_1, f_12, f_2). If no confusion can arise, we drop the index and just write Q.

Definition 1. A checkerboard pattern is a regular quad net where every second face is a parallelogram.

Even if at first glance the definition of checkerboard patterns seems quite restrictive, they are actually very natural objects. From any given net f we can easily construct a checkerboard pattern c_f by midpoint subdivision as described in [13]: the vertices of c_f are the edge midpoints of f. There are then two kinds of faces in c_f. The first type of face is formed by the midpoints of the edges of each face Q of f (compare Figure 1, right). It is elementary that these faces are parallelograms whose edges are parallel to the two diagonals of Q. We will refer to them as first order faces, as their edges can be interpreted as discrete first order derivatives. We denote the first order face associated to the quadrilateral Q_f(k, l) by B_f(k, l).

The second type of face is formed by the midpoints of the edges emanating from a common vertex of f. Those faces are, in general, non-planar quadrilaterals. We will refer to them as second order faces, because we associate properties related to second order derivatives with them. The second order face associated to the vertex f(k, l) will be denoted by W_f(k, l), compare Figure 2. If no confusion can arise we will drop the index f in all quantities. Following Peng et al. [13] we call c_f the checkerboard pattern of f and f the control net of c_f, see Figure 2. Note that for a given checkerboard pattern there is a three-parameter family of control nets. A control net is uniquely determined after the choice of an initial vertex, as all other vertices can be obtained through iterated reflection at the vertices of the checkerboard pattern.

Remark 2. The checkerboard pattern approach can be extended to nets with combinatorial singularities. For each n-gon, midpoint subdivision creates an inscribed n-gon, see e.g. the inscribed triangle in Figure 2, right.

The first order faces can be associated with the diagonal directions u = (1, 1)ᵀ and v = (−1, 1)ᵀ of the parameter domain. Intuitively speaking, the parameter lines of f and c_f enclose an angle of 45 degrees. So, we can think of c_f as being parameterized along the directions u and v in the coordinate plane.
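The midpoint-subdivision construction of c_f, together with the face tests that Definition 3 below will require, is straightforward to implement. A minimal sketch; the array layout and helper names are ours, not from [13]:

```python
import numpy as np

def checkerboard_faces(f):
    """f: control-net vertices, shape (K, L, 3).
    Returns the first order faces B (shape (K-1, L-1, 4, 3)) and the
    second order faces W (shape (K-2, L-2, 4, 3)) of the inscribed
    checkerboard pattern."""
    f = np.asarray(f, dtype=float)
    mu = 0.5 * (f[:-1] + f[1:])        # midpoints of (f, f_1)-edges
    mv = 0.5 * (f[:, :-1] + f[:, 1:])  # midpoints of (f, f_2)-edges
    # B(k, l): the edge midpoints of the quad Q(k, l); a parallelogram
    # whose edges are parallel to the diagonals of Q.
    B = np.stack([mu[:, :-1], mv[1:], mu[:, 1:], mv[:-1]], axis=2)
    # W(k, l): midpoints of the four edges emanating from the inner
    # vertex f(k, l); in general a non-planar quadrilateral.
    W = np.stack([mu[1:, 1:-1], mv[1:-1, 1:], mu[:-1, 1:-1], mv[1:-1, :-1]],
                 axis=2)
    return B, W

def is_rectangle(face, tol=1e-9):
    """A parallelogram face (4, 3) is a rectangle iff adjacent edges
    are orthogonal."""
    e1, e2 = face[1] - face[0], face[3] - face[0]
    return abs(e1 @ e2) <= tol * np.linalg.norm(e1) * np.linalg.norm(e2)

def is_planar(face, tol=1e-9):
    """A quad face (4, 3) is planar iff the triple product of the
    three edge vectors from face[0] vanishes (numerically)."""
    v1, v2, v3 = face[1] - face[0], face[2] - face[0], face[3] - face[0]
    return abs(v1 @ np.cross(v2, v3)) <= tol
```

In these terms, a pattern is orthogonal if every face of B passes is_rectangle, conjugate if every face of W passes is_planar, and principal if both hold.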
If the net f samples a smooth parameterization φ, the edge vectors of the first order face B_f(k, l) approximate the partial derivatives ∂_u φ and ∂_v φ at the center of the face up to second order, as a simple Taylor expansion shows. Moreover, it can be shown by Taylor expansion that the difference of opposite edge vectors in a second order face W_f(k, l) approximates ∂_uv φ(k, l) by first order. This motivates the notation δ_u f and δ_v f for the edge vectors of B_f and gives rise to the following definition.

Definition 3. We call a checkerboard pattern orthogonal if its first order faces are rectangles. We call it conjugate if its second order faces are planar. A checkerboard pattern is principal if it is both conjugate and orthogonal.

Remark 4. Conjugacy of a checkerboard pattern c_f is already determined by its control net f, and so are orthogonality and principality. Indeed, the second order faces of c_f are planar if and only if the two nets defined by the diagonals of f have planar faces, compare Figure 3, right. Thus the class of conjugate checkerboard patterns is invariant under projective transformations applied to the vertices of the control net.

4. Curvature Theory

In this section we define a discrete version of the shape operator connecting nets to their Gauß images. We find that the properties of the shape operator for conjugate or principal nets are consistent with the smooth theory, see Figure 5. Moreover, the discrete shape operator provides a way to numerically approximate smooth principal curvature directions, compare Figure 6. We start by defining the Gauß image of a net.

Definition 5. Let f be a net and let n be a net with vertices on the unit sphere S². We call n the Gauß image or vertex normals of f. Additionally, for a (possibly non-planar) face Q with diagonals d₁ and d₂ we define the face normal
N = (d₁ × d₂) / ‖d₁ × d₂‖.   (1)
For Q_f the diagonals are parallel to the edge vectors δ_u f and δ_v f of B_f, so N is a unit normal of B_f. The generalized surface area of Q_f is the surface area of the orthogonal projection of Q_f onto the supporting plane of B_f.

Remark 6. For planar quadrilaterals without self-intersections the generalized surface area is the same as the surface area. The face normal N is a normal vector to B_f, and for a planar face Q it coincides with a normal vector to Q. The vertex normal n at f is also the face normal of the corresponding second order face W_f in the sense of formula (1).

Having defined a Gauß image n for a net f, we can relate the discrete derivatives (δ_u f, δ_v f) and (δ_u n, δ_v n) with the help of the corresponding checkerboard patterns c_f and c_n. The idea is to define the shape operator as the linear mapping (δ_u f, δ_v f) ↦ (δ_u n, δ_v n). However, we face the problem that (δ_u f, δ_v f) and (δ_u n, δ_v n) do not necessarily span the same two-dimensional subspace. This is overcome by projecting in direction of N, leading to the following definition:

Definition 7. Let f be a net, let n_f be its Gauß image and let P_N be the orthogonal projection along the corresponding face Gauß image N. We define S as the function on Z² that maps (k, l) to the linear operator on the space spanned by δ_u f and δ_v f determined by
S δ_u f = P_N δ_u n,   S δ_v f = P_N δ_v n,
where all entities are evaluated at the point (k, l) ∈ Z². We call S(k, l) the shape operator of the face Q_f(k, l). If no confusion can arise we drop the argument (k, l). The eigenvalues of S(k, l) are denoted by the symbols κ₁ and κ₂ and are called the principal curvatures. The eigenvectors of S(k, l) are the principal curvature directions.
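Definition 7 is directly computable from the four edge vectors of B_f and B_n. A minimal sketch (our helper names, NumPy):

```python
import numpy as np

def shape_operator(du_f, dv_f, du_n, dv_n):
    """Discrete shape operator of one face (Definition 7).

    du_f, dv_f : edge vectors of the first order face B_f.
    du_n, dv_n : edge vectors of the Gauss-image face B_n.
    Returns the 2x2 matrix of S in the basis (du_f, dv_f) and the
    principal curvatures (its eigenvalues)."""
    du_f, dv_f = np.asarray(du_f, float), np.asarray(dv_f, float)
    du_n, dv_n = np.asarray(du_n, float), np.asarray(dv_n, float)
    N = np.cross(du_f, dv_f)
    N = N / np.linalg.norm(N)
    proj = lambda x: x - (x @ N) * N          # orthogonal projection P_N

    # Express P_N(du_n), P_N(dv_n) in the basis (du_f, dv_f). Since the
    # projected vectors lie in that span, least squares solves exactly.
    A = np.column_stack([du_f, dv_f])          # 3x2 basis matrix
    B = np.column_stack([proj(du_n), proj(dv_n)])
    coords = np.linalg.lstsq(A, B, rcond=None)[0]   # 2x2 matrix of S
    k1, k2 = np.linalg.eigvals(coords)
    return coords, k1.real, k2.real
```

For a principal checkerboard pattern this coordinate matrix is diagonal in the chosen basis (compare Corollary 9 below).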
For each face Q we can define an offset face Q^t by intersecting the plane parallel to B_f at distance t with the lines spanned by the vertices of Q and their corresponding vertex normals n. Similar to [5,11,14], the area of Q^t can be expressed by the Steiner formula (3), which can be shown by short algebraic manipulations.

Lemma 8. For a conjugate checkerboard pattern the identities ⟨S δ_u f, δ_v f⟩ = ⟨δ_u f, S δ_v f⟩ = 0 hold. Thus the shape operator is symmetric.

Proof. For a conjugate checkerboard pattern c_f the Gauß image n is the normal vector of the corresponding second order face W_f. Thus, it is orthogonal to all the edges that W_f shares with neighboring first order faces. As B_f is a parallelogram, both n_f and (n_f)_12 are orthogonal to the edge δ_v f. We find that
⟨S δ_u f, δ_v f⟩ = ⟨P_N δ_u n, δ_v f⟩ = ⟨δ_u n, δ_v f⟩ = ½ ⟨(n_f)_12 − n_f, δ_v f⟩ = 0,
where we used that δ_v f lies in the plane of B_f and is therefore unaffected by the projection P_N. The same argument applies to ⟨S δ_v f, δ_u f⟩. As δ_u f, δ_v f constitute a basis of the domain of the shape operator, the shape operator is symmetric.

Corollary 9. For a principal checkerboard pattern the edge vectors (δ_u f, δ_v f) of B_f are eigenvectors of the shape operator.

Proof. This follows immediately from ⟨S δ_u f, δ_v f⟩ = ⟨δ_u f, S δ_v f⟩ = 0, since for a principal pattern the basis vectors δ_u f and δ_v f are orthogonal.

As the partial derivatives can be observed in first order faces, so can the first fundamental form I. By using the first order face B_n of the Gauß image and the corresponding derivatives δ_u n and δ_v n we can analogously define a second fundamental form.

Definition 10. Consider a net f and its Gauß image n. We define the first and second fundamental forms by letting
I = ( ⟨δ_u f, δ_u f⟩  ⟨δ_u f, δ_v f⟩ ; ⟨δ_v f, δ_u f⟩  ⟨δ_v f, δ_v f⟩ ),
II = ( ⟨δ_u f, δ_u n⟩  ⟨δ_u f, δ_v n⟩ ; ⟨δ_v f, δ_u n⟩  ⟨δ_v f, δ_v n⟩ ).

Lemma 11. A matrix representation Σ of the shape operator with respect to the basis (δ_u f, δ_v f) is given by Σ = I⁻¹ II.

Proof. When using coordinates with respect to (δ_u f, δ_v f), the inner product ⟨·, ·⟩ is represented by the coordinate matrix I. For any vector v ∈ span(δ_u f, δ_v f) we have ⟨v, δ_u n⟩ = ⟨v, P_N δ_u n⟩, and likewise for δ_v n. Thus the bilinear form ⟨·, S·⟩ is represented by the coordinate matrix II: for two vectors w₁ and w₂ with coordinate vectors ŵ₁ and ŵ₂ we find that ⟨w₁, S w₂⟩ = ŵ₁ᵀ II ŵ₂ = ŵ₁ᵀ I Σ ŵ₂. It follows that II = I Σ.

Remark 12. Due to Lemma 8, in a conjugate checkerboard pattern the second fundamental form is a diagonal matrix.

In analogy to [11] and [5], the area defined in Definition 5 can be computed by a mixed area form. This motivates the following definition.

Lemma and Definition 13. Let A(·, ·) be the mixed area form, defined for two quadrilaterals with the same normal N_f = N_g by polarization of the area form, so that area(B_f + t B_g) = A(B_f, B_f) + 2t A(B_f, B_g) + t² A(B_g, B_g). Then the areas appearing in the Steiner formula (3) can be written as such mixed areas.

The mixed area form is closely related to the mean and Gaußian curvatures.

Lemma 14. For a net f and its Gauß image n we define Q̄_n as the orthogonal projection of Q_n onto the supporting plane of B_f. The mean curvature H and the Gaußian curvature K can then be expressed through (mixed) areas of Q_f and Q̄_n; see the reconstructed formulas below.

Proof. These identities can be shown by algebraic manipulations, in particular making use of the Lagrange identity ⟨a × b, c × d⟩ = ⟨a, c⟩⟨b, d⟩ − ⟨a, d⟩⟨b, c⟩.

Remark 15. Definition 5 requires that every normal vector lies exactly on the unit sphere. For principal nets one can relax this requirement and instead adapt the lengths of the normal vectors such that the first order face B_n of n is parallel to the first order face B_f of f, as we will see in Section 5.3. This does not change principal directions, and the Steiner formula (3) still holds.
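The explicit Steiner formula (3) and the identities of Lemma 14 did not survive in this copy. Under the assumption that they take the standard mixed-area form of the discrete curvature theory in [5,11] (with signs depending on orientation conventions), they would read:

```latex
% Reconstruction, assuming the mixed-area curvature theory of [5,11];
% A(.,.) is the mixed area form of Lemma and Definition 13, and A(Q,Q)
% denotes the generalized surface area of Q.
\operatorname{area}\bigl(Q^{t}\bigr)
   = \bigl(1 - 2Ht + Kt^{2}\bigr)\,\operatorname{area}(Q)
   \qquad (3)
\qquad\text{with}\qquad
H = -\,\frac{A\bigl(Q,\overline{Q}_n\bigr)}{A\bigl(Q,Q\bigr)},
\qquad
K = \frac{A\bigl(\overline{Q}_n,\overline{Q}_n\bigr)}{A\bigl(Q,Q\bigr)}.
```

With these identifications the eigenvalues κ₁, κ₂ of Σ satisfy H = (κ₁ + κ₂)/2 and K = κ₁κ₂, and the Steiner formula factors as area(Q^t) = (1 − κ₁t)(1 − κ₂t) area(Q).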
5. Möbius Transformations of Checkerboard Patterns

This section discusses a way to apply Möbius transformations to orthogonal nets. This was originally introduced by Techter [15], who showed that the orthogonality of a net is equivalent to the existence of a sphere congruence of orthogonally intersecting spheres. A Möbius transformation can then be applied to these spheres, and from the transformed congruence we can obtain the transformed orthogonal net. We show that the class of principal nets is invariant under such Möbius transformations. Moreover, the orthogonal sphere congruence allows us to embed principal nets in the projective model of Möbius geometry PR^{4,1}. This turns out to be a powerful tool for studying principal nets and gives rise to a non-Euclidean generalization of discrete principal nets.

We write s = (c, r²) for a sphere s with center c ∈ R³ and squared radius r² ∈ R. Two spheres s₁ = (c₁, r₁²) and s₂ = (c₂, r₂²) intersect orthogonally if and only if
‖c₁ − c₂‖² = r₁² + r₂².
Note that by this definition orthogonality extends to spheres of negative squared radius. We can interpret this in the projective model of Möbius geometry by including the points inside the light cone, as will be explained in more detail later in this section. Geometrically, the orthogonal intersection with a sphere of negative squared radius can be understood as illustrated by Figure 7. This setup allows for the following lemma and definition.

Lemma and Definition 16. Let f be a net and r² : Z² → R. We call the function s = (f, r²) a sphere congruence and interpret it as a family of spheres with centers in f and possibly complex radius r. If and only if the checkerboard pattern c_f is orthogonal, there exists a one-parameter family of sphere congruences s = (f, r²) such that neighboring spheres intersect orthogonally. We call such a sphere congruence the Möbius representation s_f of f and c_f. If the checkerboard pattern associated to a sphere congruence s is principal, we call s a principal sphere congruence.

Proof. Consider a quadrilateral Q = (f, f_1, f_12, f_2). We fix the squared radius r² of s = (f, r²) at an initial point (k, l) ∈ Z². This uniquely determines the radii r₁ and r₂, since orthogonal intersection requires
r₁² = ‖f_1 − f‖² − r²  and  r₂² = ‖f_2 − f‖² − r².
Now an easy computation shows that the two candidate values for r₁₂², namely ‖f_12 − f_1‖² − r₁² and ‖f_12 − f_2‖² − r₂², agree if and only if ⟨f_12 − f, f_1 − f_2⟩ = 0, i.e., if and only if the diagonals of Q are orthogonal. Hence, the radius r₁₂ is well defined if and only if the checkerboard pattern c_f is orthogonal. This process can be continued unambiguously, so every radius depends only on the choice of the initial radius.

Remark 17. If the domain of the net f is not simply connected, the orthogonal sphere congruences s_f do not exist in general. It is an interesting question for further research which properties might guarantee the existence of a Möbius representation for more complex topology or for combinatorial singularities.

Lemma and Definition 18. Let f be the control net of an orthogonal checkerboard pattern c_f and let s_f be its Möbius representation. The image of s_f under a Möbius transformation is again an orthogonal sphere congruence with a corresponding net f′ and checkerboard pattern c_{f′}.

The invariance of orthogonal nets under such Möbius transformations was observed in [15]. The extension to complex spheres and the next theorem are contributions of this paper.

Theorem 19. Principal checkerboard patterns are mapped to principal checkerboard patterns under Möbius transformations.

Proof. This follows directly from Theorem 22, as will be explained later on in this section.
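The radius propagation in the proof of Lemma and Definition 16 is a one-liner per edge; a minimal sketch (our function name):

```python
import numpy as np

def propagate_radii(f, f1, f2, f12, r2):
    """Propagate squared radii of an orthogonally intersecting sphere
    congruence over one quad (cf. Lemma and Definition 16).
    Squared radii may be negative (complex radius)."""
    d2 = lambda a, b: float(np.dot(np.subtract(a, b), np.subtract(a, b)))
    r2_1 = d2(f, f1) - r2            # orthogonality of s and s_1
    r2_2 = d2(f, f2) - r2            # orthogonality of s and s_2
    via_1 = d2(f1, f12) - r2_1       # candidate r2_12 via s_1
    via_2 = d2(f2, f12) - r2_2       # candidate r2_12 via s_2
    # The candidates agree iff the diagonals of Q are orthogonal,
    # i.e. iff the checkerboard pattern is orthogonal:
    assert abs(via_1 - via_2) < 1e-9, "checkerboard pattern not orthogonal"
    return r2_1, r2_2, via_1

# Unit square: diagonals are orthogonal, so the propagation closes up.
print(propagate_radii((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), 0.2))
# -> (0.8, 0.8, 0.2)
```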
5.1. Projective model of Möbius geometry. To prove Theorem 19 we embed the Möbius representation s_f of a principal checkerboard pattern c_f into the projective model of Möbius geometry, see [1]. Let e₁, …, e₅ be the canonical basis vectors of the five-dimensional Minkowski space R^{4,1}. It is equipped with the inner product
⟪x, y⟫ = x₁y₁ + x₂y₂ + x₃y₃ + x₄y₄ − x₅y₅.
For x ∈ R^{4,1} ∖ {0} we write [x] for its projective equivalence class, i.e., [x] = {λx : λ ∈ R ∖ {0}}. We write PR^{4,1} for the space of these equivalence classes. We view c as a vector in R^{4,1} whose fourth and fifth components are zero, and we define the vectors e₀ := ½(e₅ − e₄) and e_∞ := ½(e₄ + e₅). Then any sphere s = (c, r²) can be identified with a point of PR^{4,1} by the mapping
ι(s) = [c + e₀ + (⟨c, c⟩ − r²) e_∞].
Points can be seen as spheres with radius zero, so ι extends to points in R³. Observe that ⟪ι(s), ι(s)⟫ = r². Thus the set of spheres with radius zero is identified with the light cone L := {x ∈ PR^{4,1} : ⟪x, x⟫ = 0}. The points inside the light cone are those with ⟪x, x⟫ < 0 and correspond to spheres with negative squared radius.

From a Möbius geometric point of view, planes in R³ are spheres with infinite radius and center at infinity. We write ℓ = (n, d) for the plane defined by the equation ⟨n, x⟩ = d. The mapping ι can now be extended to spheres with infinite radius (i.e., planes) by
ι(ℓ) = [n + 2d e_∞].
The advantage of the projective model of Möbius geometry lies in the well-known linearization of orthogonal intersection and Möbius transformations [1].

Theorem 20. Two spheres s₁ and s₂ in R³ with squared radii in R ∪ {∞} intersect orthogonally if and only if ⟪ι(s₁), ι(s₂)⟫ = 0. If one sphere has radius 0, orthogonal intersection is equivalent to just intersection. Möbius transformations in R³, canonically extended to spheres and planes, are exactly the orthogonal transformations in PR^{4,1}.

Definition 21. Let g be a net Z² → PR^{4,1}. If adjacent vertices are orthogonal, i.e., ⟪g, g_1⟫ = ⟪g, g_2⟫ = 0, and the corresponding checkerboard pattern c_g is conjugate, we call g a pseudo-principal net in PR^{4,1}. In order to avoid confusion we will denote nets in PR^{4,1} by g, while we use f for nets in R³.

Theorem 22. Let f be a net with orthogonal checkerboard pattern and let s_f be a corresponding sphere congruence. Then ι ∘ s_f is a net Z² → PR^{4,1}, where the vertices are the images of s_f under ι, and s_f is a principal sphere congruence if and only if ι ∘ s_f is a pseudo-principal net.

Proof. Orthogonality of adjacent vertices of a net in PR^{4,1} is equivalent to the orthogonal intersection of adjacent spheres in R³.

Let s_f be a principal sphere congruence in R³. The four spheres s_1̄, s_2̄, s_1 and s_2 all intersect both s and the plane ℓ spanned by the centers f_1̄, f_2̄, f_1 orthogonally (this plane also contains f_2, since c_f is conjugate); compare Figure 8, left. Consequently, the four points ι(s_1), ι(s_1̄), ι(s_2) and ι(s_2̄) all lie in the subspace ι(s)⊥ ∩ ι(ℓ)⊥. Its dimension is two, since ι(ℓ) and ι(s) are linearly independent. Hence, ι(s) is a pseudo-principal checkerboard pattern in PR^{4,1}.

Now let g be a pseudo-principal net in PR^{4,1} and let U be the two-dimensional projective subspace that contains the four vertices g_1, g_1̄, g_2 and g_2̄. We denote by U⊥ its orthogonal complement with respect to the Minkowski inner product ⟪·, ·⟫. The space of all points in PR^{4,1} that represent a plane in R³ is given by {e_∞}⊥. Referring to the projective space PR^{4,1} we have dim U⊥ = 1 and dim {e_∞}⊥ = 3. It follows that dim(U⊥ ∩ {e_∞}⊥) ≥ 0, so this intersection contains at least one point ι(ℓ). Since ℓ is a plane that intersects all points in U orthogonally, we conclude that the centers of g_1, g_1̄, g_2 and g_2̄ all lie in ℓ, and thus ι⁻¹(g) is a principal sphere congruence.

Now Theorem 19 easily follows from Theorem 22.
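Before turning to the proof, note that the lift ι and the Minkowski product are easy to check numerically. A minimal sketch of the formulas above (our function names):

```python
import numpy as np

# Coordinates x = (x1, x2, x3, x4, x5) with signature (+, +, +, +, -).
def mink(x, y):
    return x[:4] @ y[:4] - x[4] * y[4]

E0   = 0.5 * np.array([0, 0, 0, -1, 1.0])   # e_0   = (e5 - e4)/2
EINF = 0.5 * np.array([0, 0, 0,  1, 1.0])   # e_inf = (e4 + e5)/2

def iota(c, r2):
    """Representative of iota((c, r2)) = [c + e_0 + (|c|^2 - r2) e_inf]."""
    c = np.asarray(c, dtype=float)
    x = np.zeros(5)
    x[:3] = c
    return x + E0 + (c @ c - r2) * EINF

# Sanity checks: <iota(s), iota(s)> = r^2, and two spheres intersect
# orthogonally iff their lifts are Minkowski-orthogonal (Theorem 20).
s1 = iota([0, 0, 0], 1.0)                  # unit sphere
s2 = iota([np.sqrt(2), 0, 0], 1.0)         # |c1 - c2|^2 = 2 = r1^2 + r2^2
assert abs(mink(s1, s1) - 1.0) < 1e-12
assert abs(mink(s2, s2) - 1.0) < 1e-12
assert abs(mink(s1, s2)) < 1e-12
```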
Proof of Theorem 19. As Möbius transformations in PR^{4,1} are given by orthogonal transformations of R^{4,1}, they preserve both orthogonality and k-dimensional subspaces. Thus pseudo-principal nets are mapped to pseudo-principal nets in PR^{4,1}, and by Theorem 22 this translates to principal nets in R³ as well.

Remark 23. In classical differential geometry, a principal net f can be characterized by the fact that its lift to the light cone f̂ = f + e₀ + ⟨f, f⟩ e_∞ is a conjugate net. The mapping ι(·) is a natural discretization of f ↦ f̂, as ι(s) converges to f̂ if the radius of the sphere s with center f converges to zero. Like in the classical theory, ι(s) is a conjugate net. However, ι(s) reveals even more structure, namely the orthogonality of spheres, that cannot be observed in the limit anymore.

5.2. A projective point of view. It is enlightening to also study the embedding of the sphere congruence into PR^{4,1} from a more geometric perspective. The mapping ι can be seen as stereographically projecting a sphere s to the unit sphere S³ and further mapping the image s′ ⊆ S³ to its polar point p = ι(s) with respect to S³, compare Figure 8, right. The polar point p is the apex of the cone that touches S³ along s′. The polar point of any sphere s′₁ ⊆ S³ that intersects s′ orthogonally lies in the polar hyperplane of p and is thus conjugate to p. Hence, the diagonals of the quadrilateral ι(s), ι(s_1), ι(s_12), ι(s_2) are not only orthogonal but conjugate with respect to S³. The projective approach also gives meaning to the vertices of f in the projective model: they are the images of ι(s) under the central projection R⁴ → R³ through the north pole of S³.

Remark 24. The unique sphere with center ι(s) that intersects S³ orthogonally intersects S³ along s′. Hence, the vertices ι(s_f) define a unique sphere congruence S of three-dimensional spheres, where every sphere intersects its neighbors and also S³ orthogonally.

Remark 25. This geometric approach further allows us to generalize orthogonal sphere congruences to non-Euclidean geometry by replacing the stereographic projection from S³ to R³ by a central projection ψ : S³ → R³. A sphere congruence on S³ conjugate with respect to S³ gets mapped to a congruence of non-Euclidean spheres. These non-Euclidean spheres intersect in directions conjugate with respect to ψ(S³)*, the contour quadric of ψ(S³), compare Figure 10 and Lemma 55 in the Appendix.

5.3. A Gauß image for principal nets. As mentioned in Remark 15, we can find an alternative definition of a Gauß image making use of the polarity properties of principal nets. This alternative is particularly interesting in connection with the minimal surfaces described in Section 7.1.

Definition 26. If f is a net with principal checkerboard pattern c_f, then n is a principal Gauß image of f and c_f if the edges of c_f are parallel to the edges of c_n and every sphere of s_n intersects the unit sphere orthogonally.
The principal Gauß image n of f from Definition 26 can be seen as a parallel net of f on the unit sphere. The parallelism can be observed in the corresponding checkerboard patterns c_f and c_n, while the connection to the unit sphere can be observed in the Möbius representation s_n: instead of requiring the vertices to lie exactly on the unit sphere, we require their corresponding spheres to intersect the unit sphere orthogonally. The principal Gauß image exists and is unique up to the choice of an initial vertex, as the following argument shows.

Proof. To show the polarity of diagonals we consider a quadrilateral of four spheres (s, s_1, s_12, s_2) with centers (n, n_1, n_12, n_2) that intersect S² orthogonally. Additionally, every sphere intersects its neighbors orthogonally. The centers of all spheres that intersect both S² and s orthogonally lie on a plane that contains the circle S² ∩ s. This plane is nothing but the polar plane of n. The same argument goes for n_12, and thus the diagonals (n_1, n_2) and (n, n_12) lie on conjugate lines.

From the polarity, the uniqueness follows immediately. Let us fix one vertex n(k, l) of n. Due to the parallelism of checkerboard patterns, we know the directions of the diagonals emanating from n(k, l). The four corresponding polar lines all lie in the polar plane of n(k, l), and their intersection points determine the neighbors of n(k, l). Thus, the initial vertex n(k, l) corresponding to f(k, l) needs to be chosen on a line orthogonal to W_f. Note that polar lines are orthogonal, and thus the parallelism is preserved. As polarity is a symmetric relation, this process can be continued over the entire net. Now we can choose the initial radius of the sphere s(k, l) at the vertex n(k, l) such that it intersects S² orthogonally. The neighboring spheres of s(k, l) have their centers in the plane of all centers of spheres that intersect s(k, l) and S² orthogonally. Hence all radii can be chosen such that the orthogonal intersection with both all neighbors and the unit sphere is met. Hence the net n so constructed is indeed the principal Gauß image of f.

Remark 28. We could also use the principal Gauß image in Definition 7 of the shape operator. This only works for principal nets, but it allows us to drop the orthogonal projection P_N. Moreover, this approach fits the theory of minimal surfaces very well, as we will discuss in Section 7.1.

6. Koenigs nets

In [4], Adam Doliwa defined discrete Koenigs nets as those conjugate nets where, for every quadrilateral, the six focal points lie on a common conic section, the so-called conic of Koenigs. We apply the same definition to the checkerboard pattern c_f instead of the control net f. This adaptation proves to be very useful, as we can naturally dualize checkerboard patterns. Analogous to the smooth theory, such a dual checkerboard pattern exists if and only if c_f is a Koenigs net. Even though the definition of Koenigs nets is based on checkerboard patterns, we find that the class of Koenigs nets is invariant under projective transformations applied to the vertices of the corresponding control nets. Again in [4], Adam Doliwa defined discrete analogs of the so-called Laplace invariants of a conjugate net. These projective invariants appear, in a slightly adapted way, in the checkerboard approach as well. However, it is only in this setting that Koenigs nets can be characterized as exactly those nets that have equal Laplace invariants, analogously to the smooth theory.

6.1. Characterization of Koenigs nets. The discretization in both this paper and [4] is based on the smooth characterization of Koenigs nets that can be found in [9].
Definition 29. Let c be a conjugate checkerboard pattern. For the edge (c, c_i) we denote its supporting line by ℓ(c, c_i). We call the checkerboard pattern c a Koenigs checkerboard pattern if for every first order face (c, c_1, c_12, c_2) the six points p₁, …, p₆ in which the supporting lines of neighboring edges intersect are all different and lie on a common conic section, see Figure 11.

Remark 30. Since in Definition 29 the points p₁ and p₂ are always at infinity, we know that the conic of Koenigs is always a hyperbola.

Analogous to [2], the Koenigs property is equivalent to the existence of a closed multiplicative one-form on the edges of the checkerboard pattern. The Koenigs nets defined in this way are a special instance of a class mentioned in [1, p. 79], where a result similar to Theorem 32 is presented.

Definition 31 (Multiplicative one-form). Let g be a net with planar quadrilaterals. Let further p = ℓ(g, g_1) ∩ ℓ(g_2, g_12) and p′ = ℓ(g, g_1) ∩ ℓ(g_2̄, g_12̄), see Figure 12. Then the multiplicative one-form q along the edge (g, g_1) is defined as the cross-ratio of the four points g, g_1, p and p′:
q(g, g_1) := cr(g, g_1, p, p′) = ((g − p)(g_1 − p′)) / ((g_1 − p)(g − p′)),
where the quotients are taken along the common supporting line.

Theorem 32. Let c be a conjugate checkerboard pattern such that the six points p₁, …, p₆ from Definition 29 are all distinct. Let further q be the multiplicative one-form from Definition 31 defined on the edges of c. Then q is closed if and only if c is Koenigs.

Proof. This theorem can be proven by introducing a projective coordinate system followed by lengthy computations that can be found in detail in the Appendix.

The multiplicative one-form can also be formulated via the vertices of the control net, compare Figure 13. This gives rise to the following lemma and definition.

Lemma and Definition 34. Consider the setting of Figure 14 with the first order face (c_2, c_12, c_122, c_22). The product q(c_2, c_12) q(c_122, c_22) is a projective invariant of the control net. It is called Laplace invariant and can be expressed via the control net as the cross ratio cr(f_1, f_2, P, Q), compare Figure 14. To every face of the control net we can associate two Laplace invariants.

Theorem 35. Let c_f be a conjugate checkerboard pattern with control net f such that the six points p₁, …, p₆ from Definition 29 are all distinct. Then c_f is Koenigs if and only if the two Laplace invariants defined in each face of the control net are equal.

Proof. The two Laplace invariants of a face of the control net are equal if and only if the multiplicative one-form defined on the edges of the inscribed first order face is closed. Hence the statement follows from Theorem 32.

Remark 36. There are special cases where not all points p₁, …, p₆ are distinct but the Laplace invariants are still equal. Those cases will turn out to be dualizable as well, so it makes sense to consider these nets to be Koenigs nets too.

Remark 37. It is worth noticing that Theorem 35 is independent of the choice of the control net. So if a checkerboard pattern is Koenigs, every associated control net has equal Laplace invariants.
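Definition 29 amounts to a rank condition on a 6x6 linear system (the same system used in the appendix proof of Theorem 32). A minimal sketch in homogeneous coordinates, so that the two points at infinity from Remark 30 need no special treatment (our function name):

```python
import numpy as np

def on_common_conic(points, tol=1e-9):
    """Test whether six homogeneous points (rows of a 6x3 array) lie on
    a common conic A x^2 + B xy + C y^2 + D xz + E yz + F z^2 = 0:
    the 6x6 coefficient matrix must have a nontrivial kernel, i.e. a
    vanishing determinant."""
    P = np.asarray(points, dtype=float)
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    M = np.column_stack([x * x, x * y, y * y, x * z, y * z, z * z])
    scale = max(1.0, float(np.abs(M).max()) ** 6)
    return abs(np.linalg.det(M)) <= tol * scale
```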
Corollary 38. Koenigs checkerboard patterns are mapped to Koenigs checkerboard patterns under projective transformations applied to the vertices of the control nets.

Proof. The Laplace invariants are defined as cross ratios of vertices and intersection points of lines of the control net. Hence they are invariant under projective transformations, and so the property of equal invariants is preserved as well.

Remark 39. Discrete Laplace invariants are defined for Koenigs nets in [4, p. 5] in a similar fashion. The benefit of the checkerboard pattern approach is that now the Koenigs nets can be characterized as "nets with equal invariants", compare Theorem 35, as one would expect coming from the smooth theory.

6.2. Dualization.

Definition 40. Let c be a checkerboard pattern. We call c′ a dual checkerboard pattern of c if it is edgewise parallel and corresponding first order faces are similar but have reversed orientation. If such a dual checkerboard pattern c′ exists, we call c dualizable.

Figure 14. The product q(c_2, c_12) q(c_122, c_22) equals the cross ratio cr(f_1, f_2, P, Q) and is thus invariant under projective transformations applied to the control net.

In analogy to the smooth case we find that the dualizable checkerboard patterns are precisely the Koenigs checkerboard patterns. The following theorem holds.

Theorem 41. Let c be a conjugate checkerboard pattern. We introduce the following local notation in the face patch of a given first order face, see Figure 15:
• Let a and b be the lengths of the edges δ_v f and δ_u f of the central first order face.
• We enumerate the exterior first order faces counterclockwise and denote their edge lengths by a_i and b_i accordingly.
• For every second order face in the patch we denote its interior angles by α_i, β_i, γ_i and δ_i in counterclockwise order.
Let r_i = a_i / b_i be the ratio of edge lengths for each first order face. If no two of the six points p₁, …, p₆ from Definition 29 are equal, the following conditions are equivalent:
(a) ∏_{i=1}^{4} (sin(γ_i) / sin(β_i)) · r_i^{(−1)^i} = 1.
(b) There exists a nontrivial conformal Combescure transformation of c. This means that a checkerboard pattern with parallel edges exists where corresponding first order faces differ only by a similarity transformation. If one nontrivial conformal Combescure transformation exists, an entire two-parameter family of such transformations exists.
(c) c is dualizable.
(d) c is a Koenigs checkerboard pattern.

Proof. First we show that (a) ⟺ (b). If j is even, the edge lengths a_{j−1} and a_j are related by a linear formula with coefficients
k_j := sin(γ_j) / sin(β_j)  and  c_j := sin(γ_j + δ_j) / sin(β_j).
Hence we can compute a_1 from the given angles and ratios if and only if the closing condition, Equation (9), holds. Consequently, a nontrivial parallel net with the same ratios r and r_i exists if and only if (a) holds.

Next we show that (a) ⟺ (c). When we dualize the net f, all angles are replaced by their respective complements, i.e., each interior angle is replaced by π minus that angle. Hence the coefficients k_i are invariant under dualization while the coefficients c_i change sign. If we denote the transformed edge lengths by a*_i, a* and b*_i, b* respectively, then the transformed relations read as before with c_j replaced by −c_j, and the closing condition changes accordingly. Again we find that a*_1 can be determined from this equation if Equation (9) holds. However, comparing the potential formulas for a_1 and a*_1 we find that a*_1 = −a_1. As no negative edge lengths can exist, we conclude that a*_1 exists only if (a) holds. On the other hand, if (a) holds, we can construct a dual net for any value a*_1, implying (c).

Next we show that (a) ⟺ (d). To do so we use the inscribed angle theorem (see Theorem 56). Let k(ℓ(p_i, p_j)) denote the slope of the line ℓ(p_i, p_j) with respect to a coordinate system aligned with the asymptotes of the hyperbola, compare Theorem 56. These slopes can be expressed in terms of the angles of the patch. Note that sin(γ_1 + δ_1) = 0 is equivalent to ℓ(c_1̄, c_1̄2) ∥ ℓ(c, c_2) and thus is equivalent to p₅ = p₂. So if the points p₁, …, p₆ are all distinct, the denominators in these expressions are all nonzero. Computing the quotients yields terms of the form
sin(γ_1) sin(γ_2) sin(δ_3 + γ_3) / (sin(β_2) sin(β_3) sin(γ_1 + δ_1)).
By Theorem 56 the points p₁, …
…, p₆ lie on a common hyperbola if and only if the corresponding quotients of slopes agree, which is equivalent to (a). This concludes the proof.

Remark 42. If we find p_i = p_j for some i ≠ j, everything in the proof of Theorem 41 still holds except for the application of Theorem 56. So in such a case we still find that (a) ⟺ (b) ⟺ (c).

Corollary 43. Let c_f be a conjugate checkerboard pattern with control net f. Then c_f is dualizable if and only if each two Laplace invariants defined in the faces of f are equal.

Proof. If the six points p₁, …, p₆ from Definition 29 are distinct, the statement follows from Theorem 41: condition (a) in Theorem 41 is then equivalent to the multiplicative one-form q being closed. However, these terms depend continuously on the vertices of the checkerboard pattern. Hence, any face patch on which q is closed can be approximated by a sequence of dualizable face patches where p₁, …, p₆ are distinct. Since condition (a) is preserved in the limit, so is the existence of a dual.

Remark 44. For a given Koenigs checkerboard pattern, there is a two-parameter family of dual checkerboard patterns that differ in the scaling of corresponding first order faces. By choosing the initial scaling factors α₁ and α₂ of two adjacent first order faces, all other scaling factors can be computed recursively from the closing conditions around the second order faces, where the a_i are the oriented edges of the corresponding first order faces, see Figure 16, left. This permits a stable dualization algorithm.

The following lemma provides an easy way to generate Koenigs nets in PR².

Lemma 45. Let M and N be two commuting projective transformations PR² → PR² and let P ∈ PR². Then the net f defined by f(k, l) = M^k N^l (P) is the control net of a Koenigs checkerboard pattern in PR².

Proof. We show that the condition of Theorem 35 is met in the quadrilateral (f, f_1, f_12, f_2), compare Figure 16, right. Let F : Z² → R³ be the net of homogeneous coordinates of the vertices of f, so that F(k, l) = M^k N^l P for matrix representations M and N of M and N in homogeneous coordinates. The intersection points p, q and p′, q′ entering the two Laplace invariants can then be expressed through these matrix products, and since M and N commute, the two cross ratios coincide. So we see that the two Laplace invariants cr(f_1, f_2, p, q) and cr(f, f_12, p′, q′) are equal.
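Lemma 45 gives a practical recipe for producing Koenigs control nets; a minimal sketch (our function name, with two commuting diagonal scalings as illustrative data):

```python
import numpy as np

def koenigs_control_net(M, N, P, K=5, L=5):
    """Generate a control net f(k, l) = M^k N^l P (Lemma 45) in
    homogeneous coordinates of PR^2. M and N must commute for the
    Koenigs property. Returns an array of shape (K, L, 3)."""
    assert np.allclose(M @ N, N @ M), "M and N must commute"
    F = np.empty((K, L, 3))
    Mk = np.eye(3)
    for k in range(K):
        v = Mk @ P
        for l in range(L):
            F[k, l] = v
            v = N @ v
        Mk = M @ Mk
    return F

# Example: two commuting scalings of the projective plane.
M = np.diag([1.2, 1.0, 1.0])
N = np.diag([1.0, 1.3, 1.0])
F = koenigs_control_net(M, N, np.array([1.0, 1.0, 1.0]))
points = F[..., :2] / F[..., 2:]   # dehomogenize for plotting
```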
7. Isothermic nets

Discrete isothermic nets can now be defined as principal nets that are also Koenigs nets. Analogous to the smooth case or to other discrete approaches [2], we find that the class of discrete isothermic nets is invariant under dualizations and Möbius transformations. This permits a construction of discrete minimal surfaces and their Goursat transformations, as will be described later on.

Definition 46. We call a checkerboard pattern c isothermic if it is principal and Koenigs.

As orthogonal first order faces are mapped to orthogonal faces under dualization, the next corollary follows immediately from Theorem 41. See Figure 17 for an illustration.

Corollary 47. Isothermic checkerboard patterns are dualizable. Their dual is again an isothermic checkerboard pattern.

Theorem 48. Isothermic checkerboard patterns are mapped to isothermic checkerboard patterns under Möbius transformations, compare Figure 18.

The proof will be a direct consequence of Lemma 50 and is thus postponed for now. In order to prove Theorem 48, we study isothermic nets again in the space PR^{4,1} under the embedding ι. We have already defined pseudo-principal nets in PR^{4,1} and can now extend them to pseudo-isothermic nets.

Definition 49. We call a net g in PR^{4,1} pseudo-isothermic if it is pseudo-principal and the two Laplace invariants for each face are equal.

It turns out that the lift ι(f) of an isothermic net f in R³ is a pseudo-isothermic net in PR^{4,1}, as the following lemma shows.

Lemma 50. Let f be the control net of an isothermic checkerboard pattern and ι(f) be its lift to PR^{4,1}. The Laplace invariants of corresponding faces of f and ι(f) are equal.

Proof. First note that ι(f) has a conjugate checkerboard pattern and thus the Laplace invariants are well defined. Not only do the supporting lines ℓ(f_1̄, f_2̄) and ℓ(f_1, f_2) intersect, but so do the corresponding pencils of spheres, compare Figure 19. However, we know that the first three components under the lift ι are the same as the original centers of the spheres, and when we compute the cross ratio of points lying on a line it is sufficient to use just one coordinate. So it follows that the Laplace invariants remain unchanged under ι.

From Lemma 50 the proof of Theorem 48 follows immediately.

Proof of Theorem 48. Every Möbius transformation can be seen as a projective transformation in PR^{4,1} that preserves the inner product. Obviously these transformations preserve the cross ratio, and since ι also preserves the Laplace invariants, we can conclude that not only conjugacy and orthogonality but also the Koenigs property is preserved under Möbius transformations.

Figure 19. The idea behind the proof of Lemma 50: not only do the lines ℓ(f_1, f_2) and ℓ(f_1̄, f_2̄) intersect in P, but also the corresponding pencils of spheres intersect in a sphere with center at P. This means that there is a sphere with center at P that intersects both the sphere with center at f and the sphere with center at f_12 orthogonally.

7.1. Minimal surfaces. Minimal surfaces can be constructed by dualizing an isothermic net on the unit sphere, since the theory of minimal surfaces tells us that for any minimal surface its dual and its Gauß image are equal. With the Möbius transformation and dualization at hand we can reproduce this construction in the discrete setting.

Definition 51. Let f be the control net of an isothermic checkerboard pattern c_f. We call c_f minimal if it has a dual checkerboard pattern c′ that is also the checkerboard pattern of a principal Gauß image of f in the sense of Definition 26.

Definition 52. Let f and f̃ be control nets of minimal checkerboard patterns. They are related by a Goursat transformation if their principal Gauß images are related by a Möbius transformation.

Definition 53. We say that a checkerboard pattern c_f is on the unit sphere if there is a Möbius representation s_f where every sphere intersects the unit sphere orthogonally.

Corollary 54. Let c_n be an isothermic checkerboard pattern on the unit sphere. The dual checkerboard pattern c′_n is a minimal checkerboard pattern and n is its principal Gauß image. If n is used to compute the discrete shape operator of c′_n, the mean curvature of c′_n is zero.

Proof. The first statement follows directly from the definition of minimal checkerboard patterns. The principal curvatures κ₁ and κ₂ are just the oriented scaling factors between edges of c_n and c′_n. If the Gauß image is the dual net at the same time, the relation κ₁ = −κ₂ holds.
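With the usual identifications H = (κ₁ + κ₂)/2 and K = κ₁κ₂ (consistent with the Steiner reconstruction in Section 4), the last step of the proof of Corollary 54 is immediate:

```latex
\kappa_1 = -\kappa_2
\;\Longrightarrow\;
H = \tfrac{1}{2}\left(\kappa_1 + \kappa_2\right) = 0,
\qquad
K = \kappa_1 \kappa_2 = -\kappa_1^{2} \le 0,
```

matching the smooth fact that minimal surfaces have vanishing mean curvature and nonpositive Gaußian curvature.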
Conclusion

In this paper we presented a novel discretization approach based on the checkerboard pattern inscribed in a quadrilateral net. On the one hand, this allows a discrete curvature theory (Definition 7) that is compatible with discrete offsets (Formula (3)), similar to [11,5]. On the other hand, this approach allows a new discretization of conjugate nets, orthogonal nets and principal nets (Definition 3). We showed several properties of these nets, most notably that principal nets are consistent with the curvature theory (Corollary 9) and are invariant under Möbius transformations (Theorem 19) applied to the corresponding sphere congruence introduced in [15].

Furthermore, the checkerboard pattern can be used to define discrete Koenigs nets using the conic of Koenigs (Definition 29), analogous to [4]. We find that discrete Koenigs nets are exactly those nets that are dualizable (Theorem 41), which links the approach taken in [4] to the approach of [2]. Another characterization of discrete Koenigs nets found in this paper is the existence of a closed multiplicative one-form defined on the edges of a checkerboard pattern (Theorem 32), similar to [2]. A new characterization of Koenigs nets is the equality of Laplace invariants (Theorem 35), which fits the original definition of these nets in classical differential geometry. From the characterization via equal Laplace invariants we could deduce that the class of discrete Koenigs nets is invariant under projective transformations (Corollary 38).

Despite the discretization ideas of Koenigs nets and principal nets being quite different, they work well together for isothermic nets, which are defined as principal Koenigs nets. The Koenigs property is preserved under Möbius transformations (Theorem 48), and principality is preserved under dualization (Corollary 47). Consequently, we can apply Möbius transformations and dualizations to discrete isothermic nets. This allows a construction of discrete minimal surfaces from an isothermic net in the plane: first we map it to the unit sphere with a Möbius transformation, where it can be interpreted as the Gauß image of a minimal surface; then it is dualized to gain the corresponding minimal surface from its Gauß image, compare Figures 20 and 21.

Appendix A. Proof of Theorem 32

Proof. First we show that q is closed around every second order face, i.e., multiplying the contributions of all edges of a second order face in counterclockwise order yields 1. This follows by applying Menelaus' Theorem to the triangle (c, c_1, q) and the triangle (c_2, c_12, q), compare Figure 23.

Next we show that q is closed on the edges of every first order face B = (c, c_1, c_12, c_2) if and only if the checkerboard pattern is a Koenigs net. As the multiplicative one-form q is projectively invariant, we choose a projective coordinate system such that c = (0, 0, 1), c_1 = (1, 0, 1), c_2 = (0, 1, 1) and c_12 = (1, 1, 1). The intersection points p₁, …, p₆ then have coordinates depending on parameters s, t, u, v ∈ R. Those six points lie on a common conic section if the system of equations
A x_i² + B x_i y_i + C y_i² + D x_i z_i + E y_i z_i + F z_i² = 0
has a nontrivial solution. Here x_i, y_i and z_i stand for the three homogeneous coordinates of p_i. We compute the determinant of the matrix of this system of equations; a nontrivial solution exists if and only if the determinant is zero. We can exclude the cases t = 0 and u = 0 since no two of the p_i coincide. We find that the determinant is zero if and only if
s − st + vst = v − vu + vus.
Now we compute the multiplicative one-form along the edges of the quadrilateral.
We find p₃ = c_1 + (t − 1)c and p₁ = c_1 − c. So if we use c and c_1 as the basis for the line ℓ(c, c_1), we obtain the cross ratio q(c, c_1) as a rational expression in t. For the next cross ratio we express the points p₂ and p₆ via c_1 and c_12, obtaining p₂ = −c_1 + c_12 and p₆ = (1 − v)c_1 + v c_12, so that the cross ratio q(c_1, c_12) becomes a rational expression in v. Treating the remaining two edges in the same way and multiplying all four contributions, a direct computation shows that q is closed if and only if
s − st + vst = v − vu + vus.
Thus the existence of the conic of Koenigs is equivalent to q being closed.

Appendix B. Some Theorems

The following lemma is known as trace polarity of a quadric. For a detailed description of trace polarity in the German language see [3]. However, since a proof in English of the following lemma is hard to find, we give a version of a proof suitable to our setting.

Lemma 55. Let s₁ and s₂ be two orthogonally intersecting spheres on S³, i.e., intersections of S³ with conjugate hyperplanes h₁ and h₂. If ψ is the central projection from a point Z ∈ PR⁴ onto a hyperplane ζ ≅ PR³, then in this hyperplane ψ(s₁) and ψ(s₂) intersect in conjugate tangent planes with respect to the contour quadric ψ(S³)* of ψ(S³). This means the corresponding tangent planes at the intersection points of ψ(s₁) and ψ(s₂) are orthogonal with respect to the inner product induced by ψ(S³)*.

Proof. For the proof we use homogeneous coordinates of PR⁴. We choose a basis (b₀, …, b₄) such that the center of projection is Z = b₀. Let Q be the matrix such that the homogeneous coordinates of all points in S³ are given by {x ∈ R⁵ ∖ {0} : xᵀQx = 0}. We can assume without loss of generality that Q is a diagonal matrix. Since projections onto different planes are projectively equivalent, we can further assume that ζ is the polar hyperplane of Z. Thus ζ is given by the equation x₀ = 0. We introduce the block notation Q = diag(q₀, Q̃). Since ζ is the polar hyperplane of Z, the contour quadric ψ(S³)* is the intersection of S³ with ζ. In homogeneous coordinates it is given by {(0, x̃) ∈ R⁵ : x̃ᵀQ̃x̃ = 0}.

Let P ∈ s₁ ∩ s₂ be a point in the intersection of s₁ and s₂ and let τ be the tangent hyperplane to S³ in P. Then the tangent planes to s₁ and s₂ are given by τ₁ = h₁ ∩ τ and τ₂ = h₂ ∩ τ. Note that two hyperplanes are conjugate with respect to a quadric if and only if each contains the polar point of the other. Let H₁ ∈ h₂ ∩ τ be the polar point of h₁ and let H₂ ∈ h₁ ∩ τ be the polar point of h₂. The line g₁ := ℓ(P, H₂) lies in τ₁, and analogously the line g₂ := ℓ(P, H₁) lies in τ₂.

We now show the orthogonality of ψ(τ₁) and ψ(τ₂) with respect to Q̃. In order to facilitate the notation we identify points with their projective coordinates. As the projection ψ just sets the first coordinate to zero, and due to the diagonal block form of Q, polarity relations between points and planes in ζ are preserved under ψ. Hence, the point G₁ := ψ(H₂) is the polar point of ψ(τ₂) with respect to ψ(S³)* in ζ and is contained in ψ(τ₁), and vice versa. This shows the conjugacy of the tangent planes ψ(τ₁) and ψ(τ₂) of the projected spheres s₁ and s₂.

Theorem 56 (Inscribed Angle Theorem for Hyperbolas). Consider R² and coordinates (x, y) with respect to a basis. The slope of a vector (a, b) is defined as b/a for a ≠ 0.
Four points p_i = (x_i, y_i) ∈ R² with x_j ≠ x_k and y_j ≠ y_k for j ≠ k lie on a hyperbola with equation y = c/x if and only if the quotient of slopes of ℓ(p₄, p₂) and ℓ(p₄, p₁) equals the quotient of slopes of ℓ(p₃, p₂) and ℓ(p₃, p₁), compare Figure 25.
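Theorem 56 gives an immediately computable test; a small sketch in coordinates aligned with the asymptotes (our function name):

```python
def on_rectangular_hyperbola(p1, p2, p3, p4, tol=1e-9):
    """Inscribed angle theorem for hyperbolas (Theorem 56): the four
    points lie on a hyperbola y = c/x iff the quotient of slopes of
    (p4, p2) and (p4, p1) equals that of (p3, p2) and (p3, p1)."""
    slope = lambda a, b: (a[1] - b[1]) / (a[0] - b[0])
    lhs = slope(p4, p2) / slope(p4, p1)
    rhs = slope(p3, p2) / slope(p3, p1)
    return abs(lhs - rhs) <= tol * max(1.0, abs(lhs))

# Points on y = 2/x:
pts = [(x, 2.0 / x) for x in (1.0, 2.0, 4.0, -1.0)]
assert on_rectangular_hyperbola(*pts)
```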
Figure 9. In the first row we see, from left to right, the control net, the checkerboard pattern and a Möbius representation of a principal net on the torus. The second row shows the image of the first row after a Möbius transformation is applied to the Möbius representation.

Figure 10. Both images show an orthogonal net of circles on S^2 which by a central projection is mapped to a net of conics. The net of conics is an h-orthogonal net of h-circles in the Cayley-Klein model of hyperbolic geometry.

Figure 11. Definition of a Koenigs net. The supporting lines of neighboring edges in the checkerboard pattern intersect in the six points p_1, ..., p_6. If all of them lie on a common conic section the checkerboard pattern is Koenigs. The points p_1 and p_2 are always at infinity, here indicated by the dotted line, so the conic section is a hyperbola.

(a) $\prod_{i=1}^{4} \frac{\sin(\gamma_i)}{\sin(\beta_i)}\, r_i^{(-1)^i} = 1$.
(b) There exists a nontrivial conformal Combescure transformation of c. This means that a checkerboard pattern with parallel edges exists where corresponding first-order faces differ only by a similarity transformation. If one non-trivial conformal Combescure transformation exists, an entire two-parameter family of such transformations exists.
(c) c is dualizable.
(d) c is a Koenigs checkerboard pattern.

Figure 16. (a): The edges a_i have to close in the initial net as well as in the dualized net. From this condition the scaling factors that guide the dualization can be computed. (b): The setting of Lemma 45.

Figure 17. An isothermic checkerboard pattern and its dual with the corresponding conics of Koenigs. The points on the hyperbolas are the points of intersecting supporting lines of neighboring edges.

Figure 18. An isothermic checkerboard pattern and its Möbius transform together with the corresponding conics of Koenigs. The figure features non-convex quads, as the examples were constructed in such a way that the points of intersecting lines are all close to the checkerboard pattern.

Figure 20. Enneper surface: In the top row we see, from left to right, the Weierstrass data of the Enneper surface, the Gauß image of the Enneper surface and the Enneper surface itself. In the second row we see the checkerboard patterns of the corresponding nets.

Figure 21. Catenoid: In the top row we see, from left to right, the Weierstrass data of the catenoid, the Gauß image of the catenoid and the catenoid itself. In the second row we see the checkerboard patterns of the corresponding nets.

Figure 22. A Goursat transform of a periodically extended catenoid.

Figure 23. The multiplicative one-form is automatically closed around every second-order face as a consequence of Menelaus' Theorem.
Figure 24. The two-dimensional case of Lemma 55. The circles s_1 and s_2 on the unit sphere intersect orthogonally. The ellipses are their projections through the point Z. The gray ellipse ψ(S^2)^* is the contour of the unit sphere under the same projection. We see that the ellipses ψ(s_1) and ψ(s_2) intersect in conjugate lines with respect to ψ(S^2)^*, as the polar point ψ(G_2) of the tangent ψ(τ_2) is contained in ψ(τ_1). The preimage of this polar point, drawn in beige, is the intersection of the corresponding tangent line to the unit sphere with the polar plane of the center of projection. The gray circle on the unit sphere is the preimage of the contour quadric, i.e., the intersection of the unit sphere with the polar plane of Z.

Figure 25. Inscribed angle theorem for hyperbolas. The four points p_1, p_2, p_3 and p_4 lie on a rectangular hyperbola if and only if the quotient of slopes of p_4p_2 and p_4p_1 equals that of p_3p_2 and p_3p_1.
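Theorem 56 is easy to probe numerically. The following short sketch (in Python; the constant c and the sample x-coordinates are arbitrary illustrative choices, not from the text) checks the forward direction: for four points on y = c/x the chord through p_i and p_j has slope −c/(x_i x_j), so both quotients of slopes reduce to x_1/x_2 and therefore agree.

# Numerical sanity check of the forward direction of Theorem 56.
def slope(p, q):
    return (q[1] - p[1]) / (q[0] - p[0])

c = 2.5
xs = [0.5, 1.0, 3.0, -2.0]           # distinct, nonzero x-coordinates
p1, p2, p3, p4 = [(x, c / x) for x in xs]

q4 = slope(p4, p2) / slope(p4, p1)   # quotient of slopes at p4
q3 = slope(p3, p2) / slope(p3, p1)   # quotient of slopes at p3
assert abs(q4 - q3) < 1e-12 and abs(q4 - xs[0] / xs[1]) < 1e-12
print(q4, q3)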
2022-05-05T01:16:32.316Z
2022-05-04T00:00:00.000
{ "year": 2023, "sha1": "1fb4a4e64c0220eb67d5d4cd0c823da8c167a6d4", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00454-023-00558-1.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "1fb4a4e64c0220eb67d5d4cd0c823da8c167a6d4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
16485998
pes2o/s2orc
v3-fos-license
Breaking the log n barrier on rumor spreading
$O(\log n)$ rounds has been a well known upper bound for rumor spreading using push&pull in the random phone call model (i.e., uniform gossip in the complete graph). A matching lower bound of $\Omega(\log n)$ is also known for this special case. Under the assumption of this model, and with the natural addition that nodes can call a partner once they learn its address (e.g., its IP address), we present a new distributed, address-oblivious and robust algorithm that uses push&pull with pointer jumping to spread a rumor to all nodes in only $O(\sqrt{\log n})$ rounds, w.h.p. This algorithm can also cope with $F= O(n/2^{\sqrt{\log n}})$ node failures, in which case all but $O(F)$ nodes become informed within $O(\sqrt{\log n})$ rounds, w.h.p.

In each round, every node selects a communication partner according to a gossip algorithm (e.g., selecting a random neighbor). Once a partner is chosen, the node calls its partner and a limited amount of data is transferred between the partners, as defined by the gossip protocol. Three basic actions are considered in the literature: either the caller pushes information to its partner (push), pulls information from the partner (pull), or does both (push&pull). In the most basic information dissemination task, a token or rumor is placed arbitrarily in the network and we are interested in the number of rounds and message transmissions until all nodes in the network receive the rumor. The selection of the protocol can lead to significant differences in performance. Take, for example, the star graph: let nodes call a neighbor selected uniformly at random and assume the rumor is placed at one of the leaves. It is easy to see that both push and pull will require ω(n) rounds to complete the spreading of a single rumor, while push&pull will take only two rounds. Somewhat surprisingly, but by now well understood, randomized rumor spreading turned out to be very efficient in terms of time and message complexity while keeping robustness to failures [23,13]. In addition, these algorithms are very simple and distributed in nature, so it is clear why gossip protocols have gained popularity in recent years and have found many applications both in communication networks and social networks. To name a few examples: updating a database replicated at many sites [9,23], resource discovery [22], computation of aggregate information [24], multicast via network coding [8], membership services [19], or the spread of influence and gossip in social networks [25,6]. In this paper we consider the most basic scenario, the random phone call model [23], where the underlying network is the complete graph and nodes can call a random neighbor according to some given distribution. In addition, the model requires the algorithm to be distributed and address-oblivious: it cannot use the addresses of the current communication partners to determine its state (for an exact definition see Section 2). For example, this setting fits well with applications that require communication over the Internet, such as peer-to-peer protocols and database synchronization. A node can pick and call any (random or given) neighbor via its IP address, but it is desirable to keep the algorithm address-oblivious, since otherwise it may have critical points of failure. For example, agreeing beforehand on a leader to contact (by its IP address) is not an address-oblivious algorithm. Furthermore, such a protocol is also highly fragile, although it leads to efficient information spreading (as pointed out in the star graph example above).
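The star-graph example can be made concrete with a small synchronous simulation. The sketch below (Python; not from the paper) assumes that in push&pull the caller both pushes to and pulls from its partner, and that all exchanges in a round are based on start-of-round knowledge; under these assumptions push&pull finishes in exactly two rounds, while the one-sided protocols are much slower.

import random

def rounds_on_star(n, push, pull, seed=0):
    rng = random.Random(seed)
    informed = {1}                           # node 0 is the centre; the rumor starts at leaf 1
    t = 0
    while len(informed) < n:
        t += 1
        # every leaf calls the centre; the centre calls a uniform random leaf
        calls = [(v, 0) if v else (0, rng.randrange(1, n)) for v in range(n)]
        newly = set()
        for caller, callee in calls:
            if push and caller in informed:  # caller pushes the rumor
                newly.add(callee)
            if pull and callee in informed:  # caller pulls the rumor
                newly.add(caller)
        informed |= newly
    return t

n = 200
print("push&pull:", rounds_on_star(n, True, True))   # exactly 2 rounds
print("push only:", rounds_on_star(n, True, False))  # coupon collector at the centre
print("pull only:", rounds_on_star(n, False, True))  # centre must first call the informed leaf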
The random phone call model was thoroughly studied in the literature, starting with the work of Frieze and Grimmett [17] and followed by Pittel [33], who proved an upper bound of O(log n) rounds for push in the complete graph. Demers et al. [9] considered both push and pull as a simple and decentralized way to disseminate information in a network and studied their rate of progress. Finally, Karp et al. [23] gave a detailed analysis for this model. They used push&pull to optimize the message complexity and showed the robustness of the scheme. They proved that while using only push the communication overhead is Ω(n log n), their algorithm requires only O(n log log n) message transmissions with a running time of O(log n), even under arbitrary oblivious failures. Moreover, they proved that any address-oblivious algorithm (that selects neighbors uniformly at random) requires Ω(n log log n) message transmissions.

Our contribution

We consider the same assumptions as in the random phone call model: the algorithm needs to be distributed and address-oblivious, and it can select neighbors at random. In addition, we use the fact that, given the address of a node (e.g., its IP address), the caller can call that address directly. This slight addition leads to a significant improvement in the number of rounds, from O(log n) to O(√log n), but still keeps the algorithm robust. Furthermore, assume that a node may fail (at the beginning of, or while, the algorithm is executed) with probability O(1/2^{√log n}), independently. The main result of the paper is the following theorem: at the end of our algorithm, all but O(F) nodes are informed w.h.p., where F is the number of failed nodes (as described above). The algorithm has running time O(√log n) and produces a bit communication complexity of O(n(log^{3/2} n + b · log log n)), w.h.p., where b is the bit length of the message. Clearly, if there are no failures (i.e., F = 0), then all nodes become informed in the number of rounds given in Theorem 1. As mentioned, we inform all nodes in O(√log n) rounds vs. the O(log n) rounds achieved by the algorithm of Karp et al. Our message complexity is O(n√log n) compared to O(n log log n), and if the rumor is of bit length b = Ω(log^{3/2} n / log log n), the bit complexity of both algorithms is Ω(b · n log log n). Moreover, if there are Ω(n) messages to be distributed in the network, then the first term in the expression describing the bit communication complexity is amortized over the total number of message transmissions (cf. [23]), and we obtain the same communication overhead as in [23]. A few words on the basic idea of the algorithm are in order. In a nutshell, our approach has two phases: first we build an infrastructure, a virtual topology, that is efficient for push&pull; second, we perform a simple push&pull on the virtual topology. The running time is the combination of both these tasks. For example, constructing a random star would be preferable, since the second phase would then take only a constant number of rounds, but as it turns out the cost of the first phase is, in this case, too high. Interestingly, our algorithm results in balancing these two phases, where each task requires O(√log n) rounds. Instead of a star with a single leader, we build a virtual topology with about n/2^{√log n} randomly chosen leaders, where each leader is connected to about 2^{√log n} nodes we call connectors (a node is either a leader or a connector). Each connector is then linked to two leaders after a process of pointer jumping [28]. This simple 2-level hierarchy results in very efficient information spreading.
Leaders provide a fast pull mechanism, and connectors are essential for fast spreading among the leaders using push. Our approach was motivated by similar phenomena in social networks [16,2] (see the related work section for a more detailed description of these results).

Journal version update: Motivated by the conference version of this paper [1], Haeupler and Malkhi [21] improved our bound and presented an elegant algorithm that solves the problem we study here in O(log log n) rounds, together with a matching lower bound. Nevertheless, we think our work contributes to the understanding of the gossiping process and may be useful in extensions of the model to general graphs.

Preliminaries - Rumor Spreading

Let G(V, E) be an undirected graph, with V the set of nodes and E the set of edges. Let n = |V| and let N(v) denote the set of neighbors of a node v. Initially a single arbitrary node holds a rumor (i.e., a token) of size b bits; then the process of rumor spreading (or gossiping) progresses in synchronous rounds. At each round, each node v selects a single communication partner u ∈ N(v) from its neighbors and v calls u. The method by which v chooses u is called the gossip algorithm. The algorithm is called address-oblivious if v's state in round t does not depend on the addresses of its communication partners at time t. That is, any decision about whether, how and what to send in the current round is made before the current round. Nevertheless, v's state can still depend on the addresses of its communication partners from previous rounds [23]. Randomized gossip is perhaps the most basic address-oblivious algorithm; in particular, when the communication partners are selected uniformly at random the process is known as uniform gossip. A well-studied such case is the random phone call model [23], where G is the complete graph and u is selected u.a.r. from V \ {v}. Upon selecting a communication partner, the gossip protocol defines how and which information is transferred between v and u. Three basic options are considered to deliver information between communication partners: push, pull and push&pull. In push the calling node, v, sends a message to the called node u; in pull a message is only transferred the other way (if the called node, u, has something to send); and in push&pull each of the communication partners sends a message to the node at the other end of the edge. The content of the messages is defined by the protocol and can contain only the rumor (in the simplest case) or additional information like counters or state information (e.g., as in [23]). After selecting the graph (or graph model), the gossip algorithm and protocol, the main metrics of interest are the dissemination time and the message complexity, namely how many rounds and messages are needed until all vertices are informed (on average or with high probability), even under node failures. The bit complexity is also a metric of interest and counts the total number of bits sent during the dissemination time. This quantity is a bit more involved, since it depends also on b (the size of the rumor) and messages at different phases of the algorithm may have different sizes. Pointer jumping is a classical operation from parallel algorithm design [28], in which the destination of your pointer in the next round is whatever your current pointer points to. Our algorithm uses pointer jumping by sending the addresses (i.e., pointers) of previous communication partners to current partners (see Section 4 for a detailed description).
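To illustrate this primitive, the following sketch (Python; all names and parameter choices are illustrative) simulates one pointer-jumping sub-phase of the kind used later in the algorithm, assuming leaders are absorbing (a leader always answers with its own address): in every round each node learns its pointer's pointer, so pointers advance twice as far along their chains per round.

import math
import random

def rand_other(rng, v, n):
    u = rng.randrange(n - 1)        # uniform node other than v
    return u + (u >= v)

def pointer_jumping_subphase(n, leader_prob, rounds, seed=0):
    rng = random.Random(seed)
    is_leader = [rng.random() < leader_prob for _ in range(n)]
    # Round 1: every connector opens a channel to a uniformly random node;
    # leaders answer with their own address (modelled as self-pointers).
    ptr = [v if is_leader[v] else rand_other(rng, v, n) for v in range(n)]
    for _ in range(rounds - 1):
        # Pointer doubling: each node now points to its pointer's pointer.
        ptr = [ptr[ptr[v]] for v in range(n)]
    reached = sum(is_leader[ptr[v]] for v in range(n) if not is_leader[v])
    return reached, n - sum(is_leader)

n = 1 << 16
p = 2.0 ** -math.sqrt(math.log2(n))           # leader probability ~ 1/2^sqrt(log n)
rounds = int(3 * math.sqrt(math.log2(n)))     # ~ c * sqrt(log n) rounds
print(pointer_jumping_subphase(n, p, rounds)) # most connectors reach a leader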
Related Work

Besides the basic random phone call model, gossip algorithms and rumor spreading have been generalized in several different ways. The basic extension was to study uniform gossip (i.e., the called partner is selected uniformly at random from the neighbor list) on graphs other than the clique. Feige et al. [15] studied randomized broadcast in networks and extended the result of O(log n) rounds for push to different types of graphs, like hypercubes and random graph models. Following the work of Karp et al. [23], the push&pull protocol has been studied intensively, in particular in recent years, both to give tight bounds for general graphs and to understand its performance advantages on specific families of graphs. A lower bound of Ω(log n) for uniform gossip on the clique can be concluded from [35], which studies the sequential case. We are not aware of a lower bound for general, address-oblivious push&pull. Recently, Giakkoupis [18] proved an upper bound for general graphs as a function of the conductance φ of the graph, which is O(φ^{-1} log n) rounds. Since the conductance is at most a constant, this bound cannot lead to a value of o(log n), but it is tight for many graphs. Doerr et al. [10] studied information spreading on a known model of social networks and showed for the first time an upper bound which is o(log n) for a family of natural graphs. They proved that while uniform gossip with push&pull results in Θ(log n) rounds in preferential attachment graphs, a slightly improved version where nodes are not allowed to repeat their last call results in a spreading time of O(log n / log log n). A similar idea was previously used in [14,3] to reduce the message complexity of push&pull in random graphs. Fountoulakis et al. [16] considered spreading a rumor to all but a small ε-fraction of the population. For random power law graphs [7] they proved that push&pull informs all but an ε-fraction of the nodes in O(log log n) rounds. Their proof relies on the existence of many connectors (i.e., nodes with low degree connected to high-degree nodes) which amplify the spread of the rumor between high-degree nodes, and this influenced our approach; in some sense our algorithm tries to imitate the structure of the social network they studied. Another line of research was to study push&pull (as well as push and pull separately) outside the uniform gossip model. Censor-Hillel et al. [5] gave an algorithm for all-to-all dissemination in arbitrary graphs which eliminates the dependency on the conductance. For unlimited message sizes (essentially you can send everything you know), their randomized algorithm informs all nodes in O(D + polylog(n)) rounds, where D is the graph diameter; clearly this is tight for many graphs. Quasirandom rumor spreading was first proposed by Doerr et al. in [11,12] and shown to outperform the randomized algorithms in some cases (see also [4] for a study of the message complexity of quasirandom rumor spreading). Most recently, Haeupler [20] proposed a completely deterministic algorithm that spreads a rumor within 2(D + log n) log n rounds (but also requires unlimited message size). In a somewhat different model (but similar to ours), where nodes can contact any address as soon as they learn about it, Harchol-Balter et al. [22] considered the problem of resource discovery (i.e., learning about all nodes in the graph) starting from an arbitrary graph. They used a form of one-hop pointer jumping with push&pull and gave an upper bound of O(log^2 n) rounds for their algorithm.
Kutten et al. [27,26] studied resource discovery in both the deterministic and the asynchronous cases and presented improved bounds. The idea of first building a virtual structure (i.e., topology control) and then running gossip on top of this structure is not novel; a similar idea was presented by Melamed and Keidar [31]. Another source of influence on our work was the work on pointer jumping with push&pull in the context of efficient construction of peer-to-peer networks [30] and on computing minimum spanning trees [29]. First, we present the algorithm, which disseminates a rumor by push&pull in O(√log n) time, w.h.p. Then, we analyze our algorithm, show its correctness, and prove the runtime bound.

Algorithm - Rumor Spreading with Pointer Jumping

First, we provide a high-level overview of our algorithm. At the beginning, a message resides on one of the nodes, and the goal is to distribute this message (or rumor) to every node in the network. We assume that each node has a unique address (which can, e.g., be its IP address), and every node can select a vertex uniformly at random from the set of all nodes (i.e., as in the random phone call model). Additionally, a node can store a constant number of addresses, out of which it can call one in a future round. However, a node must decide in each round whether it chooses an address uniformly at random or from the pool of the addresses stored before the current round. In our analysis, we assume for simplicity that every node knows n exactly. However, a slightly modified version of our algorithm also works if the nodes have an estimate of log n which is correct up to some constant factor. We discuss this case in Section 5. The algorithm consists of five main phases, and these phases may contain several rounds of communication. Basically, there are two types of nodes in the algorithm, which we call leaders and connectors, and the algorithm is:

Phase 0 - each informed node performs push in every step of this phase. The phase consists of c log log n steps, where c is some suitable constant. According to, e.g., [23], the message is contained in log^2 n nodes at the end of this phase.

Phase 1 - each node flips a coin to decide whether it will be a leader, with probability 1/2^{√log n}, or a connector, with probability 1 − 1/2^{√log n}.

Phase 2 - each connector chooses leaders by performing five pointer-jumping sub-phases, each for c√log n rounds. At the end, all but o(n) connectors will have at least 2 leader addresses stored, with high probability. Every such connector keeps exactly 2 leader addresses (chosen uniformly at random) and forgets all the others. A detailed description of this phase is given below.

Phase 3 - in each round of this phase, each connector opens a communication channel to a randomly chosen node from the list of leaders received in the previous phase. However, once a connector receives the message, it transmits only once, in the next round, using push communication to its other leader. The leaders send the message in each round over all incoming channels during the whole phase (i.e., the leaders send the message by pull). The length of this phase is c√log n rounds.

Phase 4 - every node performs the usual push&pull (the median counter algorithm according to [23]) for c√log n rounds. All informed nodes are considered to be in state B_1 at the beginning of this phase (cf. [23]).

The second phase needs some clarification: it consists of 5 sub-phases in which connectors choose leaders.
In each sub-phase, every connector performs so-called pointer jumping [28] for c√log n rounds, where c is some large constant. The leaders do not participate in pointer jumping, and when contacted by a connector, they let it know that it has reached a leader. The pointer-jumping sub-phase works as follows: in the first round every connector chooses a node uniformly at random and opens a communication channel to it. Then, each (connector or leader) node that has incoming communication channels sends its address by pull to the nodes at the other end of these channels. In each round i > 1 of this sub-phase, every connector calls the address obtained in step i − 1 and opens a channel to it. Every node that is incident to an incoming channel transmits the address it obtained in step i − 1. Clearly, at any time t each node stores only the address received in the previous step t − 1 of the current sub-phase, and the addresses stored at the end of the previous sub-phases. If in some sub-phase a connector v does not receive a leader address at all, then it forgets the address stored in the last step of this sub-phase. In this case we say that v is "black" in this sub-phase. The idea of using connectors to amplify the information propagation in graphs has already been used in, e.g., [16]. From the description of the algorithm it follows that its running time is O(√log n). In the next section we show that every node becomes informed with probability 1 − n^{−1−Ω(1)}.

Analysis of the Algorithm

For our analysis we assume the following failure model. Each node may fail (before or during the execution of the algorithm) with some probability O(1/2^{√log n}). This implies that, e.g., n^{1−ε} nodes may fail in total, where ε > 0 can be any small constant. If a node fails, then it does not participate in any pointer- or message-forwarding process. Moreover, we assume that the other nodes do not realize that a node has failed, even if they contact it directly. That is, all nodes which contact (directly or by pointer jumping) a failed node in some sub-phase are also considered to be failed. First, we give a high-level overview of our proofs. Basically, we do not consider phases 0 and 1 in the analysis; the resulting properties of the set of informed nodes are straightforward and have already been discussed in, e.g., [23]. Thus, we know that at the end of phase 0 the rumor is contained in at least log^2 n nodes, and at the end of phase 1 there are n/2^{√log n} · (1 ± o(1)) leaders, w.h.p. Lemma 1 analyzes phase 2. We show that most of the connectors will point to a leader after a sub-phase, w.h.p. To show this, we bound the probability that, for a node v, the choices of the nodes in the first step of this sub-phase lead to a cycle of connectors such that, after performing pointer jumping for c√log n steps, v will point to a node in this cycle. Since we have in total 5 sub-phases, which are run independently, we conclude that each connector will point to a leader after at least 2 sub-phases. At this point we do not consider node failures. In Lemma 2, we basically bound the number of nodes pointing to the same leader. For this, we consider the layers of nodes which are at distance 1, 2, etc. from an arbitrary but fixed leader u after the first step of a sub-phase. Since we know how many layers we have in total, and bound the growth of a layer i compared to the previous layer i − 1 by standard balls-into-bins techniques, we obtain an upper bound which is polynomial in 2^{√log n}.
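The chain-following picture behind Lemmas 1 and 2 can be probed empirically. The following Monte Carlo sketch (Python; parameters are illustrative, not from the paper) follows v → r(v) → r(r(v)) → ... until a leader is hit or the chain closes into a cycle, with leader status sampled i.i.d. with probability ≈ 1/2^{√log n}; the observed maximum chain length stays far below the 2^{√log n} log n bound, and leaderless cycles are rare.

import math
import random

def chain_stats(n, leader_prob, trials, seed=1):
    rng = random.Random(seed)
    lengths, cycles = [], 0
    for _ in range(trials):
        r, leader, seen = {}, {}, set()
        v = 0
        while True:
            if v not in leader:
                leader[v] = rng.random() < leader_prob
            if leader[v]:
                lengths.append(len(seen))    # chain ended at a leader
                break
            if v in seen:
                cycles += 1                  # leaderless cycle: v stays "black"
                break
            seen.add(v)
            if v not in r:
                r[v] = rng.randrange(n)      # r(.) is revealed on first visit
            v = r[v]
    return max(lengths), sum(lengths) / len(lengths), cycles

n = 1 << 20
p = 2.0 ** -math.sqrt(math.log2(n))          # leader probability ~ 1/2^sqrt(log n)
print(chain_stats(n, p, trials=10_000))      # (max length, mean length, #cycles)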
In Lemma 3 we show that most of the connectors share a leader address at the end of a sub-phase with Ω(2^{√log n} / log n) many connectors, w.h.p. Here, we start to consider node failures too. To show this, we compute the expected length of the path from a connector to a leader after the first step of a sub-phase. However, since these distances are not independent, we apply martingale techniques to show that for most nodes these distances occur with high probability. Lemma 4 then analyzes the growth in the number of informed nodes within two steps of phase 3. What we basically show is that after any two steps the number of informed nodes is increased by a factor of 2^{√log n / 2}, w.h.p., and most of the newly informed nodes are connected to a (second) leader which is not informed yet. Thus, most connectors which point to these leaders are also not informed. These will become informed two steps later. The main theorem then uses the fact that at the end of phase 3 a 1/2^{7√log n} fraction of the nodes is informed, w.h.p. Then, we can apply the algorithm of [23] to inform all nodes within an additional O(√log n) steps, w.h.p. Now we start with the details. In the first lemma we do not consider node failures. For this case, we show that, w.h.p., there is no connector which is "black" in more than two sub-phases of the second phase. Let r(v) be the choice of an arbitrary but fixed connector node v in the first round of a sub-phase. Furthermore, let R(v) be the set of nodes which can be reached by node v using (directed) edges of the form (u, r(u)) only. That is, a node u is in R(v) iff there exist some nodes u_1, ..., u_k such that u_1 = r(v), u_{i+1} = r(u_i) for any i ∈ {1, ..., k − 1}, and u = r(u_k). Clearly, if there are no node failures, then only one of the following cases may occur: either a leader u exists with u ∈ R(v), or R(v) has a cycle. We prove the following lemma.

Lemma 1. For every connector v it holds that |R(v)| = O(2^{√log n} log n), w.h.p.

Proof. Let P(v) be a directed path (v, u_1, ..., u_k), where u_1 = r(v), u_{i+1} = r(u_i) for any i ∈ {1, ..., k − 1}, and u_i ≠ u_j and u_i ≠ v for any i, j ∈ {1, ..., k}, i ≠ j. Then r(u_k) ∈ {v, u_1, ..., u_{k−1}} with probability k/(n − 1). Let this event be denoted by A_k. Furthermore, let B_k be the event that r(u_k) is not a leader (B_0 is the event that r(v) is not a leader). If L is the set of leaders, then, since communication partners are selected independently, the probability of the intersection of these events factorizes. A simple application of Chernoff bounds implies that |L| = n(1 ± o(1))/2^{√log n}, w.h.p. We condition on the event that this bound on |L| holds, and obtain for any k > c · 2^{√log n} log n that the probability that P(v) reaches length k is negligible whenever c is large enough. The first inequality follows from |L| = ω(k). This implies that the size of R(v) is at most c · 2^{√log n} log n, w.h.p. Applying Inequality (1) with k = c · 2^{√log n}, we obtain that the size of R(v) is at most c · 2^{√log n} with some constant probability tending to 1 as c tends to ∞. Now we prove the corresponding bound for cycles. Decomposing over the step at which a cycle can close (with A_0 = ∅ and B_0 as above), R(v) has a cycle with probability less than the sum of the resulting terms. As already shown, the terms with i > c · 2^{√log n} log n are negligible, and the claim follows.

From the previous lemma we obtain the following corollary.

Corollary 1. Assume there are no node failures. After phase 2, every connector stores the address of at least 2 leaders, with probability at least 1 − n^{−2}.

We can also show the following upper bound on the number of connectors sharing the same leader address.

Lemma 2. W.h.p., at the end of a sub-phase at most O(2^{3.1√log n}) connectors store the address of the same leader.
This bound also holds in the case of node failures, since failed nodes can only decrease the number of connectors sharing the same leader address.

Proof. Let S be a set of nodes, and let r(S) = {v ∈ V | r(v) ∈ S}. We model the parallel process of choosing nodes in the first round of a fixed sub-phase by the following sequential process (that is, the first round of the sub-phase is modeled by the whole sequence of steps of the sequential process). In the first step of the sequential process, all connectors choose a random node. We keep all edges (u, r(u)) with r(u) ∈ L and release all other edges. Let L_1 denote the set of nodes u with r(u) ∈ L. In the ith step, we let each node of V \ (L_0 ∪ ... ∪ L_{i−1}) choose a node from the set V \ (L_0 ∪ ... ∪ L_{i−2}) uniformly at random, where L_0 = L. Clearly, the nodes are not allowed to choose themselves. Then L_i is the set of nodes u with r(u) ∈ L_{i−1}, and all edges (u, r(u)) (generated in this step) with r(u) ∈ L_{i−1} are kept. Obviously, the sequential process produces the same edge distribution on the nodes of the graph as the parallel process. If now S ⊂ L_{i−1}, then the probability for a node v ∈ V \ (L_0 ∪ ... ∪ L_{i−1}) to choose a node in S is |S| / |V \ (L_0 ∪ ... ∪ L_{i−2})|. Then, according to [34], the number of nodes v with r(v) ∈ S is at most |S| + O(log n + √(|S| log n)), w.h.p. Similarly to the definition of L_i, for a leader u the nodes v with r(v) = u are in the set L_1(u), the nodes v with r(r(v)) = u are in the set L_2(u), and generally the nodes v with r(v) ∈ L_{i−1}(u) define the set L_i(u). Then, according to the arguments above, |L_{i+1}(u)| ≤ |L_i(u)| + O(log n + √(|L_i(u)| log n)), w.h.p. We assume now that |L_1(u)| = Θ(log n) (from [34] we may conclude that |L_1(u)| = O(log n), w.h.p.). Then, for any i ≤ c · 2^{√log n} log n, we assume the highest growth in every step. Since |R(v)| = O(2^{√log n} log n) for any v (cf. Lemma 1), and since by induction |L_i(u)| ≤ c i^2 log n for each i, we obtain the claim. ⊓⊔

Let us fix a sub-phase. We now allow node failures (i.e., each node may fail with some probability O(1/2^{√log n})), and prove the following lemma.

Lemma 3. There are cn connectors, where c > 0 is a constant, which store the addresses of at least two leaders, and each of these leader addresses is shared by at least Ω(2^{√log n} / log n) connectors, w.h.p.

Proof. First, we consider the case in which no node failures are allowed. Then, we extend the proof. Now let us assume that no failures occur. We have shown in Lemma 1 that the length of a path (v, u_1, ..., u_k, u) from a node v to a leader u is O(2^{√log n} log n), w.h.p., where u_1 = r(v), u_i = r(u_{i−1}) for any i ∈ {2, ..., k}, and u = r(u_k). Let u be a leader, and let L_i(u) be the set of connectors which have distance i from u after a certain (arbitrary but fixed) sub-phase of the second phase. Furthermore, let L_i(L) = ∪_{u∈L} L_i(u). For our analysis, we model the process of choosing nodes in the first step of this sub-phase by a sequential process (similar to the proof of the previous lemma), in which first v chooses a node, then r(v) chooses a node, then r(r(v)) chooses a node, etc. In step i of this sequential process, the ith node u_{i−1} on the path P(v) chooses a node. Using the bounds of Lemma 1, we obtain that, given R(v) ∩ L ≠ ∅ (note that the number of nodes satisfying this property is n(1 − o(1)), w.h.p.), a node has a path of length Ω(2^{√log n} / log n) to a leader with probability 1 − o(1), and thus the expected number of such nodes is n(1 − o(1)). Now we consider node failures.
A node v is considered failed if it fails (as described at the beginning, each node fails with probability O(1/2^{√log n})), or if there is a node in R(v) which fails. Since |R(v)| = O(2^{√log n}) with constant probability, a node of such an R(v) fails with at most some constant probability. However, these probabilities are not independent. Nevertheless, the expected number of nodes which will not be considered failed and have a path of length Ω(2^{√log n} / log n) to a leader is Θ(n). Now, consider the following martingale sequence. Let v_1, ..., v_{n−|L|} denote the connectors. In step j, we reveal the directed edges and nodes from node v_j to all nodes in all the R(v_j) obtained from the different sub-phases. Given that |R(v_j)| = O(2^{√log n} log n), we apply the Azuma-Hoeffding inequality [32] and obtain that Θ(n) nodes are connected to a leader by a path of length Ω(2^{√log n} / log n) and will not be considered failed, w.h.p. Summarizing, Θ(n) of the nodes store at the end of the second phase the addresses of at least two leaders, and such a connector shares each of these addresses with Ω(2^{√log n} / log n) other connectors, w.h.p. ⊓⊔

Applying pointer jumping on all connectors as described in the algorithm, we obtain the following result.

Observation 1. If in an arbitrary but fixed sub-phase of the second phase R(v) ∩ L ≠ ∅ for some connector v, then v stores the address of a leader u at the end of this sub-phase, w.h.p.

This observation is a simple application of the pointer-jumping algorithm [28] on a directed path of length |R(v)|. According to Lemma 1, |R(v)| = O(2^{√log n} log n), w.h.p. Now we concentrate on the third phase. We condition on the event that each connector has stored at least two and at most 5 different leader addresses. Furthermore, an address stored by a connector is shared with at least Ω(2^{√log n} / log n) other connectors, with high probability (see Lemma 3). Out of these connectors, let C be the set of nodes v with the following property: the first time a leader of v receives the message, v will contact this leader in the next step, pull the message, and in the step after that push the message to its other leader. Clearly, for a node v this event occurs with constant probability, independently of the other nodes. Therefore, the total number of nodes in C with at least two different leader addresses, where each of these addresses is shared by at least Ω(2^{√log n} / log n) other connectors, is Θ(n), w.h.p. We call the set of these nodes C̃. Now we have the following observation.

Observation 2. Let C_i be the set of nodes which store the same (arbitrary but fixed) leader address after a certain sub-phase, and assume that |C_i| = Ω(2^{√log n} / log n). Then |C_i ∩ C̃| = Θ(|C_i|), w.h.p.

The proof of this observation follows from the fact that if two nodes share the same address after a certain sub-phase, then each of these nodes will share, with probability 1 − o(1), a leader address obtained in some other sub-phase with at least Ω(2^{√log n} / log n) other connectors. However, these events are not independent. Let now C_j be some other such set which contains a node v ∈ C_i. Since the sizes of these sets are bounded (cf. Lemma 2), there will be, with probability at least 1 − n^{−2}, at most 4 nodes in C_i ∩ C_j. Conditioning on this, we apply to the nodes of C_i ∩ C̃ the same martingale sequence as in the proof of Lemma 3.
By taking into account that in this case the martingale sequence satisfies the 4-Lipschitz condition (only the nodes of C_i are part of the martingale), we obtain the statement of the observation. Now we are ready to show the following lemma.

Lemma 4. After the third phase the number of informed nodes is at least n/2^{7√log n}, w.h.p.

Proof. For a node v ∈ C̃, let C_v^{(1)} and C_v^{(2)} represent two sets of nodes which store the same leader address as v (obtained in the same sub-phases of the second phase), and for which we have |C_v^{(1)}|, |C_v^{(2)}| = Ω(2^{√log n} / log n). We know that each node has exactly 2 leader addresses. Since after phase 0 at least log^2 n nodes are informed, we may assume that at the beginning of this phase a node w ∈ C̃ is informed, and w pushes the message exactly once. That is, after two steps all nodes of C_w^{(j)} ∩ C̃ are informed, where j is either 1 or 2 (we may assume w.l.o.g. that j = 1). Furthermore, we assume that these are the only nodes which are informed after the second step. Now, we show by induction that the following holds: after 2i steps, the number of informed nodes I(i) in C̃ is at least min{2^{√log n · i/2}, n/2^{7√log n}}, w.h.p. Furthermore, there is a partition of the set {C_v^{(j)} | v ∈ I(i), j ∈ {1, 2}} into families E^{(j)}(i) and F^{(j)}(i). Roughly speaking, the sets belonging to E^{(j)}(i) contain some nodes which have just been informed in the last time step, while most of the nodes from these sets are still uninformed. If now these nodes perform a push, and in the next step the nodes of the sets in E^{(j)}(i) perform a pull, then these nodes become informed as well. Our assumption is that the number of sets E_v^{(j)}(i) is Ω(|I(i)| / log n), w.h.p. This obviously holds before the first and after the second step. Assume that the induction hypothesis holds after step 2i; we are going to show that it also holds after step 2(i + 1). Clearly, if U is some set of nodes which have the same leader address after an arbitrary but fixed sub-phase of the second phase, where |U| = Ω(2^{√log n} / log n), then we have |U ∩ C̃| = Θ(|U|), w.h.p. (see Observation 2). On the other hand, there are at least Ω(n/2^{3.1√log n}) such sets U with U ∈ ∪_{j=1,2} F^{(j)}(i), w.h.p., since the largest set we can obtain has size O(2^{3.1√log n}), w.h.p. (cf. Lemma 2). According to our induction hypothesis, at least Ω(|I(i)| / log n) and at most O(|I(i)|) of these sets are elements of {E_v^{(j)}(i) | v ∈ I(i)}. Clearly, a node v ∈ C̃ \ I(i) will be in at most one of these sets, w.h.p. Since any of these sets accommodates at least Θ(2^{√log n} / log n) nodes from C̃, w.h.p., the number of informed nodes increases within two steps by at least a factor of Θ(2^{√log n} / log^2 n) ≫ 2^{√log n / 2}, which leads to |I(i + 1)| ≥ 2^{√log n · (i+1)/2}, w.h.p. The induction step can be performed as long as |I(i)| ≤ n/2^{7√log n}. Now we concentrate on the distribution of these nodes among the sets U ∈ {E_v^{(j)}(i + 1) | v ∈ I(i + 1), j ∈ {1, 2}}. Note that each such node belongs to two sets; one of these sets is an element of this family, w.h.p., where U is some set of nodes which have the same leader address after an arbitrary but fixed sub-phase of the second phase, with |U| = Ω(2^{√log n} / log n). Thus, a node v ∈ (I(i + 1) \ I(i)) ∩ C̃ is assigned to a fixed such U with probability O(1/|I(i + 1)|). Therefore, none of the sets E_v^{(j)}(i + 1) will accommodate more than O(log n) nodes from (I(i + 1) \ I(i)) ∩ C̃, w.h.p. [34], and the claim follows. ⊓⊔

Now we are ready to prove our main theorem, which also compares the communication overhead of the usual push&pull algorithm of [23] to that of our algorithm.
Note that the bit communication complexity of [23] w.r.t. one rumor is O(nb · log log n), w.h.p., where b is the bit length of that rumor. We should also mention here that in [23] the authors assumed that messages (so-called updates in replicated databases) are frequently generated, and thus the cost of opening communication channels amortizes over the cost of sending messages through these channels. If in our scenario messages are frequently generated, then we may also assume that the cost of the pointer-jumping phase is negligible compared to the cost of sending messages, and thus the communication overhead in our case would also be O(nb log log n). In our theorem, however, we assume that one message has to be distributed, and sending the IP address of a node through a communication channel costs O(log n) bits. Also, opening a channel without sending messages generates an O(log n) communication cost.

Theorem 1. At the end of the JPP algorithm, all but O(F) nodes are informed w.h.p., where F is the number of failed nodes as described above. The algorithm has running time O(√log n) and produces a bit communication complexity of O(n(log^{3/2} n + b · log log n)), w.h.p., where b is the bit length of the message.

Proof. In the fourth phase we apply the (median counter) algorithm presented in [23]. For the sake of completeness, we describe this algorithm here as given in [23]. There, each node can be in a state called A, B, C, or D. State B is further subdivided into substates B_1, ..., B_{ctr_max}, where ctr_max = O(log log n) is some suitable integer. At the beginning of this phase, all informed nodes are in state B_1 and all uninformed nodes are in state A. The rules are as follows:

- If a node v in state A receives the rumor only from nodes in state B, then it switches to state B_1. If v obtains the rumor from a state-C node, then it switches to state C.
- If a node v in state B_i communicates with more nodes in some state B_j with j ≥ i than with nodes in state A or B_{j'} with j' < i, then v switches to state B_{i+1}. If v gets the rumor from a state-C node, then it switches to state C.
- A node in state C sends the rumor for O(log log n) further steps. Then it switches to state D and stops sending the rumor.

We know that at the end of the third phase there are at least n/2^{7√log n} informed nodes, w.h.p. (cf. Lemma 4). In order to apply Theorem 3.1 of [23], we have to couple the original median counter algorithm with our algorithm. Let I(t_0) be the set of informed nodes at the end of the third phase. Clearly, the communication overhead w.r.t. the rumor is O(n · b) in the third phase, since each connector transmits the message at most twice, and the number of leaders is bounded by O(n/2^{√log n}), w.h.p. Then, there is a time step in the original median counter algorithm such that the number of informed nodes is |I(t_0)| as well. Obviously, there might exist nodes at this time step which are in some state B_j with j > 1, C, or D. At this time step, we couple the random choices of the nodes in the two algorithms. As long as |I(i)| ≤ n/log^2 n, it holds that |I(i + 1)| > (1 + ε)|I(i)|, w.h.p. (see the exponential growth phase in Theorem 3.1 of [23]), for some constant ε > 0, and the number of informed nodes (as well as the constant ε) produced by our algorithm dominates the number of informed nodes in the original median counter algorithm.
This holds since at time step t_0 we only have state-B_1 or state-A nodes in our algorithm, while the original median counter algorithm may contain state-B_j and state-C nodes at that time step, where j > 1. Therefore, these nodes will stop sending the message earlier. When |I(i)| ≥ n/log^2 n for the first time, the communication overhead w.r.t. the rumor is bounded by O(n · b). Once the message is distributed to n/log^2 n nodes, one needs O(log log n) additional steps to disseminate the rumor among all vertices of the graph (see the quadratic shrinking phase in Theorem 3.1 of [23]). Moreover, all nodes stop sending the rumor after O(log log n) additional steps, once all nodes are informed (cf. Theorem 3.1 of [23]). Thus, the total communication overhead w.r.t. the rumor is bounded by O(nb · log log n), w.h.p. The communication overhead w.r.t. the addresses sent by the nodes in the pointer-jumping phase is upper bounded by O(n√log n · log n), where √log n stands for the number of steps in the second phase, while the log n term describes the bit size of a message (an address is some polynomial in n).

Discussion - Non-exact Case

As mentioned in Section 4.1, a modified version of our algorithm also works if the nodes only have an estimate of log n which is accurate up to some constant factor. In this case, we introduce dummy sub-phases between any two phases and between any two sub-phases of phase 2. Now, for a node v, the length of sub-phase i of phase 2 will be ρ^{2i} c√(log n_v), and between sub-phases i and i + 1 there will be a dummy sub-phase of length ρ^{2i+1} c√(log n_v). Here n_v is the estimate of n at node v. Accordingly, the dummy sub-phase between phases 1 and 2 will have length ρ c√(log n_v), between phases 2 and 3 length ρ^{11} c√(log n_v), and between phases 3 and 4 length ρ^{13} c√(log n_v). The length of phase 3 will be ρ^{12} c√(log n_v), and that of phase 4 will be ρ^{14} c√(log n_v). Here ρ is a large constant, chosen so that, despite the constant-factor differences between the estimates n_v, each phase or sub-phase is longer than all the preceding ones together. The role of the dummy sub-phases is to synchronize the actions of the nodes. That is, no node will enter a phase or sub-phase before the last node leaves the previous phase or sub-phase. Accordingly, no node will leave a phase or sub-phase before the last node enters this phase or sub-phase. Moreover, the whole set of nodes will be together for at least c√log n steps in every phase or sub-phase. This ensures that all the phases and sub-phases of the algorithm work correctly and lead to the results we have derived in the previous section. Note, however, that the communication overhead might increase to some value O(n(log^{3/2} n + b√(log n))).
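For concreteness, here is a simplified, self-contained simulation (Python) of the phase-4 median-counter push&pull quoted in the proof of Theorem 1. Two details are assumptions relative to the description above, following the design of Karp et al. [23]: a node whose counter exceeds ctr_max switches to state C, and a C-node keeps broadcasting for O(log log n) further rounds before switching to D.

import math
import random

A, C, D = "A", "C", "D"              # B-states are represented by integers 1..ctr_max

def median_counter(n, initially_informed, seed=0):
    rng = random.Random(seed)
    ctr_max = max(2, int(math.log2(math.log2(n))) + 2)
    state = {v: (1 if v < initially_informed else A) for v in range(n)}
    rounds_in_c = {}
    for t in range(20 * int(math.log2(n))):
        seen = {v: [] for v in range(n)}
        for v in range(n):
            u = rng.randrange(n - 1)
            u += (u >= v)                    # uniform partner other than v
            # push&pull: both endpoints of the call see each other's state
            seen[v].append(state[u])
            seen[u].append(state[v])
        new = dict(state)
        for v, partners in seen.items():
            s = state[v]
            informed = [x for x in partners if x not in (A, D)]
            if s == A and informed:
                # rumor from B-nodes only -> B_1; from a C-node -> C
                new[v] = C if C in informed else 1
            elif isinstance(s, int):
                if C in informed:
                    new[v] = C
                else:
                    higher = sum(isinstance(x, int) and x >= s for x in partners)
                    lower = sum(x == A or (isinstance(x, int) and x < s) for x in partners)
                    if higher > lower:
                        new[v] = s + 1
                    if isinstance(new[v], int) and new[v] > ctr_max:
                        new[v] = C           # assumption: counter exhausted -> C
        state = new
        for v in range(n):
            if state[v] == C:
                rounds_in_c[v] = rounds_in_c.get(v, 0) + 1
                if rounds_in_c[v] > 2 * ctr_max:
                    state[v] = D             # assumption: C broadcasts O(log log n) rounds
        if all(s != A for s in state.values()):
            return t + 1                     # rounds until all nodes are informed
    return None                              # did not finish within the round budget

print(median_counter(n=2000, initially_informed=4))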
2015-12-08T16:56:19.000Z
2015-12-08T00:00:00.000
{ "year": 2015, "sha1": "56fbf3cd288e92e75ab401d6c7d98439ea045f50", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "56fbf3cd288e92e75ab401d6c7d98439ea045f50", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
25308316
pes2o/s2orc
v3-fos-license
Bis{(E)-3-[(diethylmethylammonio)methyl]-N-[3-(N,N-dimethylsulfamoyl)-1-methylpyridin-4-ylidene]-4-methoxyanilinium} tetraiodide pentahydrate

The title compound, 2C21H34N4O3S2+·4I−·5H2O, was prepared exclusively as the E isomer by methylation of the corresponding N-phenylpyridin-4-amine. There are two symmetry-independent molecules in the asymmetric unit with no significant differences in bond lengths and angles. The aromatic rings are not coplanar with the pyridin-4-imine groups, as indicated by the C—N—C—C torsion angles of 47.7 (7) and 132.6 (5)°. H atoms were treated by a mixture of independent and constrained refinement. Δρ_max = 2.07 e Å−3, Δρ_min = −1.42 e Å−3. Table 1. Hydrogen-bond geometry (Å, °).

Comment

Malaria is counted among the major diseases worldwide, and few efficient drugs for it are known today [Bjorkman and Bhattarai, 2005]. 4(1H)-Pyridones are currently being developed as important potential antimalarial agents, capable of inhibiting the bc1 complex at the oxidation site (Qo site) in the Plasmodium falciparum mitochondrion [Yeates et al., 2008]. As part of our project towards the synthesis of 4(1H)-pyridone bioisosteric scaffolds, the (1H-pyridin-4-ylidene)amine scaffold was studied. The title compound was prepared by reaction of the corresponding N-phenylpyridin-4-amine with methyl iodide. Interestingly, only the E isomer of the compound was obtained, as was previously observed for amodiaquine analogues [Lopes et al., 2004]. There are two symmetry-independent molecules in the asymmetric unit with no significant differences in bond lengths and angles. The observed imine bond distances C4—N14 and C44—N54 are longer than expected by ca 0.035 Å [Wang et al., 2008 and Djedouani et al., 2008], a consequence of the imine group being protonated. The aromatic rings are not coplanar with the pyridin-4-imine moieties, as indicated by the C4—N14—C15—C16 and C44—N54—C55—C56 dihedral angles of 47.7 (7)° and 132.6 (5)°, respectively. The molecules are hydrogen-bonded through the imine nitrogen atoms at N14 and N54, acting as donors towards the sulfonyl oxygen atoms O9 and O19 of each sulfonamide moiety, respectively. The (1H-pyridin-4-ylidene)amine scaffold is nearly planar: the C5—C4—N14—C15 dihedral angle is 7.9 (7)° in one of the molecules, whereas the C43—C44—N54—C55 dihedral angle in the other molecule is −14.1 (7)°.

Experimental

The title compound was prepared at room temperature by reacting 2-[(diethylamino)methyl]-4-(pyridin-4-ylamino)phenol with methyl iodide in the presence of NaH in DMF. Crystals were grown from water.

Refinement

The hydroxy H atoms of the water solvent molecules were initially located in a difference Fourier map, but their distances were constrained with DFIX at 0.9 Å from the O atom and with DANG at 2.5 Å from the other water H atom. The hydrogen atoms linked to the charged N14 and N54 atoms were located in a difference Fourier map, but the N—H distances were constrained at 0.9 Å in order to stabilize the refinement.
The remaining H atoms were positioned geometrically and included as riding atoms, with C—H = 0.95 or 0.98 Å and Uiso(H) = 1.2 or 1.5 times Ueq(C).

Special details

Geometry. All s.u.'s (except the s.u. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell s.u.'s are taken into account individually in the estimation of s.u.'s in distances, angles and torsion angles.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
2016-05-12T22:15:10.714Z
2009-01-10T00:00:00.000
{ "year": 2009, "sha1": "53d5b6b19541939ffba139a07a1f8116f46bc65a", "oa_license": "CCBY", "oa_url": "http://journals.iucr.org/e/issues/2009/02/00/bg2227/bg2227.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3c0aa0c391ea1d258482dbafc91a4ac4e357046a", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
233626304
pes2o/s2orc
v3-fos-license
Record‐breaking daily rainfall in the United Kingdom and the role of anthropogenic forcings

The breaking of the United Kingdom's daily rainfall record in October 2020 made a striking addition to the list of recent heavy precipitation events in the country. Mounting evidence from attribution research suggests that such extremes become more frequent and intense in a warming climate. Although most studies consider extreme events in specific months or seasons, here we investigate for the first time how extremes of the wettest day of the year may be influenced by anthropogenic forcings. Data from large multimodel ensembles indicate that the moderate historical trend towards wetter conditions will emerge more strongly in coming decades, while a notable anthropogenic influence on the variability of the wettest day may be identified as early as the 1900s. Experiments with different forcings are employed to estimate the changing probability of extremes due to anthropogenic climate change in a risk‐based attribution framework. We introduce a new methodology of estimating probabilities of extremes in the present and future that calibrates data from long simulations of the preindustrial climate to the mean state and variability of the reference climatic period. The new approach utilises larger samples of rainfall data than alternative methods, which is a major advantage when analysing extremely rare events. The record rainfall of the wettest day in year 2020 is estimated to have become about 2.5 times more likely because of human influence, while its return time, currently about 100 years, will decrease to only about 30 years by 2100. Compared to a hypothetical natural climate, we estimate a 10‐fold increase in the chances of such extreme rainfall events in the United Kingdom by the end of this century, which underlines the need for effective adaptation planning.

The associated storm brought prolonged heavy rain across the United Kingdom, with 4-day accumulations reaching 150 mm in some regions (Kendon and McCarthy, 2020). Thankfully, the impacts of the rainfall were not severe, though wet extremes with more catastrophic impacts have recently hit the United Kingdom and are still fresh in public memory. Only a year before the 2020 record, extreme flooding wreaked havoc in Yorkshire, leading to loss of life and livelihood (Kendon, 2019). Although the prevalent atmospheric conditions are invariably major drivers of such events, understanding the underpinning role of anthropogenic climate change, and how it might alter the frequency of such events, is crucial in order to help communities effectively plan their adaptation and reduce their vulnerability. In a warming climate the atmosphere can hold more water vapour, in line with the Clausius-Clapeyron relation, and wet extremes would therefore be expected to become more intense (Allan et al., 2014). Indeed, attribution studies provide evidence that the hydrological cycle has been strengthened in recent decades under the influence of anthropogenic forcings (Wu et al., 2013; Padrón et al., 2020), leading to a detectable intensification of extreme rainfall on global and continental scales (Dong et al., 2020). Regional changes are often too complex to be explained by the simple Clausius-Clapeyron relation (Kumar et al., 2015), stressing the need for in-depth studies with a regional focus. For example, Christidis and Stott (2021) report opposite trends in European summer rainfall extremes, with increases in the north and decreases in the south of the continent.
Shifting the focus to the United Kingdom and changes in autumn events, Cotterill et al. (2021) estimate a 60% increase in the frequency of extreme daily precipitation since 1900, with a further 85% increase by the end of this century under a high emissions scenario. In addition to the attribution of climatic trends, attribution research also examines how specific extreme weather and climate events may be influenced by anthropogenic forcings, and estimates how certain event characteristics, like the frequency or intensity, may be altered by human influence. Attribution assessments of high-impact events around the world for different types of extremes are published on an annual basis in a popular special report by the Bulletin of the American Meteorological Society (BAMS; e.g., Herring et al., 2020). There is a pressing demand for information on the changing likelihood of extremes, which can aid, for example, a more effective design of flood defences, buildings, or transport infrastructure, making them better suited to the future climate (Betts, 2021). Therefore, the importance of integrating event attribution into the framework of developing climate services has long been recognised (Hewitt et al., 2012). Studies of flooding and extreme rainfall events with dire socioeconomic impacts in the United Kingdom corroborate that their likelihood has been on the rise under the influence of anthropogenic warming (Pall et al., 2011; Christidis and Stott, 2015; Schaller et al., 2016; Otto et al., 2018; Davies et al., 2021). In this article, we (a) show how anthropogenic influence may have led to notable temporal changes in both the mean state and the variability of the wettest day, (b) investigate the anthropogenic influence on the likelihood of breaking the wettest day record in 2020 and on the risk of having days with rainfall higher than that in 2020, and (c) estimate how the likelihood of such events may further change during the course of the century. Unlike previous studies that consider seasonal and monthly events, or shorter events linked to specific seasons, here we use the wettest day of the year as our event definition, which would generally occur at a different time each year and develop under different synoptic conditions. This definition may imply higher variability, which could potentially make the anthropogenic effect more difficult to detect. Finally, we introduce a new method of constructing the present-day and future distributions of the wettest day, from which the probabilities of extremes are derived. The method provides larger samples, which are valuable for the likelihood estimation of extremely rare events. The remainder of this article is structured as follows: the observational and model data used in the attribution analysis and the methodology are discussed in Section 2. Section 3 presents results, including changes in the return times of extreme events due to human influence, as well as risk ratio estimates. The main findings and concluding remarks are discussed in Section 4.

| Observations

We compute annual values of the wettest day in the United Kingdom, henceforth referred to as Rx01, from mean daily rainfall observations and simulated data averaged over UK land. The observational data come from HadUK-Grid (v1.0.2.1; Hollis et al., 2019), a dataset derived from meteorological stations across the United Kingdom and interpolated onto a uniform grid, which offers full land coverage. The full data acquisition and quality control of raingauge observations take around 6 months.
The 2020 values reported in this manuscript are therefore provisional, based on the real-time observing networks available in October 2020, and as such are subject to further minor revision upon completion of the quality control. Timeseries of Rx01 anomalies constructed with HadUK-Grid data since year 1891 are illustrated in Figure 1a and indicate a moderate increase in Rx01 of 0.025 mm/year. The 2020 Rx01 record, as well as the previous UK record (August 25, 1986), is marked in the figure. Time series from ECMWF's reanalysis of the 20th century (ERA-20C; Poli et al., 2016) reassuringly indicate a variability pattern akin to HadUK-Grid (supporting information, Figure S1). In most years, Rx01 falls in autumn and winter months, with the highest percentage corresponding to October (Figure 1b). A similar distribution among the months is also seen in the models used in this study. Figure 1c,d depicts the atmospheric circulation on the 2 days with the highest Rx01 values in the United Kingdom, as represented by the mean 500 hPa geopotential height (Z500) anomaly. The anomalies were constructed with data from the ERA5 reanalysis (Hersbach et al., 2020). During the 2020 event, a negative Z500 anomaly (marking the centre of storm Alex) was prominent south of the United Kingdom, with associated weather fronts bringing heavy rainfall across the United Kingdom. The 1986 event, on the other hand, was linked to the passage of former hurricane Charley, identified by the negative Z500 anomalies west of the United Kingdom.

Figure 1. United Kingdom's wettest day of the year. (a) Timeseries of UK mean Rx01 anomalies relative to 1961-1990 from observational data. The observed anomalies in 2020 and 1986 are marked by a cross. (b) Percentage of Rx01 occurrences for each month of the year from the observations (black) and the 32 model simulations with ALL forcings (red), estimated over the observational period 1891-2020. (c) Geopotential height anomalies at 500 hPa on October 3, 2020, relative to the October climatological mean. (d) Geopotential height anomalies (relative to the climatological August mean) for August 25, 1986.

| CMIP6 models

We also compute model-based estimates of Rx01 using daily rainfall data from simulations with nine coupled climate models that took part in the World Climate Research Programme's Coupled Model Intercomparison Project phase 6 (CMIP6; Eyring et al., 2016). We select models that provide ensembles of simulations with all historical climatic forcings (ALL) starting in 1850 and extended to the end of the century with the medium emissions scenario SSP2-4.5 (Riahi et al., 2017), and simulations with natural forcings only (NAT) to year 2020. Each model also provides long control simulations of the preindustrial climate (CTL) with no external forcings, which are used in our attribution methodology. In summary, we utilise ensembles of 32 ALL simulations, 41 NAT simulations, and 5,752 CTL years in total. Details of the CMIP6 data are given in the supporting information, Table S1. The model data were re-gridded onto a common grid (N216). Using different versions of HadUK-Grid with horizontal resolution from 1 to 60 km, we confirm that data re-gridding does not introduce a major uncertainty in estimated Rx01 trends and variability.

| Modelled changes in the mean state and variability

Timeseries of Rx01 anomalies from the ALL and NAT simulations are shown in Figure 2a. We compute anomalies relative to the 19th-century period 1850-1899, which is closer to the preindustrial climate and can therefore account for most of the anthropogenic influence when comparing the ALL with the NAT climate. The observed 2020 and 1986 values of Rx01 provide thresholds for the definition of extreme events in our attribution analysis. Their equivalent values in the model climate are approximated by the anomalies in the ALL ensemble that lie the same number of SDs above the 1961-1990 mean as in HadUK-Grid (Christidis et al., 2019). The two thresholds are marked on Figure 2a. The thresholds could also be estimated by matching percentiles of the generalised extreme value (GEV) distribution applied to observed and modelled data. This alternative approach is found to yield very similar threshold values. The yellow line on the figure represents the smoothed ALL ensemble mean and illustrates the temporal change of Rx01 under the influence of external forcings. The models show no clear change in the mean Rx01 until about the end of the 20th century, but a steady increase thereafter. Interestingly, we find that human influence not only changes the mean state of Rx01 but also its variability. Using 30-year rolling windows, we construct timeseries of the SD from the ALL and NAT simulations and plot the means of the two ensembles in Figure 2b. The ALL simulations suggest a rapid increase in variability after the mid-20th century. Christidis and Stott (2021) reported similar increases in European summer rainfall variability in CMIP6 simulations. Although one might expect that human influence would be less prominent in the earlier part of the timeseries, we note a clear separation between ALL and NAT since the end of the 19th century, with lower variability under the effect of human influence. This could indicate an early manifestation of the aerosol forcing effect in the United Kingdom, driven by the modification of cloud properties by aerosol particles. As the greenhouse gas forcing intensifies during the course of the 20th century, the anthropogenic warming dominates over the aerosol effect, leading to an increase in the Rx01 variability. Even in the more stationary NAT climate, the Rx01 variability appears to have a characteristic pattern that might be linked, for example, to volcanic effects. The SD from equal segments of CTL simulations with the nine models lies, as expected, within the range of the NAT experiment. Although these preliminary findings are very revealing, a more detailed follow-up study needs to be undertaken to better understand the contribution of different external drivers to changes in rainfall variability.

| Model evaluation

We next carry out simple standard model evaluation tests to ensure the CMIP6 models provide a realistic representation of Rx01 and are therefore fit for purpose. Historical trends in Rx01 are small and sensitive to the end-points of the period used. Figure 3a shows trends to the present day from different starting points from the observations and the ALL simulations. The observed trends are higher when the earlier decades are included in the trend estimation, but become consistent with the model range and are close to the ensemble mean by 1920. As an increase in Rx01 emerges more clearly towards the end of the 20th century, the consistency in more recent decades is reassuring. The discrepancy when early years are included does not necessarily indicate a limitation of the models but could arise from the poorer observational coverage in the early years.
Indeed, the number of stations contributing to the HadUK-Grid dataset increased from approximately 100 in 1891 to a peak of approximately 5,000 in the mid-1970s, before declining to about 2,400 at present. Rx01 timeseries from ERA-20C (supporting information, Figure S1a) also indicate higher anomalies than HadUK-Grid in earlier years and yield smaller trends that agree better with the models. Historical distributions of Rx01 constructed from detrended HadUK-Grid and ALL data over the common observational period are in good agreement (Figure 3b), and a Kolmogorov-Smirnov test indicates they are indistinguishable when tested at the 10% significance level. Finally, power spectra suggest that the simulated variability over different timescales is consistent with the observed variability (Figure 3c). Although the evaluation tests are more indicative than conclusive, due to the relatively small observational sample, they do not raise concerns about the ability of the models to represent the UK mean Rx01, but in fact indicate that they provide sufficiently good data for our attribution study.

Figure 2. (a) Timeseries of UK mean Rx01 anomalies relative to 1850-1899 from model simulations with all external forcings (red) and natural forcings only (green). The yellow line represents the smoothed mean of the ALL simulations. Anomalies corresponding to years 2020 and 1986 are marked by the black horizontal lines. (b) Timeseries of the Rx01 SD constructed as the mean of individual simulation timeseries from the ALL (red) and NAT (green) experiments. The SD for the pre-industrial climate estimated from long CTL simulations is shown in blue. The present-day estimate is marked by the black horizontal line.

| Methodology

The majority of event attribution studies follow a risk-based framework that we also adopt here. This approach utilises large ensembles of simulations with and without anthropogenic forcings (ALL vs. NAT) to construct probability distributions of the relevant variable (in this case Rx01) in a hypothetical natural world and the present (or also future) climate. Estimates of the probability of exceeding a threshold that defines extreme events are then obtained from the different distributions and, by comparing the ALL and NAT probabilities, the anthropogenic influence on the likelihood of extreme events is assessed. We derive changes in the likelihood of setting a new UK record in year 2020 by defining extreme events as exceedances of the 1986 Rx01 value. For the likelihood of events more extreme than in year 2020, we consider exceedances of the 2020 Rx01. The same two thresholds are used to estimate the likelihood of extreme events at the end of the century. We construct the NAT distribution of Rx01 using data from all the simulated years (1850-2020) of the NAT experiment. This yields a sample size of 7,011 (171 years × 41 simulations), which is large enough to estimate the likelihood of extremely rare events. As the ALL climate is non-stationary, we cannot utilise all the simulated data of the ALL experiment, but we can select a subset of the data in a time-window around the year of interest (Christidis and Stott, 2015). For example, we can construct present and future distributions of Rx01 using data from the ALL simulations in the 30-year periods 2005-2034 and 2071-2100. This generates samples with a size of 960 (30 years × 32 simulations). Although this approach yields smaller, albeit still good-sized, ALL samples, a large amount of simulated data remains unused (a minimal sketch of the exceedance-probability calculation used throughout this framework is given below).
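The core of the risk-based framework is elementary: count threshold exceedances, invert to return times, and compare the ALL and NAT probabilities. The sketch below uses made-up Gaussian samples — only the sample sizes and the bootstrap size mirror the text; the means, SDs, and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def exceedance_prob(sample, threshold):
    """Empirical probability that Rx01 exceeds the event threshold."""
    return np.mean(np.asarray(sample) > threshold)

def bootstrap_ci(sample, threshold, n_boot=1000, alpha=0.10):
    """5-95% bootstrap range of the exceedance probability."""
    sample = np.asarray(sample)
    probs = [exceedance_prob(rng.choice(sample, size=sample.size), threshold)
             for _ in range(n_boot)]
    return np.quantile(probs, [alpha / 2, 1 - alpha / 2])

# Illustrative synthetic Rx01 anomaly pools (mm) -- NOT the study's data
nat = rng.normal(0.0, 2.4, size=7011)   # pooled natural-forcings sample
all_ = rng.normal(0.8, 2.7, size=5752)  # calibrated present-day sample

threshold = 6.5  # hypothetical event threshold in model space
p_nat, p_all = exceedance_prob(nat, threshold), exceedance_prob(all_, threshold)
print(f"return times: NAT ~{1 / p_nat:.0f} yr, ALL ~{1 / p_all:.0f} yr")
print(f"risk ratio  : {p_all / p_nat:.1f}")
print(f"ALL 5-95% CI: {bootstrap_ci(all_, threshold)}")
```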
A way around this would be to assume time-varying distribution parameters (Maraun et al., 2009; van Oldenborgh et al., 2016), which in our case would be challenging, as we find a distinct change not only in the mean state of Rx01 but also in its variability. Here, we propose a new approach that increases the sample size and realistically represents the mean state and variability of Rx01. The new method employs the long CTL simulations of the pre-industrial climate and adjusts them to the mean state and variance of the desired climatic period. Although scaling-based techniques have been previously used in bias-correction methodologies, here we extend the application to event attribution research. For the present-day climate, we adjust (by simple scaling) the CTL SD of the models to the one that best represents year 2020 (Figure 2b) and then shift the adjusted CTL data to the 2020 mean state (yellow line in Figure 2a). Similarly, for the end of the century, we adjust the SD to an estimated value of 3.2 mm and shift to the mean state of 2100. Using this approach, we calibrate the large sample of CTL data to the specific climatic parameters of the desired period obtained from the ALL experiment. Hence, we estimate the ALL probabilities from a much larger sample of 5,752 Rx01 values. Probabilities of threshold exceedance are computed with simple ranking statistics, while, as in previous work, the uncertainty range is estimated with a Monte Carlo bootstrap procedure by resampling the modelled Rx01 data 1,000 times. The adjustment of the CTL data could also be implemented based on parameters of the GEV distribution derived from ALL data within a time-window of the reference climate. Such an approach would be sensitive to the representation of noise in the chosen window, and we find (not shown here) that it yields probabilities consistent with our methodology.

Figure 3. Model evaluation. (a) Historical trends to the present day from different starting points, computed with observations (black) and simulations of the ALL experiment (red). The thick red line represents the ALL ensemble mean and the pink area represents the modelled range. (b) The distribution of detrended Rx01 anomaly data over the observational period (1891-2020) constructed with HadUK-Grid (grey bars) and pooled data from the individual ALL simulations (red line). The p-value of a Kolmogorov-Smirnov test marked on the panel indicates the two distributions are not significantly different. (c) Power spectra from the ALL simulations (red) and HadUK-Grid (black).

| ATTRIBUTION

The change in the Rx01 distribution under anthropogenic influence is shown in Figure 4a. We note a temporal shift in the overall distribution towards higher rainfall amounts relative to the NAT climate, as well as an increase in its spread. Exceeding high thresholds becomes much more common by 2100. Return times of extreme events and changes in their likelihood (risk ratios) are reported in Table 1 and illustrated in Figure 4b,c. Human influence is estimated to have made it 2.5 times more likely to set a new record in year 2020. The models suggest that Rx01 at least as high as the 2020 record would occur once in about 300 years in the natural world, but the return time has now decreased to about a century and will further decrease to only about 30 years by 2100. Extremes like those in 1986 and 2020 become relatively common by the end of the century, with approximately a 10-fold increase of their chance of occurrence relative to the NAT climate.
Such events are estimated to be 2-3 times more common in the present-day climate. We test the new methodology by also computing the present and future probabilities of extreme events from 30-year time slices of the ALL simulations, as described in Section 2.5. The probability estimates from the ALL time-slices are very similar to those obtained from CTL, but have a larger uncertainty range due to the smaller samples (Figure 4b). A second test (not shown) that applies the new methodology to CTL data to infer NAT probabilities also confirms results similar to the ones reported for NAT in Table 1. Finally, we test the sensitivity of the probabilities derived with the CTL data to the specified levels of variance used for the data calibration. Figure 2b indicates that there is indeed some uncertainty in the SD of Rx01. For the present-day climate, we shift the 2020 SD level marked on Figure 2b by ±0.1 mm and recalculate the probabilities. This only moderately increases the uncertainty range of the return time, to 82-192 years for the 2020 threshold and 46-85 years for the 1986 threshold. A smaller effect is found for the future uncertainties. We thus conclude that our approach offers a reliable alternative way to investigate past, present, and future risks of extreme events.

Figure 4. Attribution results. (a) Distributions of United Kingdom's Rx01 anomaly in the natural climate (green), the climate of 2020 (red), and 2100 (magenta). The black and grey vertical lines mark the anomalies in years 2020 and 1986. (b) Return time estimates for events with Rx01 anomalies exceeding those in 2020 (left) and 1986 (right). Different colours represent different types of climate (green: natural climate; red: present day; magenta: end of century). The vertical bars mark the 5-95% uncertainty range and the horizontal lines the best estimates. Thinner bars for present and future return times are estimates from time-slices of the ALL simulations rather than the new methodology. (c) Distributions of the risk ratio (increase in the likelihood of extremes relative to the NAT climate) for the present day and the end of the century. Extreme events are defined as exceedances of the Rx01 in 2020 (blue) and 1986 (orange).

| DISCUSSION

Our analysis adds to the evidence of human influence leading to more extreme rainfall in the United Kingdom. This is the first event attribution study to examine changes in the wettest day of the year and establish that, despite the large variability, a signal of more frequent extremes has emerged and will continue to intensify in coming decades. Human influence is shown to have had a clearer effect on the Rx01 variability than on its historical trend. Nevertheless, a prominent and steady rise in the mean Rx01 is projected during the course of this century under SSP2-4.5, giving rise to higher trends going forward. Changes in the mean state and spread of the Rx01 distribution made it about 2.5 times more likely to hit a new record in 2020, while by 2100 such an event is estimated to occur every few decades. There are of course limitations to our analysis, stemming, for example, from the effect of variability that partly obscures the climate change signal, methodological uncertainties like the variance levels used for the scaling of CTL data, model limitations, or uncertainties in future emission pathways. Although several of these limitations have been explored and are reflected in the reported uncertainty estimates, future research should aim to further reduce their impact. A minimal sketch of the calibration step that underpins the reported probabilities is given below.
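As a concrete illustration, the scale-and-shift calibration of the CTL sample might look as follows. The 3.2 mm end-of-century SD is quoted in the text; the CTL SD and the two mean states are invented here, since in the study they are read off the smoothed ALL timeseries and SD curves of Figure 2.

```python
import numpy as np

def calibrate_ctl(ctl, target_mean, target_sd):
    """Scale a long pre-industrial (CTL) sample to a reference climate.

    The CTL anomalies are rescaled so that their standard deviation matches
    the target period, then shifted to the target mean state.
    """
    ctl = np.asarray(ctl, dtype=float)
    scaled = (ctl - ctl.mean()) * (target_sd / ctl.std(ddof=1))
    return scaled + target_mean

# Illustrative numbers only (not the paper's fitted values)
rng = np.random.default_rng(0)
ctl = rng.normal(0.0, 2.3, size=5752)        # pooled pre-industrial Rx01 anomalies
present = calibrate_ctl(ctl, target_mean=0.8, target_sd=2.7)
future = calibrate_ctl(ctl, target_mean=1.9, target_sd=3.2)  # 3.2 mm SD by 2100
```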
The event attribution methodology introduced in our study enables a better utilisation of model data, obtaining information from experiments with historical forcings to adjust long simulations of the pre-industrial climate. Comparison with the common alternative approach of estimating probabilities from short time-windows of the historical simulations shows not only consistency but also a reduction in the uncertainty range. The latter is a major advantage of the new approach, which would be better demonstrated in studies of rarer extremes and/or smaller ALL samples. In this article, we attempt to provide a best estimate of the change in the likelihood of extremely wet days from an ensemble of state-of-the-art coupled climate models. A comparison with analyses from attribution systems with a quasi-operational set-up, like the one pioneered by the Hadley Centre (Ciavarella et al., 2018), would be a useful future extension of this work once the necessary simulations become available. It is important to note, however, that differences in the framing may be reflected in the results; the Hadley system is built on an atmospheric model and therefore provides probabilities conditioned on the observed oceanic state. Despite framing differences, all event attribution studies help form a solid scientific basis for decision-making that can aid the United Kingdom's adaptation to the most adverse impacts of climate change.

ACKNOWLEDGMENTS

This work was supported by the Met Office Hadley Centre Climate Programme funded by BEIS and Defra and the EUPHEME project, which is part of ERA4CS, an ERA-NET initiated by JPI Climate and co-funded by the European Union (Grant 690462).
Visualization of molecular composition and functionality of cancer cells using nanoparticle-augmented ultrasound-guided photoacoustics

Assessment of molecular signatures of tumors in addition to their anatomy and morphology is desired for effective diagnostic and therapeutic procedures. Development of in vivo imaging techniques that can identify and monitor molecular composition of tumors remains an important challenge in pre-clinical research and medical practice. Here we present a molecular photoacoustic imaging technique that can visualize the presence and activity of an important cancer biomarker - the epidermal growth factor receptor (EGFR) - utilizing the effect of plasmon resonance coupling between molecularly targeted gold nanoparticles. Specifically, spectral analysis of photoacoustic images revealed profound changes in the optical absorption of systemically delivered EGFR-targeted gold nanospheres due to their molecular interactions with tumor cells overexpressing EGFR. In contrast, no changes in optical properties and, therefore, photoacoustic signal were observed after systemic delivery of non-targeted gold nanoparticles to the tumors. The results indicate that multi-wavelength photoacoustic imaging augmented with molecularly targeted gold nanoparticles has the ability to monitor molecular-specific interactions between nanoparticles and cell-surface receptors, allowing visualization of the presence and functional activity of tumor cells. Furthermore, the approach can be used for other cancer cell-surface receptors such as human epidermal growth factor receptor 2 (HER2). Therefore, ultrasound-guided molecular photoacoustic imaging can potentially aid in tumor diagnosis, selection of customized patient-specific treatment, and monitoring of the therapeutic progression and outcome in vivo.

Introduction

Molecular imaging techniques capable of good penetration depth in living tissue remain an important challenge in basic and clinical science, including modern biology and medicine [1-4]. Optical imaging can provide an unprecedented wealth of molecular-specific information. However, tissue turbidity limits the penetration depth of light in vivo to a few hundred micrometers for high-resolution imaging modalities such as confocal microscopy, optical coherence tomography (OCT), or two-photon fluorescence [5-8]. Approaches based on diffusely scattered light, such as diffuse optical tomography (DOT), can extend this limit to several centimeters, but they suffer from low resolution and rely on complex reconstruction algorithms that require a priori knowledge of tissue optical properties. A unique solution to this problem is the recently emerging photoacoustic imaging technique that combines optical excitation and ultrasound detection [9-12]. This imaging approach relies on "one-way" propagation of diffusive photons into the tissue, where the photoacoustic signal is generated through thermal interaction of pulsed laser light with photoabsorbers. Hence the contrast mechanism in photoacoustic imaging is primarily related to the optical absorption properties of the tissue being imaged. Beyond the depth of ballistic photons, the spatial resolution of photoacoustic imaging is determined by the ability of the ultrasound transducer to resolve the three-dimensional distribution of photoabsorbers that generate photoacoustic transient waves.
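The absorption-based contrast mechanism can be summarized in one line: for a sufficiently short pulse, the initial pressure rise is commonly written p0 = Γ·μa·F, the product of the Grüneisen parameter, the local optical absorption coefficient, and the laser fluence. The sketch below evaluates this textbook relation with round, illustrative numbers (none of the values are measurements from this work).

```python
def initial_pressure(gamma, mu_a_per_cm, fluence_mj_cm2):
    """Photoacoustic initial pressure rise p0 = Gamma * mu_a * F, in Pa.

    Unit conversions: mu_a [1/cm] -> [1/m], fluence [mJ/cm^2] -> [J/m^2]
    (1 mJ/cm^2 = 10 J/m^2).
    """
    return gamma * (mu_a_per_cm * 100.0) * (fluence_mj_cm2 * 10.0)

# Illustrative values: Grueneisen ~0.2 for soft tissue, mu_a of a strong
# absorber of a few 1/cm, fluence in the tens of mJ/cm^2 range
p0 = initial_pressure(gamma=0.2, mu_a_per_cm=4.0, fluence_mj_cm2=15.0)
print(f"p0 ~ {p0 / 1e3:.0f} kPa")  # ~12 kPa for these numbers
```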
Photoacoustic imaging can visualize optical absorption properties of tissue at sufficient depth, and salient features of photoacoustic imaging are described in several reviews from recent years [9-15]. Furthermore, a synergistic integration of photoacoustic imaging with clinically available ultrasound imaging systems is also possible and is being pursued [9-12]. Endogenous contrast in photoacoustic imaging is largely limited to hemoglobin and melanin molecules. In other applications, detection of lipid and collagen is possible with photoacoustic imaging [13,16-18]. Detection of other biomarkers or functionality associated with tumors requires the availability of molecular probes or molecular-specific contrast agents targeted to these biomarkers [14,15,19,20]. Plasmonic gold and silver nanoparticles are ideally suited for photoacoustics because of their high absorption cross-sections [10,11,20]. Molecular specificity is conferred to these plasmonic nanoparticles via conjugation to probe molecules such as antibodies [21,22]. However, the sole addition of a targeting moiety is often not sufficient for sensitive molecular imaging, because of the background signal generated by the non-specific delivery of contrast agents to the imaging site. In cancer imaging, non-specific delivery of contrast agents is related to the leaky vasculature of the tumor, i.e., contrast agents accumulate in the tumor site primarily due to the enhanced permeability and retention (EPR) effect. Extensive blocking and washing steps, as used in immunohistological protocols, cannot be applied in vivo to remove non-specific binding. Multiple innovative strategies have been developed to enable highly specific molecular imaging - for example, in fluorescence imaging, various activatable probes and beacons are used to provide signal only in the presence of a biomarker of interest, or to detect a change in the signal on a cue from the tumor micro-environment [23-25]. However, again, most of these approaches are limited to optical modalities that do not possess sufficient penetration depth in vivo. We and other groups have previously shown that targeted plasmonic nanoparticles by themselves can be used in a similar way as activatable contrast agents in molecular optical imaging [10,14,15,19,20,26-28]. The approach is based on the phenomenon of plasmon resonance coupling between closely spaced noble metal nanoparticles [26,27,29,30]. The coupling results in strong optical changes, including a red spectral shift and broadening of the nanoparticle extinction spectra [21,26,27,29-31]. The formation of closely spaced assemblies can be mediated by specific interactions between targeted gold nanoparticles and a biomolecule of interest such as a cancer biomarker, e.g. the epidermal growth factor receptor (EGFR) [31]. Confocal reflectance and dark-field optical imaging of EGFR-positive cancer cells labeled with anti-EGFR antibody conjugated spherical gold nanoparticles showed a red shift of more than 100 nm in the nanoparticle plasmon resonance frequency [27,31]. Further studies revealed that the observed optical changes are associated with EGFR activation and trafficking - key signaling pathways that determine cell behavior in normal and cancerous tissue [31].
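A crude way to picture why the coupling matters for imaging is to model the monomer and aggregate extinction as Lorentzian bands and compare them in the near-infrared, where the photoacoustic readout of this study takes place. The band positions and widths below are rough values chosen for illustration; only the ~520 nm monomer peak and the red shift of more than 100 nm are taken from the text.

```python
import numpy as np

wl = np.linspace(450, 900, 451)  # wavelength grid, nm

def lorentzian(wl, peak_nm, width_nm):
    """Unit-height Lorentzian band centered at peak_nm with FWHM width_nm."""
    return 1.0 / (1.0 + ((wl - peak_nm) / (width_nm / 2.0)) ** 2)

# Isolated 20-nm gold spheres: narrow plasmon band near 520 nm
monomer = lorentzian(wl, peak_nm=520.0, width_nm=60.0)
# Receptor-mediated aggregates: red-shifted, broadened coupled-plasmon band
aggregate = 0.6 * lorentzian(wl, 520.0, 60.0) + 0.8 * lorentzian(wl, 650.0, 200.0)

i720 = np.searchsorted(wl, 720.0)
print(f"extinction at 720 nm, monomer vs aggregate: "
      f"{monomer[i720]:.2f} vs {aggregate[i720]:.2f}")
```

Even in this toy model the aggregate band carries an order of magnitude more extinction at 720 nm than the monomer band, which is the spectral window exploited in the experiments below.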
As activated EGF receptors undergo dimerization and further aggregation in the plasma membrane, followed by internalization through endocytosis [31], the EGFR-targeted AuNPs associated with this process undergo a progressive change in optical properties (i.e., a change in optical absorption), as schematically depicted in Fig. 1. Therefore, antibody-targeted gold nanoparticles undergo dramatic optical changes upon binding to activated EGF receptors and endocytosis in live cells. We previously demonstrated molecular-specific photoacoustic imaging in three-dimensional cell culture phantoms and ex vivo tissue [21]. This approach is not unique to EGFR molecules and has also been applied to monitoring actin reorganization, detection of fibronectin-integrin complexes, and imaging membrane morphology in live cells [30,31]. Here, we report multi-wavelength photoacoustic imaging of cancer cells in a xenograft murine tumor model in vivo using the effect of plasmon resonance coupling of EGFR-targeted gold nanoparticles. Specifically, when targeted AuNPs bind to EGFR molecules, trafficking of the labeled receptors results in receptor-mediated aggregation of AuNPs inside endosomal compartments, causing plasmon resonance coupling between closely spaced AuNPs (Fig. 1). This leads to a strong increase in absorption (and thereby an increase in the photoacoustic signal) in the red spectral region [26,27,29,31]. These changes in optical properties provide the unique opportunity for photoacoustic imaging to monitor molecular-specific interactions between nanoparticles and cell-surface receptors, allowing visualization of the presence and functional activity of viable tumor cells.

Photoacoustic imaging system

The combined ultrasound and photoacoustic imaging system (Fig. 2a) was based on an ultrasound engine (Winprobe Corporation, North Palm Beach, FL, USA) interfaced with either a Q-switched Nd:YAG laser (532 nm wavelength, 5 ns pulses, 20 Hz pulse repetition frequency) or a tunable OPO laser system (680-950 nm wavelength, 7 ns pulses, 10 Hz pulse repetition frequency). The laser fluences were within 10-20 mJ/cm², in accordance with the American National Standards Institute (ANSI) safe exposure level for human skin. To image the tumor, an integrated imaging probe consisting of a 7.5 MHz center frequency ultrasound transducer (14 mm wide, 128-element linear array) and a bundle of optical fibers for laser light delivery (Fig. 2b) was used.

Fig. 1 (caption, in part): ... AuNPs upon interaction with a cancer cell overexpressing EGFR. Activated EGF receptors undergo dimerization and further aggregation in the plasma membrane, followed by internalization through endocytosis. The EGFR-targeted AuNPs associated in this process undergo a progressive color change (i.e., change in optical absorption) from green to red and near-infrared, as depicted in the absorbance spectra at various stages of AuNP interaction with the cancer cell.

The axial, lateral, and elevational resolution of the transducer are 250, 300, and 500 µm, respectively. The integrated probe was attached to a three-dimensional positioning stage to facilitate volumetric ultrasound and photoacoustic imaging by moving the probe in steps of 400 µm in the horizontal direction (orthogonal to the imaging plane). The light source, ultrasound imaging system, and the positioning axes were interfaced to capture spatially co-registered RF ultrasound and photoacoustic data, as described elsewhere [32]. Ultrasound and multi-wavelength photoacoustic RF data were acquired at each position of the integrated probe and stored for off-line processing.
Multi-wavelength photoacoustic image analysis

The collected RF data were beamformed using a delay-and-sum approach, as published previously. The absolute values of the photoacoustic analytic signals obtained at various wavelengths were normalized to compensate for the wavelength-dependent laser fluence output. The multi-wavelength photoacoustic images were analyzed using intraclass correlation to identify regions of oxygenated/deoxygenated blood and nanoparticles [28]. Intraclass correlation is a method used in assessing agreement between different observers or different methods when used on the same set of subjects. The normalized spectrum of the photoacoustic signal at a pixel was compared individually to the known spectra of endocytosed nanoparticles [21] (Fig. 3) and oxygenated/deoxygenated hemoglobin [33]. The procedure was repeated for every pixel in the multi-wavelength photoacoustic image stack, and the resulting correlation coefficients were used to form an image. To obtain the spatial distribution of endocytosed nanoparticles, oxygenated blood, and deoxygenated blood, the correlation maps were thresholded (correlation coefficients greater than 0.75 were considered positive) and pseudocolored orange, red, and blue for endocytosed AuNPs, oxygenated hemoglobin, and deoxygenated hemoglobin, respectively. The distribution map was displayed over the ultrasound image, i.e., the ultrasonic image was displayed if the correlation-based signal was smaller than a user-defined threshold, or vice versa. A minimal sketch of this per-pixel classification is given below.

Preparation of bioconjugated AuNPs

Gold nanoparticles (20 nm in diameter) were prepared using citrate reduction of tetrachloroauric(III) acid (HAuCl4) under reflux (Frens method). A TEM image of the nanoparticles is shown in the Fig. 3 inset. Anti-EGFR monoclonal antibody (C225, Sigma) was conjugated to the AuNPs using the procedure described by Kumar et al. [30]. Briefly, the carbohydrate moiety on the Fc region of the antibody (Ab) was oxidized to an aldehyde by addition of 100 mM NaIO4 to a 1 mg/mL Ab solution in HEPES (1:10 by volume). The Ab was then allowed to react with a hydrazide-PEG-dithiol heterobifunctional linker (Sensopath Technologies, Inc.), where the hydrazide portion of the linker covalently bonded to the aldehyde portion of the Ab, yielding an exposed dithiol moiety which could react strongly with the AuNPs. The Ab-linker was centrifuged in a 100 kD MWCO filter (Amicon) and resuspended in 40 mM HEPES at pH 8 (5 mg/mL). The Ab-linker was mixed with AuNPs (12 mL, 4 × 10^10 particles/mL) at a 1:1 volume ratio and reacted on a shaker for 30 min at room temperature. Any remaining bare gold was capped with mPEG-SH (1.2 mL, 10^-5 M, 5 kD, Creative PEGWorks) and the particles were washed via centrifugation at 1500 g in the presence of PEG (15 kD, Sigma). The non-targeted AuNPs were prepared by reacting AuNPs (12 mL, 4 × 10^10 particles/mL) with mPEG-SH (1.2 mL, 10^-5 M, 5 kD, Creative PEGWorks). The resulting PEGylated particles were also washed in the presence of PEG via centrifugation at 1500 g. Both the EGFR-targeted and non-targeted (PEGylated) AuNPs were sterile filtered before being administered to the nude mice. The molecular specificity of the EGFR-targeted AuNPs was also tested with cells possessing positive expression of EGFR (A431 cells) using previously published protocols [21]. The absorbance spectra of the A431 cells incubated for 30 min with non-targeted AuNPs or EGFR-targeted AuNPs are shown in Fig. 3.
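The per-pixel spectral correlation mapping described in the image-analysis subsection above can be sketched in a few lines. This is a toy illustration: only the 0.75 acceptance threshold comes from the text, while the function name, the array shapes, and the use of a plain Pearson correlation per pixel (in place of the paper's intraclass correlation) are simplifying assumptions.

```python
import numpy as np

def classify_pixels(pa_stack, ref_spectra, threshold=0.75):
    """Correlate each pixel's multi-wavelength PA spectrum with references.

    pa_stack    : (n_wavelengths, ny, nx) photoacoustic image stack
    ref_spectra : dict name -> (n_wavelengths,) reference absorbance spectrum
    Returns a dict of boolean maps, one per chromophore, to be pseudocolored
    and overlaid on the ultrasound image.
    """
    n_wl, ny, nx = pa_stack.shape
    pixels = pa_stack.reshape(n_wl, -1)
    # standardize each pixel spectrum across wavelengths
    pixels = (pixels - pixels.mean(0)) / (pixels.std(0) + 1e-12)
    maps = {}
    for name, ref in ref_spectra.items():
        ref_n = (ref - ref.mean()) / (ref.std() + 1e-12)
        corr = (pixels * ref_n[:, None]).mean(0)  # Pearson r per pixel
        maps[name] = (corr > threshold).reshape(ny, nx)
    return maps

# Usage: refs = {"endocytosed AuNPs": s_np, "HbO2": s_oxy, "Hb": s_deoxy}
# maps = classify_pixels(stack, refs)  # then pseudocolor orange/red/blue
```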
In Fig. 3, the cells mixed with PEGylated AuNPs have an absorbance peak around 520 nm, similar to a suspension of isolated gold nanoparticles. The absorbance of cells incubated with EGFR-targeted AuNPs has the peak red-shifted and broadened due to EGFR-mediated aggregation of the gold nanoparticles [21]. These absorbance spectra were used as a reference in the spectral analysis of the multi-wavelength photoacoustic images. Approximately 250 µL of nanoparticle solution, containing about 500 µg of gold, was injected directly into the tumor region using a 27-gauge needle, or injected into the mouse blood stream using a tail vein catheter (MTV01, SAI Infusion Technologies).

Murine tumor models

The tumors were inoculated in immunodeficient, albino-colored female Nu/Nu mice. The mice were anesthetized with an intraperitoneal injection of Avertin. During the imaging procedures, the mouse was anesthetized using isoflurane gas. Isoflurane was chosen due to its milder effect on the mouse's metabolism. Specifically, a dose of 1% isoflurane mixed with pure oxygen at a 1 L/min flow rate was used. The body temperature of the animal was maintained at 37 °C using a heating pad (THM 100, Indus Instruments). The heart rate of the mouse was monitored every 15 min during the imaging procedure to ensure well-being. A commercially available ultrasound gel (Aquasonic Gel, Parker Laboratories) was applied to the tumor region to establish contact between the mouse skin and the flexible window (polyethylene film) at the bottom of a custom-built water tank (Fig. 2b). The transducer with the fiber bundle was placed in the water tank to facilitate in vivo imaging.

Histological analysis details

The tumors were extracted after euthanizing the mice via approved protocols. The tumors were stored in formalin for 24 h and transferred to 70% ethanol for storage until tissue processing. The tumors were placed in paraffin cassettes for immunohistochemistry in a similar orientation as the imaging cross-section, to facilitate qualitative comparison of the ultrasound and photoacoustic images with the histology images. Hematoxylin and eosin (H&E) staining was performed to identify the tissue structure. The hematoxylin colors basophilic structures (such as the cell nucleus) blue-purple, and the eosin dye colors eosinophilic structures (intracellular or extracellular protein such as cytoplasm) bright pink. A silver staining procedure [34] was utilized to identify the presence of AuNPs in the tumor tissue. When AuNP-labeled tissues are silver stained, the produced bimetallic nanoparticles can be easily observed using bright-field microscopy, even in the presence of standard histological stains. A counterstain using nuclear fast red was performed to enhance contrast between the AuNPs and the tissue when observed under a bright-field microscope.

Results and discussion

In the first set of mice, the molecular-specific nanoparticles were injected directly into the tumor region to evaluate whether photoacoustic imaging (Fig. 4) can monitor interactions between tumor cells and AuNPs. Mice with subcutaneous tumors formed using EGFR-expressing A431 human keratinocytes were injected with either EGFR-targeted AuNPs or PEGylated AuNPs directly into the tumors. Ultrasound and multi-wavelength photoacoustic imaging of the central cross-section of the tumor was performed before and immediately after intratumoral injection. In addition, 3D images were acquired at 2 h and 4 h after intratumoral injection of the nanoparticles. Ultrasound images show the mouse skin as a hyperechoic region, with the tumor region demarcated by a white inset in Fig. 4a and d.
The corresponding photoacoustic images (obtained at 720 nm wavelength illumination, i.e., away from the ~520 nm peak absorption of individual gold nanospheres) of tumors injected with either PEGylated AuNPs or EGFR-targeted AuNPs are shown in Fig. 4c and f, respectively. Clearly, the tumor injected with EGFR-targeted AuNPs shows an increase in the photoacoustic signal at 720 nm (Fig. 4f), whereas no increase in the photoacoustic signal was observed in the tumor injected with PEGylated AuNPs (Fig. 4c). This change in the photoacoustic signal is due to the molecular interactions of EGFR-targeted AuNPs with EGFR-overexpressing tumor cells (Fig. 4f). No change in the photoacoustic signal was observed in the tumor injected with PEGylated AuNPs (i.e., no increase, while the background signal remains the same over time). Given that the non-targeted nanoparticles do not interact with cancer cells, there is no receptor-mediated endocytosis and, therefore, no plasmon coupling, and no red-shift in the optical absorption of the PEGylated AuNPs in the tissue was observed. Quantitative analysis of the temporal changes in the photoacoustic signals indicates a clear enhancement at 720 nm in the tumor injected with EGFR-targeted AuNPs (Fig. 4g). The photoacoustic signal amplitude was normalized with respect to the photoacoustic signal obtained at 532 nm irradiation, to compensate for possible differences in the AuNP concentration in the imaging planes. Furthermore, the photoacoustic signal from the tumor injected with EGFR-targeted AuNPs increased as a function of time (Fig. 4g), due to the continuous trafficking of EGFR-targeted AuNPs from the cell membrane to early endosomes and later to late endosomes/multivesicular bodies, resulting in a time-dependent red shift in the optical properties of the AuNPs [31]. In contrast, the tumor injected with PEGylated AuNPs showed an insignificant temporal change in the photoacoustic signal from the tumor region. The results presented in Fig. 4 suggest that combined ultrasound and photoacoustic imaging has the ability to monitor molecular interactions of the epidermal growth factor receptor (EGFR) with molecular-specific AuNPs. In the next series of experiments, we validated our imaging approach using intravenous administration of AuNPs in three groups of mice (Fig. 5). The first and the second group had EGFR-positive A431 tumors. The third group was inoculated with the EGFR-negative breast adenocarcinoma MDA-MB-435 cell line. The first and the third group were injected with EGFR-targeted AuNPs. In the second group, PEGylated AuNPs were administered at a concentration two times higher than that used in the other two groups. Ultrasound and multi-wavelength photoacoustic images of the tumor regions were acquired at the same spatial locations before and up to 4 h after administration of the AuNPs. Multi-wavelength photoacoustic images were analyzed to identify regions containing endocytosed nanoparticles, oxygenated blood, and deoxygenated blood [28]. In all animals (Figs. 6 and 7), the tumor region was easily identified in the ultrasound images. Typically, the subcutaneous tumor appears as a hyperechoic region (skin, top of the image) with a hypoechoic area denoting the tumor. In the first group of mice, with EGFR-positive tumors injected with molecular-specific AuNPs, photoacoustic images obtained at 720 nm excitation (Fig. 6a) clearly indicate an increase in the photoacoustic signal over time after administration of EGFR-targeted AuNPs. Similar to the results shown in Fig. 4, the photoacoustic images in
Fig. 6a show an increase of the photoacoustic signal amplitude over time and, therefore, indicate both successful delivery of the nanoparticles and their receptor-mediated interaction with cancer cells.

Fig. 5. In this study three groups of mice were used. The first and the second group of mice had xenografts of A431 cells that overexpress EGFR. The third group of mice, serving as a control, was inoculated using EGFR-negative MDA-MB-435 cells. The first and the third groups were injected with EGFR-targeted AuNPs. PEGylated AuNPs were administered to the second group. Over time, nanoparticles extravasated into the tumor via the leaky tumor vasculature. Despite significant accumulation of AuNPs in the tumor region, no plasmon coupling effect was observed in the tumors from the second and third groups. On the other hand, EGFR-targeted AuNPs underwent receptor-mediated aggregation in tumors belonging to the first group of mice. The receptor-mediated aggregation resulted in a strong red-shift in the optical absorption properties of the AuNPs.

To identify regions associated with either endogenous (oxy/deoxyhemoglobin) or exogenous (endocytosed nanoparticles) chromophores, spectroscopic photoacoustic imaging was carried out (Fig. 6b). The images show a heterogeneous distribution of AuNPs in the tumor region that correlated well with viable tumor regions identified in the adjacent H&E-stained tissue slices (Fig. 6c). Indeed, the central tumor regions were necrotic (e.g., the area outlined by the white square in Fig. 6c), while the peripheral areas of the tumor contained viable cells (e.g., the area outlined by the yellow square in Fig. 6c). Silver staining of gold nanoparticles in tissue slices showed that nanoparticles were present within the entire tumor region (Fig. 6d). However, the observed increase in the photoacoustic signal was localized only to the viable regions of the tumor. This result indicates the sensitivity of our method toward the presence of functional EGFR-expressing tumor cells. Fig. 6e and f show 3D ultrasound and spectroscopic photoacoustic images of the tumor from an animal in the first group before and 4 h after intravenous injection of EGFR-targeted AuNPs, respectively. These 3D images clearly demonstrate that the AuNPs did not have a homogeneous distribution in the tumor. The 3D photoacoustic images, visualized in the context of the anatomical structure of the tumor, provide a functional map of the viable EGFR-expressing tumor cells that interacted with the injected molecular-specific contrast agent. In this particular tumor, the viable regions are located mostly at the tumor periphery, while the necrotic core, or the central region of the tumor, did not have a detectable PA signal. No increase in the photoacoustic signal at NIR wavelengths was observed in the other two groups after AuNP administration. Spectroscopic photoacoustic images (Fig. 7a and c) do not indicate the presence of endocytosed nanoparticles, despite the accumulation of AuNPs in the tumor, as shown by silver staining of tissue sections (Fig. 7b and d). The mice in the second group had EGFR-positive tumors, and the animals were injected with non-specific PEGylated AuNPs. Consequently, there were no molecular/cellular interactions between the PEGylated nanoparticles and the tumor cells, and the 720-nm photoacoustic images did not show an increase in photoacoustic response, in spite of injecting double the amount of AuNPs in the second group.
Finally, the third group of mice had tumors that did not have EGFR expression, and the injected EGFR-targeted AuNPs simply extravasated into the tumor but did not undergo any nano-molecular interactions with tumor cells. Therefore, there was no plasmon resonance coupling between the nanoparticles and no change in the photoacoustic signal at 720 nm. The results presented in Figs. 6 and 7 clearly indicate the molecular specificity of the EGFR-targeted AuNPs and the ability of photoacoustic imaging to visualize depth-resolved nano-molecular interactions in vivo. To further quantify these results, the photoacoustic signals obtained at 720 nm wavelength illumination in the viable and necrotic regions of the tumors were compared at various time points before and after injection of the AuNPs (Fig. 8). In each case, to compensate for the difference in AuNP uptake between the tumors, the 720-nm photoacoustic signals were normalized with the photoacoustic signals obtained at 532 nm. Clearly, in the first group (Fig. 8a), the photoacoustic signal in the viable cell region of the tumors increased over time, indicating delivery and receptor-mediated interaction of the targeted nanoparticles with cancer cells. On the other hand, the necrotic region of the same tumors did not show an increase in the photoacoustic contrast at 720 nm. In groups 2 and 3 (Fig. 8b and c), the photoacoustic signal did not increase over time in either the viable or necrotic regions of the tumors (p-value > 0.05). Plasmonic nanoparticles with different optical absorption properties can be conjugated to various cancer-specific biomarkers such as growth factor receptors and integrins [22,35]. Ultrasound-guided spectroscopic photoacoustic imaging could be used to image the multiplex labeling and interactions of nanoparticles with cancer cells in vivo [28]. Identification of the molecular basis of tumor dissemination is driving drug development to discover or synthesize inhibitors that block key pathways in this process. Ultrasound-guided photoacoustic imaging can aid in understanding the molecular signature of cancers, thus guiding the implementation of therapeutic procedures specific to a particular tumor. Furthermore, studies have shown that AuNPs are promising phototherapeutic agents. Using our imaging technique, guidance and monitoring of photothermal therapeutic procedures could also be feasible [32,36,37].

Conclusion

We demonstrated that molecularly targeted AuNPs and spectroscopic photoacoustic imaging have the potential to detect cancer cells in vivo based on their molecular functionality. In particular, EGFR-targeted AuNPs underwent molecular-specific endocytosis leading to plasmon resonance coupling. This phenomenon resulted in an increase in the optical absorption of the AuNPs in the NIR region and hence an increase in the photoacoustic signal. We evaluated the feasibility of photoacoustic imaging in the detection of plasmon resonance coupling of targeted AuNPs in tumors expressing EGFR. Multi-wavelength photoacoustic images obtained before and after intravenous injection of EGFR-targeted AuNPs clearly showed a significant increase in red-NIR absorption of the tumor region due to the formation of AuNP aggregates. Furthermore, we demonstrated the 3D capabilities of the imaging system in obtaining molecular signatures of tumor cells.

Conflict of interest statement

The authors state no conflict of interest.

Stanislav Emelianov received his B.S. and M.S. degrees in physics and acoustics in 1986 and 1989, respectively, and Ph.D.
degree in physics in 1993 from Moscow State University, Russia. He is currently a Professor of Biomedical Engineering at The University of Texas at Austin, and an Adjunct Professor of Imaging Physics at The University of Texas M.D. Anderson Cancer Center in Houston. Dr. Emelianov directs the Ultrasound Imaging and Therapeutics Research Laboratory - home to research projects focused on basic science, pre-clinical studies, and clinical translation of medical instrumentation, signal/image processing algorithms, and imaging contrast/therapeutic agents. Dr. Emelianov's research interests are in the areas of intelligent diagnostic imaging and patient-specific image-guided therapeutics, including cancer imaging and diagnosis, the detection and treatment of atherosclerosis, the development of imaging and therapeutic nanoagents, guided drug delivery and controlled release, simultaneous anatomical, functional, cellular and molecular imaging, multi-modal imaging, and image-guided therapy.
Forty Years of Laser-Induced Breakdown Spectroscopy and Laser and Particle Beams

The laser-induced breakdown spectroscopy (LIBS) technique is one of the most promising laser-based analytical techniques. Coincidentally, the LIBS acronym was proposed by Radziemski and Loree in two seminal papers published in 1981, almost at the same time at which the Laser and Particle Beams journal started its publication. In this contribution, the evolution of the LIBS technique is discussed following a chronological collection of key papers in LIBS, some of which were in fact published in LPB.

Introduction

Laser-induced breakdown spectroscopy (LIBS) is an atomic emission spectroscopic technique based on the spectral analysis of the plasma induced by a pulsed laser beam in gas, liquid, or solid targets to obtain information about the materials under study. The principles of the LIBS technique are deeply rooted in the preexisting knowledge of flame and plasma spectroscopy, which largely precedes the discovery of the laser. After the introduction of the laser, techniques similar to modern LIBS were proposed [1]; however, as in the 1962 Brech and Cross paper, at that time the laser was used essentially for ablating a solid sample, while the excitation of the material was obtained through an electrical spark. The main characteristic of LIBS as it is now practiced is, on the other hand, the use of the laser for obtaining at the same time the sampling of the material and its heating for producing the atomic optical emission [2]. This characteristic is peculiar to the LIBS technique and brings the exceptional advantage of operating on untreated materials in a very short time, which in turn allows the use of the technique for remote in situ analysis in hostile environments. On the other hand, the use of a single tool for sampling and excitation prevents the possibility of independent optimization of the two processes, leading to analytical performances that are usually considered modest with respect to other conventional laboratory spectrochemical techniques. A typical LIBS experiment involves the use of a pulsed laser, typically Nd:YAG at the fundamental wavelength of 1064 nm, emitting pulses of a few nanoseconds with energies of several tens of millijoules and maximum repetition rates of a few hertz. The light emitted by the laser-induced plasma is collected and sent to a spectrometer (gated or ungated, narrow- or wideband) where the signal is analysed using a suitable delay after the laser pulse, to reduce the continuum bremsstrahlung emission (see Figure 1). Many alternative experimental configurations have been realized, though. A description of some of them is given in [3]. A typical LIBS spectrum, acquired with a broadband spectrometer, is shown in Figure 2. The attribution of the emission lines to the corresponding atomic species is usually done manually, based on the information contained in the NIST database of atomic lines [4], although methods for automatic identification of the lines have also been proposed [5]. In the following, we will discuss the exceptional evolution of the LIBS technique in the last 40 years, also highlighting the important role that the Laser and Particle Beams journal has had in this evolution.

1981-1990: The Early Years

The first papers where the acronym LIBS was originally proposed were published by Radziemski and Loree [6,7] in 1981.
The two authors, researchers at the Los Alamos National Laboratory (Los Alamos, New Mexico, USA), outlined the principle of the LIBS technique in two companion papers, the first dealing with time-integrated detection of the plasma emission, the second discussing the analytical advantages of time-resolved detection. The authors analysed by LIBS sodium and potassium in a coal gasifier product, airborne beryllium, and phosphorus, sulphur, fluorine, and chlorine (the latter three elements being particularly complex to detect by LIBS) in atmosphere. For the next 10 years, the research on LIBS remained essentially confined to North America; with the arrival of David Cremers in the Radziemski group, the application of the LIBS technique was extended to many other interesting fields, such as the study of aerosols [8], the analysis of liquids [9], detection of beryllium in air [10] and in beryllium-copper alloys [11], and detection of uranium in solution [12] and cadmium, lead, and zinc in aerosols [13].

1991-2000: Evolution of LIBS

In 1991, the Pisa group published a paper dealing with the quantitative determination of pollutants in air by LIBS [14]. The paper was published in Laser and Particle Beams, and it represented the first work on LIBS published by a group outside the USA. In the following years, other works on LIBS were published in Europe (determination of carbon in steel by the Spanish group of Aragón et al. [15,16]) and in Canada (quantitative analysis of aluminium alloys [17]). In the 1990-2000 decade, several papers on LIBS were published using the "LIPS" (laser-induced plasma spectroscopy) acronym. This occurred mostly in Europe (see, for example, [18]), but some groups in Canada [19] and the USA [20] also adopted this terminology, which was considered more general than the original "LIBS." The LIPS acronym is now deprecated, after the First International LIBS Conference (LIBS 2000), organized by the Pisa group in Tirrenia, Italy [21]. It is nevertheless curious that two of the major contributions to LIBS, which introduced two techniques still widely used nowadays, were in fact referring to LIPS as the name of the technique. The first key paper was published in 1988 by the Sabsabi group in Canada and reported on an alternative experimental configuration in which the laser energy is delivered to the sample surface in two pulses, suitably delayed [22]. The authors reported a considerable intensity enhancement in the spectral signal, which was substantially independent of the interpulse delay. The physical explanation of the enhancement in the double-pulse configuration was given only several years later by the Palleschi group in Pisa [23,24], in terms of the reduced plasma shielding of the plasma produced by the second laser pulse, due to a reduction of the environmental gas density behind the shock wave produced by the first laser pulse. It is worth noting that the essential role of the first shock wave in double-pulse LIBS was first hypothesized by the Russian researcher Sergei Pershin; unfortunately, his research, published in a Russian journal [25], went generally unnoticed, and the author was credited for his original intuition only recently [26]. The second important paper was published by the Pisa group in 1999 and proposed a new procedure for standardless LIBS analysis called calibration-free LIPS (now known as CF-LIBS) [27]. Interestingly enough, an extended description of the method was published the same year in Laser and Particle Beams [28]. The Ciucci et al.
paper is the most quoted research paper (thus excluding books and reviews) in the history of LIBS. Many applications and improvements of the CF-LIBS method have been proposed since the original papers of 1999. One of the most useful procedures is the compensation of self-absorption effects in the LIBS plasma, which produce a non-linear dependence between the analyte concentration and the LIBS line intensity (see Figure 3). The effect of self-absorption in laser-induced plasmas was studied in a key paper by the Winefordner group at the University of Florida in Gainesville, USA [29], but the implications of this research would not be transferred to CF-LIBS until the beginning of the XXI century.

2001-2021: XXI Century LIBS

The first proposal to use the curve-of-growth approach to compensate for self-absorption effects in LIBS plasmas was published by the Pisa group in 2002 [30]. A simple experimental method for evaluating the self-absorption effect and compensating it using a duplicating optical path mirror was proposed by the Gainesville group of Nicolò Omenetto in 2009 [31]. This method is conceptually very simple, although its realization is rather complex and limited to a laboratory setup. A more versatile method was proposed by the Palleschi group [32], as a generalization of the theoretical work of Amamou et al. [33]. According to this method, the degree of self-absorption of a given emission line can be calculated and, if needed, compensated by measuring the intensity and full width at half maximum of the emission line and the plasma electron number density, once the Stark broadening coefficient of the line is known [34]. The method proposed by the Pisa group offers the possibility of measuring the plasma electron number density from the broadening of the Balmer alpha hydrogen line [35]. The problem was studied in 2013 by Pardini et al. [36]. Despite the fact that the Pisa method was initially developed for improving the predictions of the calibration-free LIBS technique, its applications have been extended to many situations in which the self-absorption effects are important (see [37] for a detailed discussion). Particularly important in this framework is the criticism of the "branching ratio" method for assessing the self-absorption effect of a spectral line, published in 2021 by Urbina Medina et al. [38]. The ability to compensate for self-absorption effects opened new perspectives in the determination of fundamental spectroscopic parameters such as transition probabilities [39,40] and Stark broadening coefficients [41,42]. One of the fundamental hypotheses for the application of calibration-free LIBS is the fulfilment of the local thermal equilibrium (LTE) condition [43]. An important result, obtained at the end of the first decade of the century, was the extension of the McWhirter criterion for local thermal equilibrium to non-stationary and non-homogeneous LIBS plasmas [44]. The implications of that research confirmed the necessity, for the use of CF-LIBS, of time-resolved detectors. A paper by Grifoni et al. in 2014 [45] provided a simple tool for extracting time-resolved information from time-integrated spectra, exploiting the differences between two or more spectra taken at different time delays. The first decade of the century witnessed a great improvement in the performance of the LIBS technique, which accelerated its acceptance as a powerful analytical technique (a minimal numerical sketch of the Boltzmann-plot step at the core of CF-LIBS is given below).
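At the heart of CF-LIBS lies the Boltzmann plot: under LTE, ln(Iλ/(gA)) is linear in the upper-level energy with slope -1/(kT), so a line fit yields the plasma temperature without calibration standards. The sketch below uses an entirely made-up line list (intensities, wavelengths, degeneracies, transition probabilities, and energies are synthetic, chosen to be consistent with a temperature near 11,600 K); it illustrates the step, not the full CF-LIBS procedure.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

# Synthetic line list: intensity I, wavelength lam (nm), upper-level
# degeneracy g, transition probability A (1/s), upper-level energy E (eV)
lines = np.array([
    #   I,    lam,   g,  A,     E_up
    [1060.0, 404.6,  9, 3.7e7, 4.35],
    [1690.0, 438.4, 11, 5.0e7, 4.31],
    [  53.0, 527.0,  9, 1.6e7, 6.25],
    [  31.0, 561.6,  7, 2.0e7, 6.70],
])
I, lam, g, A, E = lines.T

# Boltzmann plot: ln(I * lam / (g * A)) is linear in E with slope -1/(kT)
y = np.log(I * lam / (g * A))
slope, intercept = np.polyfit(E, y, 1)
T = -1.0 / (K_B_EV * slope)
print(f"estimated plasma temperature: {T:.0f} K")  # ~11,600 K for this toy data
```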
In 2004, the Gainesville group published a paper in which LIBS was defined as a possible future superstar among the atomic spectrometric techniques [46], and in 2010, David Hahn and Nicolò Omenetto published an important review on basic diagnostics and plasma-particle interactions [47], which at present is the most cited review paper in LIBS history. The group of Javier Laserna at Malaga University, Spain, demonstrated the feasibility of performing stand-off LIBS analysis at long distances (>10 meters) using an open-path configuration [70,71], thus dramatically extending the possible applications of LIBS to the analysis of industrial or environmental samples in hostile environments. Among the many exploitations of the LIBS technique, it is worth mentioning the results of a recent European project (LACOMORE, laser-based continuous monitoring and resolution of steel grades in sequence casting machines), aimed at the optimization of the continuous casting process of steel [72,73]. The project involved the world's two most active groups in LIBS development and research, those of Malaga and Pisa. In the framework of that project, an open-path double-pulse LIBS instrument was successfully used for the remote analysis (at about 6 meters) of steel at temperatures up to 900 °C. An important evolution of the LIBS technique in the first decade of the century was the introduction of ultrashort laser sources for plasma generation [74][75][76]. Femtosecond laser pulses produce neater craters on the sample surface compared to nanosecond lasers, and the resulting spectra are characterized by a lower continuum emission because of the temporal separation between the ablation phenomenon and the creation of the plasma, which inhibits laser-plasma interaction phenomena. The advantages of femtosecond LIBS were exploited for sub-micrometric in-depth measurements [77], analysis of biological tissues [78], and environmental applications [79]. Combined nanosecond-femtosecond [80,81] and femtosecond-femtosecond [82] dual-pulse analysis was also proposed. The possibility of maintaining the laser beam collimation over very long distances, due to the filamentation/self-focusing effects that characterize the propagation of fs-laser beams in the atmosphere [79,[83][84][85], has triggered many innovative applications of stand-off LIBS, including the remote analysis of cultural heritage [86], biological materials [87], geological samples [88], and explosives [89,90]. After the modest results of the proposals to use LIBS for homeland defense in the years following the tragic events of 9/11 [71,[91][92][93][94], LIBS regained some public consideration for its possible application in the field of space exploration [95]. Finally, on August 6, 2012, the NASA Curiosity rover landed on Mars carrying a LIBS instrument [96], which is still operating after more than 10 years of activity and has produced hundreds of thousands of spectra taken on Martian rocks. The data obtained by the LIBS instrument on Mars certainly helped in better understanding Martian geology and contributed to the search for traces of former life on the planet, opening the way to the use of LIBS in two other missions landed on Mars in 2021 (the NASA SuperCam instrument, mounted on the Perseverance rover [97], and the Chinese MarSCoDe, mounted on the Zhurong Mars rover [98]). However, the success of LIBS on Mars benefited even more, if possible, the development of LIBS research on Earth.
In fact, the first LIBS Mars mission spurred the development of compact hand-held LIBS spectrometers, which rapidly arrived on the scientific instrumentation market for metal analysis [99,100], the nuclear industry [101], and environmental applications [102], to cite some of the most important applications. Hand-held LIBS instruments represented an impressive advance with respect to conventional laboratory or mobile LIBS instrumentation [103], but their compact size unavoidably imposes some compromises on the analytical performance of the instrumentation. The limitations of hand-held LIBS hardware require the use of sophisticated chemometric techniques for extracting useful information from the spectra [99,100,104]. The use of advanced chemometric tools (artificial neural networks, ANN) was first introduced in LIBS in 1998 [105]. However, these techniques became widely used only in the second decade of the century. Chemometric techniques can be used for simplification (for example, principal component analysis [106]), classification (self-organizing maps (SOM) [107], support vector machines (SVM) [108], graph clustering (GC) [109], random forest (RF) [110], ANN [111], etc.), and quantification of LIBS spectra (partial least squares (PLS) analysis [112], ANN [113], etc.). Numerous applications based on machine learning and chemometric analysis of LIBS spectra have been proposed in recent years. The most impressive are probably the applications to animal and human health (early detection of cancer, for example [48]), but many other examples can be cited in cultural heritage [114], energy production [115], space exploration [116], and several other fields. LIBS elemental imaging [117,118], which represents one of the most interesting developments of the technique, can produce millions of spectra. The construction and interpretation of LIBS elemental maps also benefit from chemometric algorithms [119]. The growing complexity of the chemometric algorithms currently used in LIBS has stimulated research aimed at obtaining a better interpretability of the results [120,121]. An up-to-date description of the most advanced chemometric techniques can be found in [122]. Among the emerging new approaches to LIBS analysis, it is worth mentioning two interesting variations of LIBS which were proposed in the second decade of the century. In 2011, Rick Russo proposed a technique for isotopic analysis by LIBS based on the detection of molecular emission in the spectrum. The isotopic shift of diatomic oxides or fluorides of the elements is typically larger than that of the corresponding atoms, thus allowing an easier separation of the characteristic emission patterns of the different isotopes. The technique was called laser ablation molecular isotopic spectrometry (LAMIS) [123], and its effectiveness was demonstrated for the isotopic analysis of several interesting elements [124]. Another very interesting alternative approach to LIBS analysis was suggested by Alessandro De Giacomo and his group in Bari, Italy, in 2013. De Giacomo et al. reported on the use of nanoparticles to enhance the LIBS signal on metal targets [125]. The enhancement observed was comparable to that obtained in double-pulse LIBS, but with an experimental apparatus that is, in principle, simpler.
The mechanism of the enhancement in nanoparticle-enhanced LIBS (NELIBS) is very different from the one observed in DP-LIBS (total ablated mass, plasma electron density, and temperature are found to be very similar between NELIBS and conventional LIBS at the same energy). The mechanism hypothesized by the inventors of the technique involves a much greater atomization of the ablated mass in NELIBS, produced by the intense electric field created between neighbouring nanoparticles. A model of the phenomenon has recently been published [126], which also explains the dependence of the enhancement on the distance between the nanoparticles (which in turn is reflected in a strong dependence of the enhancement on the concentration of the deposited nanoparticles). The full potential of NELIBS has probably not yet been explored, but this method could change the conventional narrative of LIBS as a mediocre analytical technique. The applications of NELIBS should be mainly confined to the laboratory, though, because of the need to treat, albeit minimally, the sample under analysis. Proposals have been presented for combining NELIBS and DP-LIBS, so as to possibly cumulate the two enhancements [127]. Finally, in the analysis of insulators, NELIBS offers the advantage of enhancing the signal while reducing surface damage, a feature that makes this approach particularly useful for the study of precious stones [128], for example. The possibility of obtaining readable LIBS spectra from a minimal quantity of ablated material was specifically studied by the Malaga group of Javier Laserna, which demonstrated the feasibility of LIBS for the analysis of single nanoparticles [129] and, in more recent papers [130][131][132], essentially debunked the standard narrative describing LIBS as a low-sensitivity technique (in [130], a limit of detection of 60 attograms was demonstrated in the LIBS analysis of single copper nanoparticles).

Conclusions
The LIBS technique has made incredible progress during its 40 years of existence. Certainly, the evolution of LIBS has been favoured by the technological evolution of the key instrumental parts (lasers, spectrometers, and detectors); however, the improved knowledge of the basic phenomena involved in the laser-sample, laser-plasma, and plasma-sample interactions has helped in better modelling the complex chemical and physical phenomena involved in the generation of the LIBS spectral signal. In this contribution, we have tried to retrace the history of LIBS through its fundamental papers, several of which were published in this journal. The list is far from complete. A search of the Scopus® database with the keywords (LIBS OR LIPS) returns, after removal of non-pertinent works, more than 12,000 papers published on the topic since the first two in 1981, with a growth rate in 2021 of more than 1,000 papers per year. Nevertheless, we hope to have given an idea of the distance that the technique has travelled from the first laboratory applications to the present interplanetary missions. There are still many steps to take and many obstacles to overcome before the LIBS technique can be considered to have grown to its full potential, but we have no doubt that some of these important steps will be presented and commented on in this journal, as happened in the past with several key publications which still represent fundamental milestones in LIBS research.

Data Availability
The relevant data are available from the author upon reasonable request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
2023-07-11T00:12:48.101Z
2023-06-19T00:00:00.000
{ "year": 2023, "sha1": "beff10940c48b9d4505edadd5880c6cf12826e85", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/lpb/2023/2502152.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "916a3b9a4e21d9fccc5dc280530afbb84c74d6f5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
128290973
pes2o/s2orc
v3-fos-license
Influence of Transformational Leadership Style on Global Competitive Advantage through Innovation and Knowledge
Today's business organizations need leaders who can confront a business approach based mainly on knowledge and innovation, so as to smooth the organization's path toward global market trends. Still, organizations sometimes fail to reach a sustainable global competitive advantage because of their imperfect understanding of the relationships between these strategic variables. In fact, and to the best of our knowledge, few studies have considered and examined the direct and indirect associations linking these variables. Our study tries to fill this gap by investigating, theoretically and empirically, how the leader's discernment of several intermediary strategic variables related to knowledge (knowledge slack, absorptive capacity, tacitness) and to innovation shapes the relation between transformational leadership and organizational global competitive advantage. Based on recently published research, we developed a theoretical model that demonstrates the connections linking these variables. Most of the data were collected through secondary resources, including journals, books, and related research papers, while a questionnaire was used to collect data from 50 respondents among the leaders and employees of the Telecommunication Regulatory Authority in the UAE. Model testing informs the findings and delivers conclusions from business leaders that confirm the relations tested in this research.

Introduction
Recently, almost all business organizations have been facing challenges arising from dynamic business environments characterized by new business changes, technological advancements, emerging customer needs, and the need for innovative products. All these challenges lead to an emerging globalization approach. Market competitive advantage facilitates the use of e-commerce and online shopping approaches in the retail customer's shopping experience; this rapid growth of e-marketing, e-commerce, and e-shopping affects customers' shopping decisions and significantly shifts organizations' perspective toward attaining a global market advantage (Nuseir, 2010). Leadership has many definitions, depending on the perspective. Yukl (2002) defines leadership as the process by which a supervisor or manager effectively coordinates and manages subordinates' actions with respect to business goals or objectives. Leaders have different leadership styles, which describe how the leader exerts power or rules over subordinates to achieve a specific goal, either for a specific task or for the overall business. Leadership style, as many extended studies have shown, is a vital factor affecting the ability of knowledge-based organizations to exploit innovative processes and even to initiate a globally competitive environment (Dereli, 2015). A manager who seeks to drive his organization toward innovation and competitiveness performs a distinctive leadership style described as "transformational leadership".
Transformational and transactional leadership styles differ in their actual effect on directing, stimulating, and managing an organization's trend toward more creativity and innovation (Aragón-Correa et al., 2009). The effect of these styles appears in both knowledge management and the innovation process, which together affect the organization's global competitive advantage. The transformational leader manages organizational knowledge through three measurable aspects (generating, allocating, and exploiting) at both the individual and group levels (Bryant, 2003). Knowledge management across organizations, groups, or even individuals is strongly connected with a management perspective defined as a "knowledge-based system". From this viewpoint, the system should be managed across the overall knowledge circle (explicit or tacit), external knowledge (absorptive capacity), and internal knowledge (knowledge slack). This knowledge stream, through the power of the transformational leader and innovative business processes, can have a significant effect on attaining global competitive advantage. It is not only the members' knowledge that is a significant factor, but also the existence of an effective leadership style that enables the organization to use, participate in, and share this knowledge flow innovatively. Innovation can simply be defined as finding new ways to do things never done before, or as the codification of old ideas to generate new ideas or new ways of understanding. In business, this translates into novel products and economic growth that work as a base for overall business performance improvements (Pratt, 2018). Therefore, companies train themselves to mature new innovative skills, expand sustainable capabilities, and upgrade their performance. From this perspective, innovation has been one of the vital essentials of global competitive advantage (Dereli, 2015). In a globalized world, business organizations have changed in all dimensions. Due to shifts in the market, the operation of companies and markets has also changed. As a result of expanded business burdens, needs, and production arrangements, new businesses have appeared, and new production processes and attitudes have grown. Therefore, existing business setups and settings are no longer adequate or applicable; companies need innovative procedures and approaches in such global competition. Thus creativity and innovation based on organizational knowledge management have become the main principle by which companies succeed under unlimited and flexible market circumstances (Bozkurt, 2000). By addressing these issues, in the existing study we contend that the relationship between the organization's transformational leadership style and attaining global competitive advantage is contingent on boosting knowledge and innovation among all business aspects, especially through the motivating and inspiring nature of transformational leadership in initiating effective knowledge-based and innovative behaviors at all levels of the business organization.
To test the study model, the following hypotheses are developed in the next section and illustrated in Figure 1.

Figure 1. Proposed research framework

Theoretical Background and Hypothesis Development
Following the McKinsey 7-S model, Singh (2013) (Note 1) maintained that the transformational leadership style plays a starring role in preserving and developing a company's sustainable global competitive advantage through the effective association of core communications. The research findings of Menguc et al. (2007) stated that transformational leadership positively affects marketing differentiation and enforces a low-cost strategy, which combined lead to global competitive advantage. Based on this theoretical thinking and on empirical study results, the hypotheses were formulated as follows.

Transformational Leadership and Organizational Knowledge
Transformational leaders can be defined as leaders who motivate their subordinates to identify the business vision and the needed change, and thus to share all business responsibilities. Accordingly, transformational leadership describes a leader style that effectively transforms followers so that they rise beyond their self-interest, by changing their ethics, interests, and scruples, encouraging them to perform better than anticipated, and acting as a role model in sacrificing self-achievement for group or collective achievement (Bass, 1990). Transformational leadership can be seen as an advantage over transactional leadership, which works as a higher-level construct relying on rewarding followers and specifically doing "management-by-exception" (i.e., observing performance measures and taking corrective and helpful actions when problems arise) (Avolio, Bass, & Jung, 1999). As transformational leadership is considered the total contrast of transactional leadership, we prefer to incorporate transactional leadership in our analysis. Besides, this contrast provides evidence that transformational leadership is not only a behavior but extends to initiating transformation that leads to a more innovative way of thinking, thus affecting the organization's innovation culture and trend (Pieterse, 2010). The transformational leader has his or her own characteristics that facilitate this impact on different business and personnel approaches: attractiveness, personality, intellectual stimulation, and encouragement. This style effectively builds new communication channels among followers based on respect, trust, and knowledge sharing, which leads to an effective knowledge management approach (Bass & Avolio, 2003). Knowledge is generally uncertain, owing to the many changes involved in absorbing, discussing, and manipulating it. Knowledge slack therefore becomes important by enhancing the possibility that fresh information efficiently resembles current information, and thus the possibility of adopting the knowledge management process successfully (Cohen & Levinthal, 2000). Business information absorbs new and innovative content easily if the existing or previous information has some kind of similarity. This absorptive capacity is strongly related to the organization's existing knowledge and, at the same time, strongly affects innovative capabilities and innovative attitudes (Liao & Fei, 2007).
The transformational leader also has a positive impact on knowledge absorptive capacity through his influence on the followers' or subordinates' individual absorption. Individual absorption requires an effective division and assignment of business responsibilities and authorities among workers. This becomes easy if the transformational leader innovates new processes to expand responsibilities effectively; his power evidently matches a suitable organizational structure so as to finally obtain the maximum absorptive capacity (this may happen through mergers and acquisitions or business cultural change), using explicit and tacit knowledge. Tacit knowledge is considered more strategic than explicit knowledge; it facilitates the route toward attaining global competitive advantage and is strongly related to different strategically significant variables such as innovation capability (Liao, 2007). For transformational leadership, innovation capabilities are considered a focal point of thinking; this concept of leadership can be connected to innovation through the leader's ways of transforming old ideas into more innovative ones and by inspiring followers to think differently, seek new problem-solving opportunities, and create investigative thought processes. This style works as a role model in expressing a shared vision of innovation (Schepers, 2007). Therefore, this study suggests that there is a positive relationship between transformational leadership and the organizational knowledge aspects (knowledge slack, absorptive capacity, tacitness); these aspects are abstracted and put into a formalized theory comprising a complete and sophisticated knowledge management system. From what we have presented above, we can formalize Hypothesis 1, to be tested for soundness later with further evidence from field studies.

Hypothesis 1: Transformational leadership has a positive effect on organizational knowledge.

Transformational Leadership and Innovation
Innovative behavior can be decomposed into a multi-step process, starting from problem understanding and recognition, followed by suitable idea generation and an upcoming solution, and finally reaching the actual idea realization and achievement (Yukl, 2002). Thus, many studies argue that innovative behavior is influenced by abilities, skills, and specific knowledge (Amabile, 1988), while others insist it is strongly related to motivational factors, which has made it a focus of extensive attention throughout much leadership research (Glynn, 1996). It is essential to differentiate two strongly related notions, "invention" and "innovation". Whereas invention can be stated as the establishment of an idea for a new product or business process, innovation is bringing this new idea into reality. Because of these diverse requirements in creating new ideas and employing them, a time interval occurs between invention and innovation. Different types of understanding, abilities, and properties are needed in order to transform an invention into an innovation (Fagerberg, 2005). A number of studies have stated that the transformational leader initiates an innovative business climate. Jung et al. (2003), whose research testing was conducted on 32 Taiwanese companies, found a positive impact of the transformational leader on the companies' innovative approaches through stimulating motivation and intellectual stimulation. Such a leadership style, by articulating the organizational vision, spreads a power of confidence that makes followers act in the same way and strive to attain an innovative market (Jung et al., 2003).
The transformational leader not only extends internal innovation; his role also extends to external innovation aspects such as championing and boundary crossing and bridging. This is considered a crucial factor in understanding global market requirements and global customers' needs. Therefore:

Hypothesis 2: Transformational leadership has a positive effect on organizational innovation.

Knowledge Slack, Absorptive Capacity, and Global Competitive Advantage
Organizational adaptability is strongly related to how quickly an organization adapts and adjusts to business environment changes in the existing complex and challenging business environment. Adaptability requires an efficient absorptive capacity to transfer best practices, routines, and knowledge (Daghfous, 2004). Along the same path, absorptive capacity fundamentally relies on the level of prior related knowledge: not only basic accountabilities, abilities, and skills but also pioneering technological and social advances. On the other hand, knowledge slack is strictly linked to new and innovative knowledge, so it facilitates absorptive capacity, and this capacity needs more knowledge slack to be maintained and continued (Zahra, 2002). Knowledge slack is critical to the organization's capacity and talent to engage and advance knowledge and intellectual wealth. However, the mere presence of knowledge slack, or investment in its development, is not sufficient for attaining market competitive advantage. Absorptive capacity, when related to competitive advantage, can be defined as the ability of business organizations to understand and fully recognize the value of new, innovative external business information, absorb it, and practically implement it into competitive business results, which affects not only innovation (Zahra, 2002) but also extends to many significant business areas such as global marketing (Xiong and Bharadwaj, 2011) and international business (Lyles & Salk, 1996). In general, the knowledge management system (KMS) is the second state-of-the-art innovation relevant to business practitioners, with the KMS defined as "a total networked system that shares information via a knowledge slack and influences knowledge all over the enterprise" in addition to "offering Internet-based access to clients and suppliers worldwide". Alavi (2001) discussed how the great developments and advancements in knowledge management systems initiate and maintain competitive advantage. All of the research mentioned above leads to formalizing Hypothesis 3:

Hypothesis 3: Organizational knowledge will be a mediator between transformational leadership and organizational global competitive advantage.

Innovation and Global Competitive Advantage
Studies interested in aspects of economic growth and development investigate the strong relation between innovation and competitive advantage. While the need for differentiation requires innovative thinking, each new innovation forces new competitive advantages, conditions, consequences, and implications. In short, innovation and competition influence each other. In other words, competition is an ambition for innovation initiatives; on the other side, innovation supports competition while making it more concentrated (Noe, 2003).
Recently, high competition exists both regionally and globally. Organizations have to adopt new innovations, or be innovative themselves, to survive and continue in such a highly competitive market. The dynamic changes of competition force new innovative ways rather than traditional methods such as recruiting cheap labor or an ordinary economic growth approach. Rather, innovation is the best strategic variable for competitive advantage, especially when the company looks toward globalization. Özdemir (2012) found that innovation has a significant effect on organizational efficiency, performance, and growth. Competitive advantage gained through innovation also affects employee satisfaction, employment procedures, and prosperity gains. In the other direction, competition puts pressure on organizations to look for differentiation through innovation. Innovative companies have to accept the extra costs of innovation implementation, to be borne in case the current physical and human infrastructure proves undesirable. The competitive burdens shaped by the existence of competitors must be added to this picture. Innovation is the driving factor behind many companies' success stories; it is the keystone of global competition and global advantage. Innovation is an essential driver that affects most organizational factors, including profitability and service quality standards, and most importantly it helps in attaining customers' loyalty to the organization's products (Nuseir, 2015). Açıkdilli (2013) stated some concerns and procedures related to global market conditions determined by an innovative approach, among which: innovation is a lifestyle, rather than a number of continuous business steps; organizations of all sizes (small, medium, and large) have to look for new, creative, and innovative opportunities to continue, or die; innovation can concern an idea, a process, a product, or even a service; for the best implementation of innovative strategies, it is better to put new changes in hand, including process, technology, and management, and observe them closely; and despite the excessive costs related to innovation implementation, innovation can be implemented simply through new effective ideas. According to Porter's five forces (Porter, 2008), there are essential principles for organizations to attain global competitive advantage: the first principle is the effective handling of the organization's value system; besides that, all resources must be continually advanced and investigated; and finally, innovation and change must be sustainable. Hypothesis 4 formalizes the assumptions deduced from the literature reviewed in this section.

Hypothesis 4: Organizational innovation will be a mediator between transformational leadership and organizational global competitive advantage.
The main aim of this study is to determine whether leaders in managerial positions in the UAE Telecommunication Regulatory Authority (TRA) (Note 2), who are in charge of vital decision making and managerial responsibilities, have the characteristics of transformational leadership styles; to capture perceptions of organizational knowledge management and of the implementation of innovative approaches in business processes, products, and services; and to establish how these perceptions positively mediate the effects between transformational leadership and organizational global competitive advantage. The study therefore proceeds by testing the stated relations.

Sample and Procedure
Our research sample targeted UAE Telecommunication Regulatory Authority (TRA) (Note 3) leaders and employees, with regard to the actual relationships among transformational leadership, the applied knowledge management system, innovation, and the way the Authority attains competitive advantage. The TRA, our unit of analysis, is the authorized UAE body carrying all the responsibilities related to the management and supervision of every telecommunication and information technology (ICT) aspect. Despite its short life span compared to other regional authorities, we chose it because it exceeds expectations in achieving creativity and innovation, evident in its innovative services and its global competence in promoting initiative ways of offering the best services to UAE citizens. The TRA, in its ambition toward attaining global service competitiveness, has participated in and won many local, regional, and global awards related to the ICT sector. This innovative involvement in such awards contributes to producing an environment that encourages development and innovation through helpful competition between government units at the local, regional, and global stages. It also presents suggested and recommended tools to contest international standards and offers government authorities an open proposal for becoming skilled in thriving ICT practices. For our study, we used purposive sampling (Tongco, 2007), mainly targeting TRA leaders and employees for whom ambitions of innovation and competitiveness are very important and noticeable. To avoid common method variance (CMV) problems, since this research measures one dependent variable and an intermediate one (Eichhorn, 2014), we used two separate surveys to gather the data related to the dependent and independent variables: one for the TRA's managers and the other for the employees. Senior managers were requested to respond regarding the degree of technological innovation implementation and adoption, in addition to questions related to the company's ambition toward achieving and maintaining globalization trends, while employees were asked to respond to questions relating to the transformational leadership behaviors attributable to their direct or indirect senior managers. The TRA's total size at the end of 2017 was 1300 employees (as mentioned in the open data tab of the official website). Seventy percent of the employees in our sample were in customer service departments, while the remaining 30 percent worked in other managerial positions. Applicable and usable matching data were gathered from 50 respondents in total, divided into 20 senior managers and 30 employees.
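The paper tests the four hypotheses on these matched survey responses but does not detail its estimation procedure. Purely as an illustration, a Baron and Kenny style mediation check on data of this kind could be run in Python as sketched below; pandas/statsmodels, the file name, and the column names (tl, knowledge, innovation, gca) are our assumptions, not part of the original study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per matched manager/employee pair,
# scores averaged over the questionnaire items (all names illustrative).
df = pd.read_csv("tra_survey.csv")  # columns: tl, knowledge, innovation, gca

# Step 1: total effect of transformational leadership (tl) on
# global competitive advantage (gca).
total = smf.ols("gca ~ tl", data=df).fit()

# Step 2: effect of tl on each candidate mediator (cf. H1 and H2).
m_know = smf.ols("knowledge ~ tl", data=df).fit()
m_innov = smf.ols("innovation ~ tl", data=df).fit()

# Step 3: gca regressed on tl plus the mediators (cf. H3 and H4);
# a shrunken, non-significant tl coefficient alongside significant
# mediator coefficients is consistent with mediation.
full = smf.ols("gca ~ tl + knowledge + innovation", data=df).fit()

for name, res in [("total", total), ("knowledge", m_know),
                  ("innovation", m_innov), ("full", full)]:
    print(name, res.params.round(3).to_dict())
```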
Measures
Transformational leadership attributes were assessed and measured reflectively by five indicators: idealized influence attributes, idealized influence behaviors, inspirational motivation, intellectual stimulation, and individualized consideration, adapted from the Multifactor Leadership Questionnaire (MLQ) of Bass and Avolio (2000). Knowledge and innovation were assessed formatively from indicators of product innovation, service innovation, and organizational innovation, adapted from the Innovation Diagnosis Questionnaire (IDQ) of McKinsey and Company. Finally, the competitive advantage of the TRA was considered on the basis of one or more indicators: exceptionality of products and services, product and service variety, product cost/value, firm reputation, and customer satisfaction, adapted from Reniati (2013). The questionnaire items first passed a validity test to ensure question reliability.

Results and Literature Model
In this research, we mainly depend on investigating, comparing, and critically evaluating the different literatures related to our research topics. On that basis, we constructed the research theoretical framework that illustrates the hypotheses under test. The research finds that transformational leadership affects the dynamic potentials of organizational knowledge and innovation, supporting Hypotheses 1 and 2, respectively. The results also support the mediation relations of Hypotheses 3 and 4, which connect organizational knowledge and innovation to the attainment of global competitive advantage. The research concludes that there is an indirect relation by which the transformational leadership style attains organizational global competitive advantage through organizational innovation and knowledge (measured by the knowledge slack, tacitness, and absorptive capacity factors). In the process of testing the theoretical framework of this research, we shaped a number of nested and correlated alternative hypothesis models, each integrating different hypotheses about considerations and correlations. The evaluation and assessment of rational alternative models is suggested to prove that the hypothesized model is the best representation of the exploratory research data.

Research Findings
Based on the aforementioned results, we formalize the following findings: 1. The conceptual model incorporates the effect of transformational leadership, indirectly through knowledge and innovation, significantly on organizations' global competitive advantage. 2. Organizational knowledge and innovation are two perfect mediating variables of the effects of transformational leadership on organizations' global competitive advantage.

Research Limitations
This research might be limited by the attitude of the questionnaire respondents, who were very sensitive with regard to their perception of their management's leadership styles, which potentially decreases the independence and objectivity of respondents in answering the questionnaires and freely expressing their opinions.
Conclusion
Effective transformational leadership, through the employment of idealized influence attributes, the inspiration of ideal behavior, stimulating motivation, intellectual stimulation, and personal consideration, cannot directly escalate and enhance global competitive advantage. Effective transformational leadership can, however, advance organizational knowledge management and the innovation approach. Additionally, high innovation can improve competitive advantage. Innovation and organizational knowledge flawlessly mediated the effect of transformational leadership on competitive advantage. In this research, the research model framework assigned the required hypotheses to evidently test the correlations of the research variables; the results of the empirical studies therefore form the basis of this research's hypotheses.
2019-04-23T13:21:39.211Z
2018-12-31T00:00:00.000
{ "year": 2018, "sha1": "82f6b9d22aa3dd0673fd6899980a784426caa3e1", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/mas/article/download/0/0/37999/38466", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "82f6b9d22aa3dd0673fd6899980a784426caa3e1", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Mathematics" ] }
265000875
pes2o/s2orc
v3-fos-license
Comprehensive safety risk evaluation of fireworks production enterprises using the frequency-based ANP and BPNN
The fireworks industry has long struggled with the problem of safety. Scientific, reasonable, and operable evaluation models are prerequisites of reducing risk. Based on the data from over 100 fireworks production safety accidents in China from 2010 to 2022, two evaluation models were established from the perspective of the definition of safety risk. Firstly, a derivative weight calculation method, the frequency-based analytic network process (ANP), was proposed. This method optimizes the calculation of the importance ranking index in the ANP by considering the causal frequency of risk factors in the historical accident samples, thus determining how much each indicator affects the likelihood of accidents. Secondly, utilizing the historical accident samples as the dataset, a back propagation neural network (BPNN) model was developed to extract the mathematical relationship between each risk factor and the severity of the accident consequence. Finally, the frequency-based ANP and BPNN models were combined to determine the safety risk level of fireworks production enterprises. Meanwhile, safety evaluation research samples, involving 100 fireworks production enterprises in China evaluated from 2017 to 2020, were used as the comparison set for an empirical study against the historical accident samples. A significance result of zero shows that there is a statistically significant difference between the likelihood evaluation results of the accident and non-accident companies. Additionally, the severity evaluation model exhibits an excellent result, with a classification accuracy of 98.21 %, a mean square error of 8.97 × 10⁻⁴, a percent bias of 1.24 %, and a correlation coefficient and Nash-Sutcliffe efficiency coefficient both of 0.96. The frequency-based ANP and BPNN models integrate self-learning, self-adaptation, and fuzzy information processing, obtaining more accurate and objective evaluation results. This work provides a new strategy for the promotion and application of artificial intelligence in the field of safety risk evaluation, offering real-time safety risk evaluation and decision support for enterprise safety management.

Introduction
China is the largest producer, distributor, and exporter of fireworks in the world [1][2][3]. There are many risks and hidden dangers behind this vast market and production scale. Fireworks are primarily made from flammable and explosive pyrotechnic powder [4], which is extremely sensitive to any mechanical process, making it difficult to modernize the manufacturing facilities [5]. Meanwhile, most employees come from less developed areas, with a higher average age, a lower education level, and insufficient awareness and skills in production safety [6]. Therefore, compared with other manufacturing industries, the production process of the fireworks industry involves more couplings of risk factors, thus resulting in a greater accident rate [7,8].
Safety risk evaluation is the basis and premise of risk control. A scientific evaluation model can not only help the safety supervision department accurately review the production safety conditions of companies, but also enable graded control measures based on it to achieve a targeted effect. After nearly a century of research, safety risk evaluation has successfully shifted from qualitative to quantitative, and corresponding software has been created based on computer technology. However, the manufacture of fireworks is mainly manual [9,10], unlike other industries with standard production lines and instrumentation diagrams [11]. Apart from that, the evaluation indicators are mostly qualitative and difficult to quantify. Therefore, studies on the comprehensive safety risk evaluation of fireworks production enterprises still rely mainly on qualitative methods that reach only conformity conclusions, such as what-if analysis [12] and job safety analysis [13]; on semi-quantitative or quantitative methods with subjective judgment, such as the fuzzy approach [14], risk assessment for safety and health and chemical health risk assessment [6], and hazard identification and risk assessment [15]; and on quantitative methods that evaluate only the severity, such as the fire & explosion index [16], or only the likelihood, such as the prediction human error analysis technique [17]. Previous studies have favorably explored the issue of uncertainty in the evaluation process from different perspectives. However, there is still a lack of evaluation models that can comprehensively, efficiently, and objectively quantify the safety risk of fireworks production enterprises. Safety risk evaluation has become more effective and informed with advancements in artificial intelligence, machine learning, and deep learning [18,19]. The back propagation neural network (BPNN) is a mathematical model that intelligently processes data by simulating the human brain, including learning, recognition, and self-adaptation. The BPNN model trains the network repeatedly to find patterns between sample inputs and outputs using an error back propagation algorithm. Neural network models have been widely used in evaluation [20], prediction [21], classification [22], and other fields with good results. In particular, Indumathi et al. [23] developed an artificial neural network model to predict occupational accidents, which took values from historical accident data on the atmospheric conditions of Sivakasi (2009-2021). The model proposed by Indumathi et al. gave the highest accuracy compared with other models, but needed a more comprehensive consideration of risk factors.
In light of the above considerations, the BPNN model was introduced into the severity modeling process. However, given the lack of quantitative data on the likelihood, and taking into account the numerous factors affecting the safety of fireworks production and their interactions, the analytic network process (ANP) was optimized and used to determine the corresponding weights of the evaluation indicators. The key idea of the ANP is to construct a comparison matrix using the nine-scale method, utilizing each factor as the criterion for a two-by-two comparison of the factors influenced by that factor [24]. However, the subjective character of expert scoring and the vagueness of the judgment boundary make it difficult to draw a clear line between the relative importance of two factors in practical applications. The causal frequency may indirectly indicate the importance of each risk factor in the chain of accidents [25], which is useful for improving the quality of risk assessment and preventing accidents [26]. Therefore, the frequency-based ANP was proposed to objectively determine the importance ranking index by substituting the causal frequency of the indicators for the subjective judgment of experts. Based on the historical accident data, this work aims to eliminate the overlap and subjectivity of the evaluation indicator information across multiple links. Moreover, the comprehensive safety risk evaluation result, denoted R, can be used to classify enterprises, or risk points within the same enterprise, and to provide decision-making support for safety management.

Design of the indicator system
The evaluation indicator system was constructed from the perspective of accident causation. The grounded theory (GT) [27] was applied to systematically abstract the indicator system from historical accident data without preconceptions, meeting the requirements of comprehensiveness, purposefulness, and salience.

Table 1. Part of the open coding process and results.

Example of original sentence: After the victim returned from playing cards outdoors, the security managers saw that he was in a bad mood and tried to persuade him not to work that day, but the victim insisted on continuing to work.
Concept: Poor mental state of personnel. Subcategory: Physical and mental state of the personnel.

Example of original sentence: When recruiting and arranging work types, the company did not carefully examine the status of the employees' age and so on. The company arranged for the victim, who had exceeded the legal retirement age, to engage in the heavy labor of filling the filling room with pyrotechnic composition and collecting the sealing powder in the cake room.
Concept: Poor age and physical condition of personnel. Subcategory: Physical and mental state of the personnel.

Example of original sentence: The emergency plan had not been well practiced or trained, so when the accident first occurred, the staff members on duty panicked and struggled to cope with it.
Concept: Inadequate training and rehearsal of emergency plans. Subcategory: Preparation and exercise of the emergency plan.

Example of original sentence: The production safety emergency plan was not produced in line with requirements, arranged for expert assessment, or lodged with the appropriate agency. The plan's relevance, viability, and convergence are weak.
Concept: Unqualified emergency plan preparation. Subcategory: Preparation and exercise of the emergency plan.

Example of original sentence: The average daily relative humidity was 29 % on the day of the accident, while the minimum daily relative humidity was 10 %. The dry weather made it easy for static electricity to build up.
Concept: Low humidity in the workplace. Subcategory: Humidity.

Example of original sentence: The factory did not strengthen the management of raw materials and pyrotechnic compositions according to weather changes, resulting in the explosion of compositions after spontaneous combustion due to moisture.
Concept: Raw materials, finished products, or machinery and equipment are damp. Subcategory: Humidity.
The 112 fireworks production safety accidents collected from the information disclosure platform of the Chinese government were used as root materials. One hundred samples were randomly chosen for coding and analysis, and the rest were set aside as test samples. The indicator system was established using the proceduralised GT [28].

Open coding, axial coding, and selective coding
After dividing the coded materials into semantically distinct sentences, the similarity and dissimilarity of the sentences were analyzed. In this way, 76 conceptualized causal factors of fireworks production safety accidents were identified and grouped into 27 subcategories with related traits and definitions. Table 1 shows only a portion of the open coding process and results, due to space limitations; for each concept, only one original statement is excerpted. The concepts and subcategories resulting from open coding were investigated for their potential logical relationships using the paradigm model [29]. The five main categories governing the subcategories were then refined, as shown in Table 2. An example of the analysis process of the paradigm model is shown in Fig. 1, where the phenomenon is the main category. Subcategories and main categories were again gathered and refined based on the principal goal of the evaluation. Finally, the "evaluation indicator system of safety risk for fireworks production enterprises" was identified as the core category of the rooted material.

Test of coding results (significance level α = 0.05)
When the 12 reserved materials were coded at the three levels in order, no new concepts, categories, or links could be extracted, indicating that the refinement of the assessment system had reached theoretical saturation. Using SPSS software, the significance of the differences in the frequencies of the five main categories in each rooting material was tested in order to further confirm the extraction effect of the evaluation indicators. When a main category belonging to the same concept appeared repeatedly, it was counted only once. Firstly, a distribution test was performed using the Kolmogorov-Smirnov (K-S) test [30], which is appropriate for sample sizes (here n = 112) greater than 50. As illustrated in Table 3, the results of the significance test were all less than α, indicating that the frequencies of the 5 main categories did not follow the normal distribution. Therefore, the Friedman test [30], a nonparametric test, was chosen for the significance-of-differences test. The results revealed significant differences among the 5 main categories (significance p = 0 < α), showing that the evaluation indicator system was well constructed. The main categories and subcategories were utilized as primary and secondary indicators, respectively, to construct the evaluation indicator system based on the coding results.

Table 2. Axial coding results.

The safety risk level of personnel (A): Awareness level of responsibility among safety managers (A1); Literacy level of safety among workers (A2); Quota situation of workers (A3); Physical and mental state of the personnel (A4).
The safety risk level of equipment (B): Setting condition of safety facilities and equipment (B1); Working condition of machinery and equipment (B2); Qualified status of tools (B3).
The safety risk level of environment (C): … (C1); Temperature (C2); Humidity (C3); Arrangement of production processes (C4); Situation of overall layout (C5).
The safety risk level of material (D): Qualified compliance of raw and auxiliary materials (D1); Drug residue situation (D2); Quantitative production, storage and transportation situation (D3).
The safety risk level of management (E): Safety education and training situation (E1); Qualification status of security managers (E2); Qualification status of special workers (E3); Preparation and exercise of the emergency plan (E4); Implementation status of the raw material access system (E5); Implementation status of the hazardous materials storage and transportation system (E6); Implementation status of the full production safety responsibility system (E7); Construction status of regulations (E8); Construction status of the organization of safety production (E9); Situation of hidden danger investigation and rectification (E10); Acquisition and maintenance status of equipment and facilities (E11); Management of the safety production site (E12).
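The paper ran the above distribution and difference tests in SPSS. A minimal Python equivalent (assuming scipy, with a placeholder frequency matrix of one row per accident report and one column per main category A-E; all variable names are illustrative) could look like:

```python
import numpy as np
from scipy import stats

# freq: hypothetical (112, 5) array, freq[i, j] = frequency of main
# category j (A..E) in accident report i, repeated concepts counted once.
rng = np.random.default_rng(0)
freq = rng.poisson(lam=[2, 1, 1, 1, 3], size=(112, 5))  # placeholder data

# Kolmogorov-Smirnov normality check for each main category
# (appropriate here because n = 112 > 50).
for j, cat in enumerate("ABCDE"):
    col = freq[:, j]
    z = (col - col.mean()) / col.std(ddof=1)  # standardize before kstest
    stat, p = stats.kstest(z, "norm")
    print(f"category {cat}: K-S p = {p:.4f}")

# Friedman test across the five related samples; a small p-value
# indicates significant differences among the main categories.
stat, p = stats.friedmanchisquare(*[freq[:, j] for j in range(5)])
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4g}")
```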
Evaluation scale of indicators
Given that the causal frequency of each indicator indicates its importance in the accident chain, the evaluation scale proposed in this paper is characterized by the frequencies of the concepts under each indicator. In the evaluation of likelihood (L) and severity (S), the normalized dimensionless evaluation values $V^L_{I_k}$ and $V^S_{I_k}$ of secondary indicator $I_k$ ($k = 1, 2, \ldots, m_I$) under the primary indicator $I$ ($I = A, B, \ldots, E$) are set as equations (1) and (2), respectively:

$V^L_{I_k} = f'^L_{I_k} / f^L_{I_k}$  (1)

$V^S_{I_k} = f'^S_{I_k} / f^S_{I_k}$  (2)

where $m_I$ is the number of secondary indicators that $I$ covers; $f^L_{I_k}$ is the number of concept categories that $I_k$ covers; $f'^L_{I_k}$ and $f'^S_{I_k}$ are the frequencies of concepts belonging to $I_k$ in one evaluation; and $f^S_{I_k}$ is the highest frequency of concepts belonging to $I_k$ in the historical accident research data. The impact of each risk factor on likelihood is driven primarily by its quality, while the impact on severity also includes its quantity. Therefore, when the same concept is repeated, it is recorded only once in $V^L_{I_k}$, while it is accumulated in $V^S_{I_k}$. If $f'^S_{I_k} > f^S_{I_k}$, then $V^S_{I_k}$ takes 1. For instance, three issues were found when the overall layout (C5) of an enterprise was examined: an insufficient number of workplaces, an insufficient safe distance from the workplace (two locations), and a non-compliant workplace protection level and protective barrier. C5 covers 6 concepts, with $f^S_{C_5}$ of 5. Therefore, $V^L_{C_5}$ is 0.5, and $V^S_{C_5}$ is 0.8.
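As a minimal illustration of equations (1) and (2), reconstructed above from the worked C5 example, the helper below and its names are ours, not the paper's:

```python
def evaluation_values(found_counts, n_concepts, f_s_max):
    """Likelihood and severity evaluation values for one secondary indicator.

    found_counts: list of occurrence counts, one per distinct concept
                  observed in this evaluation (e.g. [1, 2, 1] for the C5
                  example: one issue found once, one found at two
                  locations, one found once).
    n_concepts:   number of concepts the indicator covers (f^L).
    f_s_max:      highest historical frequency of its concepts (f^S).
    """
    # Likelihood: each distinct concept counts once (quality only).
    v_l = len(found_counts) / n_concepts
    # Severity: repeated findings accumulate (quality and quantity),
    # capped at 1 when the historical maximum is exceeded.
    v_s = min(sum(found_counts) / f_s_max, 1.0)
    return v_l, v_s

# Worked example from the text: overall layout (C5), 6 concepts, f^S = 5.
print(evaluation_values([1, 2, 1], n_concepts=6, f_s_max=5))  # (0.5, 0.8)
```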
Frequency-based ANP
Fig. 2 illustrates the flow and architecture diagram of the proposed frequency-based ANP model. Firstly, the frequency-based ANP network structure of the evaluation system was established after analyzing the relationships among the risk factors, as shown in Fig. 3. The control layer contains only the decision objective, which is the likelihood of fireworks production safety accidents. The secondary and primary indicators are also called risk impact factors and factor groups, respectively. The connecting lines denote relationships between factors: the factors in the arrow-tail factor group influence the factors in the factor group pointed to by the arrow. Secondly, in the historical accident research data, duplicate concepts are eliminated, and the frequency with which $I_k$ and $J_l$ appear together as accident causal factors is recorded as $c^{J_l}_{I_k}$ ($c^{J_l}_{I_k} = c^{I_k}_{J_l}$). Then the importance ranking index of $I_{k_1}$ compared to $I_{k_2}$ under the criterion $J_l$ is optimized as equation (3):

$b^{J_l}_{I_{k_1} I_{k_2}} = c^{J_l}_{I_{k_1}} / c^{J_l}_{I_{k_2}}$  (3)

In this way, the 27 factors are progressively utilized as criteria for two-by-two comparisons of all factors in the same factor group, so as to construct the judgment matrices of each of the 5 factor groups. Following that, the normalized eigenvectors of each judgment matrix are aggregated to produce the unweighted supermatrix $W$. Similarly, using each of the 5 factor groups as the criterion in turn, two-by-two comparisons are made among all factor groups to construct 5 judgment matrices. The normalized eigenvectors of each judgment matrix are combined to produce the weighting matrix $A$ ($A = (a_{ij})$) that reflects the relationships between factor groups. The elements of $W$ are then weighted by $A$ to create the weighted supermatrix $\overline{W}$, as shown in equation (4) [24]:

$\overline{W} = (\overline{W}_{ij}) = (a_{ij} W_{ij})$  (4)

Finally, the limit supermatrix $W^{\infty}$ is created by self-multiplication of $\overline{W}$ until the values in each row are stable and constant. The values of $W^{\infty}$ in each row represent the weight values of the relevant risk factors affecting likelihood.
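A compact NumPy sketch of this pipeline follows. It is our own illustration, not the authors' code: the exact form of equation (3) is not fully recoverable from the extraction, so the frequency-ratio judgment matrix below is our reading of it, and the toy co-occurrence numbers and the power-iteration stopping rule are assumptions.

```python
import numpy as np

def priority_vector(judgment):
    """Principal eigenvector of a pairwise judgment matrix, normalized to sum 1."""
    vals, vecs = np.linalg.eig(judgment)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def judgment_from_freq(c):
    """Frequency-based judgment matrix: b[i, j] = c[i] / c[j] (cf. equation (3))."""
    c = np.asarray(c, dtype=float)  # assumes strictly positive frequencies
    return np.outer(c, 1.0 / c)

# Toy example: co-occurrence frequencies of three factors of one factor
# group under a criterion factor J_l (illustrative numbers only).
B = judgment_from_freq([4, 2, 1])
w = priority_vector(B)          # one column block of the unweighted supermatrix W
print(w)                        # -> approximately [0.571, 0.286, 0.143]

def limit_supermatrix(W_bar, tol=1e-9, max_iter=10_000):
    """Raise the (column-stochastic) weighted supermatrix to its limit."""
    M = W_bar.copy()
    for _ in range(max_iter):
        M_next = M @ W_bar
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    return M
```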
BPNN with AdamW optimizer
The settings of the learning rate (LR) and the gradient algorithm significantly impact the training of a network. Accordingly, adaptive moment estimation (Adam) [31] with decoupled weight decay (AdamW) [32] was introduced to determine the appropriate LR and gradient algorithm. The parameters in Adam are updated using the experience gained from previous iterations, which dampens the tendency to oscillate. Based on Adam, AdamW introduces a weight decay (WD) term decoupled from gradient descent to regularize larger weights and avoid overfitting the model. During data preprocessing, the dataset was augmented with additional samples of the small sample classes. Due to the significant ambiguity in the quantification process of severity, the evaluation of severity was converted into a classification problem. The severity evaluation result was divided into five levels according to the classification standard of production safety accident levels, as shown in Table 4. The acceptable value of the sample output expectation is set to the group median of the value range corresponding to the evaluation result level, and the acceptable mean square error (AMSE) is 0.01. Here, the mean square error (MSE [34]) is given as equation (5), where $y_i$ and $\hat{y}_i$ are the expected value and the output value of sample $i$, respectively:

$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$  (5)

It was found that AdamW increases the separability of the hyperparameter search space [32]. As a result, the value of WD was set to the default value of 0.01 in pretraining while searching for a better LR, and the WD was then adjusted using the better LR. The LR is typically set empirically to a low value (10⁻³ to 10⁻²), which is inefficient for training. To save training time while ensuring network convergence, the boundary test of the cyclical learning rate method (CLR) was used to objectively determine the maximum bound of the LR [35]. The LR was initially set to a low value and then gradually increased after each iteration, producing a graph of how the loss changes with the LR; the maximum bound corresponds to the LR at which the loss starts to grow instead. The LR was then set to the empirical value, to the maximum bound value from the CLR method, and to a larger value, respectively, and pretraining was performed to determine the better value. In addition, the larger WD values of 10⁻², 10⁻³, and 10⁻⁴ were tested, because the shallow architecture of the BPNN needs more regularization [36]. The purpose of the sensitivity analysis of the improved BPNN model established in this paper is to identify the key risk factors controlling severity. Although there are many sensitivity analysis methods, their fundamental concept is similar. The impact of the input neurons on the output was analyzed using the mean influence value (MIV) approach [37], a method often used with neural networks. Equation (6) [37] defines $\mathrm{AMIV}_{I_k}$, the absolute value of the MIV for $I_k$:

$\mathrm{AMIV}_{I_k} = \left| \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i^{+} - \hat{y}_i^{-} \right) \right|$  (6)

where the two new input samples are created by a 10 % increase and a 10 % decrease in the value of the input variable corresponding to $I_k$ in the training samples, and $\hat{y}_i^{+}$ and $\hat{y}_i^{-}$ are the corresponding network outputs.
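A sketch of the MIV procedure for one input variable (our illustration; the `predict` callable stands in for the trained BPNN, and all names are assumptions):

```python
import numpy as np

def amiv(predict, X, k, delta=0.10):
    """Absolute mean influence value of input variable k (cf. equation (6)).

    predict: callable mapping an (n, 27) array of indicator values to an
             (n,) array of severity outputs (e.g. the trained BPNN).
    X:       training samples; k: column index of the indicator I_k.
    """
    X_up, X_down = X.copy(), X.copy()
    X_up[:, k] *= 1.0 + delta    # 10 % increase of I_k
    X_down[:, k] *= 1.0 - delta  # 10 % decrease of I_k
    return abs(np.mean(predict(X_up) - predict(X_down)))

# Ranking all 27 indicators by AMIV flags the key factors for severity:
# scores = [amiv(model_predict, X_train, k) for k in range(X_train.shape[1])]
```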
As shown in Fig. 6, the likelihood assessment results of group N_1 are all smaller and more concentrated than those of group N_2. To further verify the significance of the difference between the assessment results of groups N_1 and N_2, a K-S test was performed using SPSS. Because the p-values in the K-S test were 0.0240 and 6.0893 × 10⁻¹¹, respectively, indicating non-normal distributions, the Friedman method was chosen for the significance-of-differences test. The result of the Friedman test demonstrates a significant difference between the evaluation results of N_1 and N_2 (p = 0 < α), which is consistent with the objective facts.

Results and validation of the severity evaluation

Based on the established evaluation indicator system, the BPNN model adopted a three-layer structure. The input layer has 27 neurons, one for the evaluation value of each secondary evaluation indicator, i.e., the V_S(I_k) obtained from the evaluation scale in 3.1. The number of neurons in the hidden layer equals the number of primary evaluation indicators, which is 5. The output layer has one neuron, the algebraic value of the severity evaluation result. The loss function is the MSE loss, with loss = MSE/2. Given that AMSE/2 = 5 × 10⁻³, the goal loss was set to 10⁻³, so that the resulting MSE is smaller than the AMSE. The sigmoid function was used as the activation function, since the input and output values range from 0 to 1. The maximum number of iterations was 10⁵. After data preprocessing, 340 groups of data were obtained. One group of each severity level was then randomly chosen as the test dataset, while the remaining 335 groups served as the training dataset. Because the number of training samples is moderate, training was carried out on the full dataset. Fig. 7 displays the results of the LR range test for this dataset; according to the CLR method, the maximum bound of the LR is 9 × 10⁻². Accordingly, Fig. 8 shows the pretraining results when the LR was set to the empirical value of 10⁻², the maximum bound of 9 × 10⁻², and a larger value of 10⁻¹, respectively. The LR determined by the CLR method trains faster than the smaller LR and, compared with the larger LR, prevents the model from overfitting before it reaches the predetermined accuracy. According to the pretraining results for the test loss under different WD values shown in Fig. 9, the model demonstrates better generalization when the WD is 10⁻². In summary, the BPNN model was trained with the hyperparameters shown in Table 6, and the training results are shown in Fig. 10; from these, the optimal parameters of the BPNN severity assessment model were derived. Fig. 11 illustrates the evaluation results S for the accident enterprises. The following metrics were employed to evaluate model performance: CA (classification accuracy, Equation (7) [38]), MSE, R² (correlation coefficient, Equation (8) [39]), NSE (Nash-Sutcliffe efficiency coefficient, Equation (9) [39]), and PBIAS (percent bias, Equation (10) [40]). Table 7 gives the meanings of TP, TN, FN, and FP in the binary categorization problem, in which the categorization results include the P (positive) and N (negative) categories; y̅ and ŷ̅ are the mean expected value and mean output value over all samples, respectively.
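A minimal sketch of the 27-5-1 network just described (again assuming PyTorch; this is an illustration of the stated architecture, not the authors' code) is:

```python
import torch
import torch.nn as nn

class SeverityBPNN(nn.Module):
    """Three-layer BPNN: 27 secondary-indicator inputs, a 5-neuron
    hidden layer (one per primary indicator group), 1 severity output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(27, 5), nn.Sigmoid(),
            nn.Linear(5, 1), nn.Sigmoid(),  # inputs/outputs lie in (0, 1)
        )

    def forward(self, x):
        return self.net(x)

def loss_fn(pred, target):
    # The text trains with loss = MSE / 2, targeting a goal loss of 1e-3.
    return 0.5 * nn.functional.mse_loss(pred, target)
```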
As seen in Table 8 and Fig. 11, the CA is close to 100%. In addition, the MSE and PBIAS are both close to 0, and R² and NSE are both close to 1, indicating a good match between the evaluation and expected values.

Results of the comprehensive safety risk evaluation

Using the likelihood and severity evaluation models developed with the frequency-based ANP and the BPNN, respectively, comprehensive safety risk evaluation results were obtained for the 212 enterprises. Within group N_2, the enterprises judged qualified after rectification were recorded as group N_2-1, and the rest as group N_2-2.

The safety risk evaluation level is divided into four categories based on safety risk classification and control: significant risk, higher risk, general risk, and low risk. To be consistent with the actual evaluation results of the enterprises, the upper limit of low risk was initially set to the minimum value of R in group N_2-1, which was 0.0023. The comprehensive evaluation results R for groups N_1 and N_2-1 were then subjected to K-means [41] cluster analysis using SPSS, providing a scientific and theoretical basis for dividing the remaining three levels. The clustering outcomes, displayed in Table 9, underpin the safety risk level assessment scale; the value range of each evaluation level is adjusted downward under a strict, conservative principle.

Fig. 12 displays the comprehensive safety risk evaluation results for the samples together with the level evaluation scale. As shown in Fig. 12, most of the samples at general risk and below are from group N_2, which had no production safety accidents. Although occasional accidents occur among these samples, their severity is not high and they did not result in many fatalities. The frequency-based ANP and BPNN evaluation models can thus effectively achieve the goal of reviewing production safety conditions.

Discussion of key factors in risk control

The frequency-based ANP and BPNN models are useful not only for safety risk evaluation but also for understanding the importance of each indicator to the overall safety risk. It is therefore crucial to identify the indicators that can be modified to reduce risk, after considering factors such as cost, feasibility, and effectiveness.
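The clustering step can be illustrated with scikit-learn. The risk scores below are randomly generated placeholders (the actual inputs are the R values of groups N_1 and N_2-1), and deriving level boundaries from the midpoints between cluster centers is one plausible reading of how the Table 9 clusters translate into a scale.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder comprehensive risk scores standing in for groups N1 and N2-1.
R = np.concatenate([rng.uniform(0.002, 0.4, 112),
                    rng.uniform(0.0005, 0.01, 100)])

# Cluster into three groups (significant / higher / general risk); the
# low-risk upper bound is fixed separately from the data (0.0023 above).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(R.reshape(-1, 1))
centers = np.sort(km.cluster_centers_.ravel())

# One way to turn clusters into a scale: midpoints between centers.
bounds = (centers[:-1] + centers[1:]) / 2
print("cluster centers:", centers, "candidate level boundaries:", bounds)
```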
As shown in Table 5, the top five risk factors (A_2, E_1, A_4, A_3, and E_12) account for 54.6% of the total weight, with the other 22 risk factors accounting for the remainder. This indicates that reducing the probability of occurrence of just these five critical risk factors can significantly lower the likelihood of fireworks production safety accidents. The importance of key factors that have not been stressed enough in previous research and in practical safety management, such as the physical and mental state of personnel (A_4), should also be considered. The AMIV of each risk factor in the BPNN model was calculated, and the risk factors were ranked by their AMIV values. Table 10 shows that C_5, E_12, A_1, E_1, and B_1 are the five most important factors for severity evaluation, even though the safety risk level of materials (D) is directly related to severity. Environment factors such as C_5 and equipment factors such as B_1 are the key measures limiting D, and the failure of these measures contributes substantially to the expansion of severity. In addition, management factors such as E_12 and personnel factors such as A_1 increase severity by raising the safety risk level of materials, environment, and equipment. Consequently, the factors in D are less sensitive in the severity assessment model.
Compared with traditional methods such as the fuzzy approach, the frequency of the evaluation indicators was innovatively used as the basis for quantifying their weights and values, which improved objectivity while maintaining a high degree of operability. Unlike previous applications of artificial intelligence to the safety risk evaluation of fireworks production enterprises, the two models proposed in this study consider the risk-influencing factors comprehensively. In addition, the likelihood was not simply treated as two limit states; instead, the evaluation of its specific values was explored. The significance-of-difference test and the results of the five performance metrics verify that the proposed evaluation indicator system and models provide a realistic way for companies to determine their safety risk level objectively. Taking into account the specific circumstances of a company and the results of the sensitivity analysis, it is also effective and practicable to make safety management decisions. Due to the limited sample size, there are some gaps between the five metrics characterizing the performance of the BPNN model and their ideal values, which means that there are some systematic biases in the evaluation results. It is therefore necessary to increase the size of the dataset used to build the models in the future, and specialists and academics in the fields of computer science and safety must work together to further improve the accuracy and speed of intelligent safety risk evaluation.

Fig. 1. Example of the analysis process of the paradigm model.
Fig. 2. Flow and architecture diagram of the proposed frequency-based ANP model.
Fig. 3. Frequency-based ANP network structure of the safety risk evaluation system for fireworks production enterprises.
Fig. 4. Flow and architecture diagram of the proposed BPNN model.
Fig. 6. Empirical study results of the frequency-based ANP model.
Fig. 11. Evaluation results for historical accident samples under the BPNN model.
Fig. 12. Comprehensive evaluation results and level evaluation scale of safety risk for fireworks production enterprises.
Table 3. K-S test results.
Table 4. Correspondence table of evaluation value and level of severity.
Table 5. Normalized weight values and ranking of evaluation indicators.
Table 6. Hyperparameters for training the BPNN model.
Table 7. The meanings of TP, TN, FN, and FP.
Table 8. The evaluation effect of the model.
Table 9. Correspondence table of comprehensive evaluation value and evaluation level of R.
Table 10. Sensitivity analysis results of the BPNN model.
Tidal Flat Erosion Processes and Their Dynamic Mechanisms on the South Side of Sheyang River Estuary, Jiangsu Province

Tidal flats are accumulations of fine-grained sediment formed under the action of tides and play a very important role in coastal protection. The northern part of the Jiangsu coast, a typical example of the muddy coasts found all over the world, has experienced serious erosion since the Yellow River shifted northward, and the range of erosion has been gradually extending southward, now reaching the south side of the Sheyang River estuary (SYRE). In order to address coastal erosion near the SYRE through protective measures, there is an urgent need for research on the spatial and temporal variation of coastal erosion processes and their control mechanisms in the SYRE and adjacent coastal areas. For this study, the tidal flats on the south side of the SYRE were selected as the study area, and the sediment dynamics in the upper and lower intertidal flat were observed in different seasons to investigate the erosion processes and their dynamic mechanisms. The results show that the tidal current and wave action on the observed intertidal flats are stronger in winter than in summer, and these intertidal flats erode under the combined action of waves and currents. During winter, the net transport of the near-bottom suspended sediment and bedload is primarily towards the southeast, while in summer the direction tends toward the north and northeast. The net transport fluxes are larger in the lower part of the intertidal flat than in the upper part, and also larger in winter than in summer within the lower intertidal flat. Furthermore, the tidal flat erosion in the study area manifests as shoreline retreat and flat surface erosion. The average shoreline retreat rate increased from 23.3 m/a during 2014-2019 to 43.5 m/a during 2019-2021, and the average erosion depth of the lower and upper parts of the intertidal flat over a tidal cycle is, respectively, 1.98 cm and 0.24 cm in winter and 1.65 cm and 0.26 cm in summer. The ratio of the wave-induced bottom shear stress to the tidal current-induced bottom shear stress is 0.40~0.46 in the lower intertidal flat and increases to 0.66~0.67 in the upper intertidal flat, indicating that the intertidal flat erosion in the study area is primarily driven by tidal currents, with significant contributions from wave action, especially in the upper intertidal flat.

Introduction

Tidal flats, formed under the significant influence of tidal action and rich fine-grained sediment supply, represent a distinct geomorphic feature situated at the interface of land, ocean, and atmosphere [1]. They exhibit significant spatial differences influenced by factors such as tidal range, waves, material supply, and vegetation cover [2].
Currently, the global tidal flat area is approximately 127,921 km², mainly distributed in the Asia-Pacific region, in countries such as Indonesia, China, and India [3]. There has been substantial and systematic research on tidal flat sedimentation worldwide, particularly along the North Sea coasts of European countries such as The Netherlands, Germany, and Denmark [4], the Wash Bay in the United Kingdom [5], Fundy Bay in Canada [6], and the Jiangsu tidal flats in China [7,8]. As an important part of the coastal zone, tidal flats play a crucial role in coastal protection [9]. However, due to sea-level rise, land subsidence, and intensive human activities in river basins and coastal zones, the area of tidal flats has gradually decreased globally [10,11], and most muddy coasts have suffered erosion [12,13].

The tidal flats of the Jiangsu coast are famous for both rapid accretion and intense erosion. After the Yellow River shifted northward in 1855, the supply of huge amounts of sediment was cut off, resulting in erosion of the abandoned Yellow River delta (AYRD) and the adjacent coastal area; the intensity of erosion gradually decreases with increasing distance from the AYRD center [8,14]. The coastal area south of the Sheyang River estuary (SYRE) (Figure 1a) exhibited continuous accretion in the early stage, due to the southward transport of sediment eroded from the AYRD [15-17]. However, since 2000, the coastal area south of the SYRE has also begun to erode gradually, and the intensity of this erosion has been increasing [18-22].

Severe coastal erosion has severely impacted local socioeconomic development and damaged residents' property, necessitating ecological protection and restoration efforts. Although there is a general understanding of the causes of coastal erosion [14,18-20], the spatial and temporal changes in coastal erosion near the Sheyang River estuary are complex; the resulting insufficient understanding of the coastal erosion process and its spatiotemporal variations in this area seriously restricts the effectiveness of measures for coastal erosion protection and ecological restoration in the region. Generally speaking, the main methods for researching coastal erosion and accumulation include (1) setting up fixed sections in typical areas and repeatedly monitoring the profile elevation, which is analyzed to explore the dynamic change process of coastal erosion and accumulation [15,16]; (2) using time-series remote sensing images to study coastline changes [17,18]; (3) using remote sensing image interpretation or drone measurements to obtain the distribution of seabed elevation in different periods and analyzing the spatiotemporal variations in coastal erosion and accumulation [21]; (4) using in situ observations of hydrodynamics to calculate the erosion and accumulation processes and analyze their control mechanisms [22-24]; and (5) using numerical modeling of tidal currents, waves, and sediment transport to simulate the evolution of coastal morphodynamics and its control mechanisms [25,26].
To obtain a clearer understanding of the processes of tidal flat erosion along the Jiangsu coast, the tidal flats on the south side of the SYRE were selected as the study area, and the hydrodynamic and sediment transport processes and the spatial pattern of tidal flat erosion near the SYRE and the surrounding coastal area were analyzed through in situ observation of the sediment dynamics. The dynamic mechanisms of tidal flat erosion are discussed toward establishing a scientific basis for coastal protection and ecological restoration near the SYRE.

Study Area

The Sheyang River estuary is located in the central part of the Jiangsu coast (Figure 1a). The average flood tide duration is 4 h 49 min and the average ebb tide duration is 7 h 36 min. The maximum tidal range is 4.16 m, with a mean tidal range of 2.15 m; the tides are irregular semi-diurnal with irregular tidal currents, mainly affected by the rotating tidal waves of the South Yellow Sea and the coastal current along the northern Jiangsu coast [27]. The intertidal geomorphology of the SYRE predominantly comprises tidal flats, which can be categorized into high-tide mud flats, mid-tide silt-mud mixed flats, and low-tide silt-fine sand flats. The area occupied by silt-mud tidal flats at mid- and high-tide levels is 56.20 km², while silt-fine sand tidal flats at low tide cover 84.73 km² [27].

As the southern end of the AYRD, the intensity and extent of coastal erosion at the Sheyang River mouth are continually expanding. Remote sensing monitoring results for the SYRE indicate that since the 1970s, the intensity of coastal erosion has decreased on the north side but gradually increased on the south side, particularly since 2000 [17-19]. Field investigations have found that the coast south of the SYRE is seriously eroded, resulting in the destruction of many fishponds along the coast, and the safety of roads and coastal projects is seriously threatened (Figure 1b). Due to the continuous intensification of coastal erosion, the region is gradually shifting from an accretional muddy coast to an erosive sandy coast [27].
Field Observation

Because of monsoon control in the Jiangsu coastal area, there are significant differences in the dynamic environment, such as tidal currents and waves, between summer and winter [8,26]. In order to understand the seasonal variations in the dynamic conditions that cause tidal flat erosion, we conducted field observations in both winter and summer. Field observations of sediment dynamics were carried out on the intertidal flat south of the SYRE during 6-15 December 2021 and 10-18 June 2022, covering both winter and summer spring-neap tidal cycles (8 days each). The key observed parameters included inundation height, tidal current velocity, flow direction, wave height, and suspended sediment concentration (SSC); the observation sites are shown in Figure 1b. An acoustic Doppler velocimeter (ADV) (Nortek AS, Oslo, Norway) was used at each site. Data collection was set to burst mode, with a sampling interval of 30 min, a sampling frequency of 1 Hz, and a sampling duration of 1024 s per burst during winter. To obtain observation data with a higher time resolution during summer, the sampling interval was set to 10 min, the sampling frequency to 4 Hz, and the sampling duration to 256 s per burst. After the instrument was installed, the height of the ADV probe above the flat surface was measured as 25 cm; since the ADV measures the flow velocity 15 cm from the probe, the actual observed layer was at a height of 10 cm above the flat surface.

Surface sediments (0-1 cm) were collected at each site during low tide, sealed in plastic bags, and transported to the laboratory for analysis. After pretreatment of the fully mixed sediment, including removal of organic matter and carbonates and dispersion, particle size analysis was conducted using a Mastersizer 2000 laser particle size analyzer (Malvern Panalytical Ltd., Malvern, UK) to determine the composition and median grain size of the surface sediments.

It should be noted that since the ADV probe is 25 cm above the seabed and the pressure sensor is 22 cm above the probe, accurate and complete hydrodynamic data were only recorded when the pressure sensor was submerged (i.e., inundation height > 47 cm). Therefore, "early flood tide" and "late ebb tide" refer to the moments when the pressure sensor was just submerged by the flood tide and just emerged during the ebb tide, respectively.

Calculation of Suspended Sediment Concentration

The data recorded by the ADV include the signal-to-noise ratio (SNR), which is related to the suspended sediment concentration (SSC) in the water. It is therefore possible to establish a relationship between SNR and SSC through laboratory experiments, after which the SNR signal recorded by the ADV during in situ observations can be converted to time-series SSC data; this method has been widely used in coastal areas [22,28,29]. Accordingly, we prepared water samples with different SSCs in the laboratory to calibrate the ADV; the water samples for each test were collected and filtered through pre-weighed 0.45 µm filters. After filtration, the residue on the filters was washed with distilled water to remove sea salt. The washed filters were dried in an oven at 40 °C and then re-weighed using the same electronic balance to determine the SSC. Linear regression analysis was performed between log(SSC) and the corresponding SNR, as shown in Figure 2; the SNR of each burst could then be converted into SSC.
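The SNR-to-SSC conversion is a one-variable regression, so it reduces to a few lines of code. The calibration pairs below are invented for illustration (the real pairs underlie Figure 2); only the linear fit between log(SSC) and SNR is taken from the text.

```python
import numpy as np

# Hypothetical calibration data: lab SSC (g/L) vs. the ADV's reported SNR.
ssc_lab = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
snr_lab = np.array([18.0, 24.1, 30.3, 38.0, 44.2, 50.1])

# Fit log10(SSC) = a * SNR + b, as in the paper's regression (Figure 2).
a, b = np.polyfit(snr_lab, np.log10(ssc_lab), 1)

def snr_to_ssc(snr):
    """Convert burst-averaged SNR records into SSC (g/L)."""
    return 10.0 ** (a * np.asarray(snr) + b)

print(snr_to_ssc([25.0, 40.0]))
```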
Calculation of Bottom Shear Stress

In intertidal regions, where the water depth is shallow, the entire water column lies within the boundary layer, and its velocity distribution conforms to a logarithmic profile [30]. Thus, the bottom shear stress induced by tidal currents can be computed using this method, which has been widely applied [31,32]. It assumes that the mean horizontal flow velocity profile within the boundary layer follows a logarithmic distribution and that the flow velocity decays toward the bed due to friction between the flow and the seabed. The bottom shear stress induced by the tidal currents (τc) can then be calculated from [31]

U(z) = (u*/κ) ln(z/z0),  τc = ρw u*²,

in which U(z) is the mean horizontal flow velocity at height z above the seabed, u* is the friction velocity, κ is the von Kármán constant (κ = 0.4), z0 is the bed roughness length, and ρw is the density of seawater (ρw = 1025 kg/m³). Considering the well-developed sand ripples near the observation site, z0 = 6 mm is taken in this paper, following the recommendation of Soulsby [33]. Since the ADV records pressure data at high frequency, the significant wave height Hs can be calculated following Longuet-Higgins [34] as

Hs = 4 (∫ Sη df)^{1/2},

where Sη is the power spectrum of the water level. When converting the pressure measured by the ADV to water level, attenuation needs to be considered; according to linear wave theory, the attenuation coefficient can be expressed as [35]

Kp = cosh k(z + h) / cosh kh,   (4)

where k is the wave number, h is the mean water depth, and z is the depth of the pressure transducer (a negative value). The wave-induced bottom shear stress (τw) can be calculated using the following equation [36]:

τw = (1/2) ρw fwr Ûδ²,

where Ûδ is the wave orbital velocity at the edge of the wave boundary layer, ω is the angular frequency (ω = 2π/T), Âδ (= Ûδ/ω) is the peak orbital excursion, L is the wavelength, g is the gravitational acceleration (9.8 m/s²), and T is the mean period.
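For a single-level ADV record, the law-of-the-wall estimate above amounts to two lines of arithmetic. The sketch below uses the constants given in the text (κ = 0.4, ρw = 1025 kg/m³, z0 = 6 mm, measurement height z = 10 cm); the example velocity is arbitrary.

```python
import numpy as np

KAPPA, RHO_W = 0.4, 1025.0   # von Karman constant; seawater density (kg/m^3)
Z0, Z_OBS = 0.006, 0.10      # bed roughness length and ADV level (m)

def current_shear_stress(u_z):
    """tau_c from a single-level speed U(z):
    u* = kappa * U(z) / ln(z / z0), tau_c = rho_w * u*^2."""
    u_star = KAPPA * np.asarray(u_z) / np.log(Z_OBS / Z0)
    return RHO_W * u_star**2

print(current_shear_stress(0.29))  # N/m^2 for a 0.29 m/s burst-mean speed
```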
Ûδ can be obtained directly from the input parameters H, T, h, and g using the algebraic approximation of Soulsby based on the scaling period Tn = (h/g)^{1/2} [33]. The wave friction factor fwr depends on the hydrodynamic regime [33]; for the rough-turbulent conditions considered here it takes the form

fwr = 0.237 r^{−0.52},

where Re_w is the wave Reynolds number used to identify the flow regime, r (= Âδ/ks) is the relative roughness, ks (= 2.5 d50) is the Nikuradse equivalent sediment grain roughness [33], and d50 is the median grain size of the surface sediment. The bottom shear stress under combined wave-current action (τcw) was calculated following Soulsby [33]:

τm = τc [1 + 1.2 (τw/(τc + τw))^{3.2}],
τcw = [(τm + τw cos φ)² + (τw sin φ)²]^{1/2},

where φ is the angle between the wave and current directions.

Calculation of Erosion and Deposition Fluxes

Deposition of suspended sediment and erosion of the surface sediment are determined by comparing the bottom shear stress (τb) with the critical shear stress for deposition (τcrd) and the critical shear stress for erosion (τcr): if τb < τcrd, the suspended sediment in the water column deposits and the seabed accretes, whereas if τb > τcr, the surface sediment is resuspended and the seabed erodes. The following formulas for sediment erosion and deposition in coastal areas were used [37,38]:

FE = me (τb − τcr)  for τb > τcr,
FD = ωs C (1 − τb/τcrd)  for τb < τcrd,

where FE is the erosion flux (kg/m²·s), reported with a negative sign in the Results; me is the erosion constant, taken as me = 0.002 kg/(N·s) according to the grain size characteristics of the sediment in this study area and the recommendation of Robert and Whitehouse [39]; FD is the settling flux (kg/m²·s); C is the near-bed SSC (g/L), taken as the observed SSC at a height of 10 cm above the tidal flat surface; and ωs is the settling velocity of the suspended sediment (m/s). τcrd generally ranges between 0.06 and 0.1 N/m²; following the recommendation of Robert and Whitehouse [39], τcrd = 0.08 N/m² is adopted in this paper.

Since the sediment in the study area is mainly composed of sand and silt with minimal clay content, τcr can be calculated using the following formulas [33]:

τcr = θcr g (ρs − ρw) d,   (12)
θcr = 0.30/(1 + 1.2 D*) + 0.055 [1 − exp(−0.020 D*)],   (13)
D* = [g (s − 1)/ν²]^{1/3} d,   (14)

where θcr is the critical Shields parameter; ρs is the sediment density (ρs = 2650 kg/m³); d is the sediment grain size; D* is a dimensionless grain size parameter; ν is the kinematic viscosity of the water (ν = 1.36 × 10⁻⁶ m²/s); and s = ρs/ρw. The settling velocity ωs can be expressed as in [40], where K and m are constants, taken as K = 0.00043 and m = 1.06 based on experimental studies [40].

Calculation of Suspended Sediment and Bedload Transport

The instantaneous suspended sediment transport rate f(t) and the net suspended sediment transport flux FH during the tidal cycle at the observation layer can be calculated as [40]

f(t) = C(t) u(t),  FH = Σ f(t) Δt,

where C(t) is the SSC at the observation layer at time t, u(t) is the instantaneous velocity, and Δt is the representative time interval. The bedload transport rate Qb was calculated using the Bagnold method [33,41], with the drag coefficient

CD = [0.40/(1 + ln(z0/h))]²,   (20)

in which qb is the volume transport rate of the bedload; CD is the drag coefficient; θ is the Shields parameter, which can be calculated according to Equation (12); φ is the angle of repose of the sediment, set to φ = 32°; and β is the slope of the tidal flat, calculated from the field topographic profile observations.
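The critical-stress and flux bookkeeping above condenses into a short sketch. The Soulsby formulas (Equations (12)-(14)) reproduce, to rounding, the τcr value quoted in the Results for d50 = 4.04 Φ; the flux function applies the erosion/deposition rules with the constants given in the text, with erosion returned as negative to match the sign convention used in the Results.

```python
import numpy as np

G, RHO_W, RHO_S, NU = 9.8, 1025.0, 2650.0, 1.36e-6

def tau_critical(d50_m):
    """Soulsby critical erosion stress (Equations (12)-(14))."""
    s = RHO_S / RHO_W
    d_star = (G * (s - 1) / NU**2) ** (1 / 3) * d50_m
    theta_cr = 0.30 / (1 + 1.2 * d_star) + 0.055 * (1 - np.exp(-0.020 * d_star))
    return theta_cr * G * (RHO_S - RHO_W) * d50_m

def bed_flux(tau_b, ssc, w_s, tau_cr, tau_crd=0.08, m_e=0.002):
    """Instantaneous bed flux (kg/m^2/s): erosion negative, deposition
    positive; ssc in kg/m^3 (numerically equal to g/L)."""
    if tau_b > tau_cr:
        return -m_e * (tau_b - tau_cr)            # erosion
    if tau_b < tau_crd:
        return w_s * ssc * (1 - tau_b / tau_crd)  # deposition
    return 0.0

print(tau_critical(2 ** -4.04 * 1e-3))  # ~0.118 N/m^2 for d50 = 4.04 phi
```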
Surface Sediment Characters

The results of the grain size analysis show that the surface sediments at the observation sites are dominated by silt and sand, with low clay content (Figure 3). The median grain sizes of the surface sediments at sites B0 and B1 are 4.04 Φ and 3.67 Φ, respectively. The critical shear stress for erosion can then be calculated using Equations (12)-(14), giving τcr = 0.118 N/m² at B0 and τcr = 0.131 N/m² at B1.

Hydrodynamic Characteristics

The observation results for winter (Figure 4, Table 1) show that the maximum inundation heights at B1 and B0 decreased from 3.31 m and 1.69 m during spring tide to 2.58 m and 0.93 m during neap tide, respectively; the average near-bottom current velocities during the tidal cycle decreased from 0.288 m/s and 0.092 m/s during spring tides to 0.196 m/s and 0.064 m/s during neap tides, respectively. The tidal currents generally exhibit a southeast-northwest reciprocating flow. The maximum significant wave heights during the observation are 0.98 m (mean 0.39 m), with wave periods ranging from 2.6 s to 5.8 s (mean 3.8 s), at B1 and 0.72 m (mean 0.29 m), with wave periods ranging from 2.7 s to 6.1 s (mean 4.2 s), at B0. The calculated results indicate that the maximum values of τc at B1 and B0 are 3.392 N/m² and 1.174 N/m², respectively, during spring tides and 0.606 N/m² and 0.123 N/m², respectively, during neap tides, while the maximum values of τw are 0.751 N/m² and 0.361 N/m², respectively, and the maximum values of τcw are 3.394 N/m² and 1.177 N/m², respectively.

The results for summer indicate that the variations in inundation height and near-bottom tidal current velocity between neap and spring tides were not significant (Figure 5, Table 1), and the tidal current also shows a southeast-northwest reciprocating flow. The maximum significant wave heights during the observation in summer are 0.86 m (mean 0.38 m), with wave periods ranging from 2.9 s to 6.9 s (mean 3.9 s), at B1 and 0.68 m (mean 0.31 m), with wave periods ranging from 2.0 s to 5.9 s (mean 4.0 s), at B0. The calculated results indicate that the maximum values of τc at B1 and B0 are 1.662 N/m² and 0.980 N/m², respectively, during the observation in summer, with 0.550 N/m² and 0.287 N/m² for τw, respectively, while the maximum values of τcw are 1.665 N/m² and 0.980 N/m², respectively.

Suspended Sediment Concentration

The results calculated for the SSC (Figures 4g and 5g, Table 1) reveal that the maximum near-bed SSC values at B1 and B0 were 1.968 g/L and 0.412 g/L, respectively, in winter and 2.400 g/L and 0.469 g/L, respectively, in summer; the SSC was significantly larger in the lower intertidal flat than in the upper intertidal flat. The near-bed SSC in the study area is characterized by clear temporal variations: (1) Within the tidal cycle, the maximum SSC occurs at the early stage of the flood tide and decreases gradually with increasing immersion time and inundation height, followed by a slow increase after high tide, reaching a higher value again at the later stage of the ebb tide; the concentration is greater during flood tide than during ebb tide. (2) At the spring-neap tide scale, the average SSC at B1 decreases from 0.706 g/L during spring tides to 0.350 g/L during neap tides in winter, with little variation at B0, while there is no obvious difference in SSC between spring and neap tides at either site in summer. (3) At the seasonal scale, the average SSCs at B1 and B0 are 0.507 g/L and 0.145 g/L, respectively, in winter and 0.589 g/L and 0.136 g/L, respectively, in summer.

Erosion-Sedimentation Fluxes

The results calculated for the bottom shear stress indicate that τc and τw at B1 and B0 are mostly greater than τcr in both winter and summer (Figures 4 and 5), indicating that the tidal flat in the study area experiences erosion most of the time. The results calculated for FE and FD reveal that the intertidal flat in the study area was mainly eroded during the observations; the instantaneous erosion fluxes caused by τcw at B1 and B0 had maximum values of −6.53 × 10⁻³ kg/m²·s and −2.12 × 10⁻³ kg/m²·s, respectively, in winter and −3.07 × 10⁻³ kg/m²·s and −1.72 × 10⁻³ kg/m²·s, respectively, in summer. The statistical results show that the maximum total erosion fluxes within a tidal cycle at B1 and B0 were −77.68 kg/m² and −15.64 kg/m², respectively, in winter and −45.51 kg/m² and −11.88 kg/m², respectively, in summer, while the maximum deposition fluxes of suspended sediment were only 0.10 kg/m² and 0.07 kg/m², respectively, in winter and 0.86 kg/m² and 0.06 kg/m², respectively, in summer (Figure 6, Table 2). The study area exhibits net erosion in both winter and summer, with net erosion fluxes of −622.31 kg/m² and −516.50 kg/m², respectively, at B1 and −75.37 kg/m² and −83.00 kg/m², respectively, at B0. The erosion of the intertidal flat has distinctive spatiotemporal variation characteristics: the erosion and deposition fluxes are larger in the lower than in the upper intertidal flat, and the erosion fluxes in the lower intertidal flat are larger in winter than in summer, while those in the upper intertidal flat exhibit the opposite pattern.
Suspended Sediment and Bedload Transport Fluxes

The calculated results show that the net transport fluxes of near-bottom suspended sediment within the tidal cycle at B1 and B0 range from 72.9 kg/m to 10,167.6 kg/m and from 2.8 kg/m to 539.4 kg/m, respectively, in winter and from 74.6 kg/m to 9455.7 kg/m and from 13.6 kg/m to 1237.2 kg/m, respectively, in summer (Figure 7). The maximum bedload transport rates caused by τcw at B1 and B0 are 5.78 × 10⁻² kg/m·s and 0.79 × 10⁻² kg/m·s, respectively, in winter and 1.94 × 10⁻² kg/m·s and 0.65 × 10⁻² kg/m·s, respectively, in summer. The statistical results show that the net bedload transport fluxes within the tidal cycle at B1 and B0 range from 3.12 kg/m to 331.60 kg/m and from 0 to 28.25 kg/m, respectively, in winter and from 1.95 kg/m to 191.98 kg/m and from 1.24 kg/m to 30.44 kg/m, respectively, in summer (Figure 8). Overall, as shown in Figure 9 and Table 3, the near-bottom suspended sediment and bedload on the tidal flat of the study area are mainly transported southeastward in winter and northward in summer, with the net transport fluxes being greater in the lower than in the upper intertidal flat; the net transport fluxes in the lower intertidal flat are greater in winter than in summer, while those in the upper intertidal flat are smaller in winter than in summer.

Spatiotemporal Variations in Tidal Flat Erosion-Accretion near the Sheyang River Estuary

The geomorphology and sediment composition of tidal flats are controlled by hydrodynamics, sediment supply, and biological activities [1]. The global tidal flat area is gradually decreasing [3] due to sea-level rise, land subsidence, and intensifying human activities in river basins and coastal zones, of which human activities are the main driving factor [10], causing most tidal flats in the world to experience erosion [42-44]. Coastal erosion generally manifests as shoreline retreat and bed erosion [13,42], and coastal erosion and accretion are fundamentally controlled by the sediment balance at all spatiotemporal scales [45,46]. Since its diversion into the Yellow Sea in 1128, the Yellow River brought a huge amount of sediment into the Yellow Sea, providing sufficient fine-grained sediment for the development of tidal flats in the central and northern parts of the Jiangsu coast, which accreted rapidly [8]. However, the Yellow River shifted northward in 1855, cutting off this massive sediment supply and leading to erosion of the AYRD and the adjacent coastal zone; the erosion intensity gradually decreases with increasing distance from the AYRD toward the SYRE [8,13], while the tidal flat south of the SYRE accreted in the early period due to the southward transport of sediment eroded from the AYRD [14,15,47]. According to monitoring data from the China Marine Environmental Quality Bulletin, the proportion of eroded coastline length on the northern side of the SYRE increased from 58.4% in 2013 to 68.3% in 2017, while the average erosion rate decreased from 26.4 m/a to 10.5 m/a. Over time, the eroded coastline has continuously moved southward; especially since 2000, the coast south of the SYRE has also gradually begun to erode, with progressive intensification [17-21]. Comparison of satellite images from different times shows that coastline erosion and destruction of aquaculture ponds have occurred in the study area and along the adjacent coastline (approximately 4 km in length) since 2014: the rate of coastline retreat ranged from 12 m/a to 44 m/a during 2014-2019, with an average rate of 23.3 m/a, and the length of eroded coastline was 920 m. From 2019 to 2021, however, the coastline retreat rate increased rapidly, from 13 m/a to 193 m/a, with an average rate of 43.5 m/a, and the length of eroded coastline increased to 3840 m, leading to the destruction of aquaculture ponds and rapid coastal retreat (Figure 10).

Coastal erosion in central Jiangsu has been expanding, primarily due to changes in sediment sources and hydrodynamics. On the one hand, after the diversion of the Yellow River, the interruption of its massive sediment supply led to rapid erosion of the AYRD, and the eroded sediments were transported southward, thereby continuing to provide abundant sediment for the central Jiangsu tidal flats. However, with the implementation of coastal erosion protection projects, the sediment supply generated by coastal erosion gradually decreased, causing coastal erosion to extend southward [8,13,48]; the transitional area of erosion-accretion conversion on the central Jiangsu coast moved from the SYRE in the 1980s to the coasts of the Xinyang River estuary (XYRE) and the Doulong River estuary (DLRE) [49]. On the other hand, the coastline of the AYRD originally protruded about 20 km seaward with a magnificent subaqueous delta, which blocked the propagation of tidal waves from north to south; with the retreat of the Jiangsu coastline, the subaqueous delta of the AYRD flattened, allowing tidal waves propagating from north to south to flow more smoothly, thereby enhancing the tidal flow on the southern side of the AYRD [50] and thus intensifying coastal erosion on the south side of the SYRE. Additionally, since 2000 there has been a rapid increase in port construction and reclamation projects on the central Jiangsu coast [20], which has significantly impacted coastal erosion near the SYRE and the adjacent coastal area. Activities such as dredging of the Sheyang Port (SYP) navigation channels, construction of the double guide levees in the SYP area, construction of sluices, and reclamation have influenced the hydrodynamic and erosion-accretion patterns near the SYRE, enhancing the coastal erosion near the SYRE [51-53]. Observation and modeling results indicate that large-scale tidal flat reclamation has altered the pattern of tidal energy distribution along the Jiangsu coast, strengthening the M2 tidal constituent [54] and leading to changes in sediment transport patterns and tidal flat geomorphology [25,55].

Mechanisms of Tidal Flat Erosion on the South Side of the Sheyang River Estuary

Sediment transport over intertidal flats is controlled by the combined action of tidal currents and waves. In the case of sufficient fine-grained sediment supply, suspended sediment and bedload are transported landward under the influences of the "settling- and scour-lag effects" and the time-velocity asymmetry of the tidal currents, forming broad tidal flats [1,15,56]; wave energy then penetrates the intertidal area with difficulty, and wave-driven sediment transport is significantly smaller than tidal current-driven transport [46]. However, when the supply of fine sediment is interrupted, the environment dominated by tidal action is disrupted and the previously suppressed wave action gradually becomes active, with the tidal flats gradually changing from accretion to erosion under the combined action of waves and tidal currents [46]. The observation results show that there is strong wave action on the intertidal flat of the study area even under normal weather conditions. Although generally lower than τc, τw is significantly higher than τc during the early stage of the flood tide, at high tide, and during the late stage of the ebb tide (Figures 4 and 5). In terms of tidal cycle averages, the proportion of time with τcw > τcr in the lower intertidal flat during winter and summer is 84% and 82%, respectively, and 50% and 57%, respectively, in the upper intertidal flat, higher than the proportions observed on the tidal flats of the Yangtze River estuary [57]. The statistical results show that τw amounts to approximately 40% and 67% of τc in the lower and upper parts of the intertidal flat, respectively, in winter, and 46% and 66%, respectively, in summer. This indicates that the tidal current makes the larger contribution to tidal flat erosion in the lower intertidal flat on the south side of the SYRE, while the contribution of wave action gradually increases toward the upper intertidal flat, consistent with field observations and theoretical analyses [1,46,58].
Generally, tidal currents transport sediments landward when there is sufficient sediment supply, whereas wave action is more pronounced under conditions where sediment is transported seaward [1]. The observed data show relatively strong wave action in the study area. Based on the calculated total net eastward transport fluxes of FH and Qb during a tidal cycle at B0 and B1, and from the perspective of sediment balance, we assume that if the net eastward transport flux across B1 is smaller than that across B0, there is a net input of sediment between the two stations, i.e., deposition occurs on the tidal flat between them; conversely, if the net eastward transport flux across B1 is greater than that across B0, there is a net export of sediment between the two stations, i.e., the tidal flat between them erodes. The statistical results show that the total net eastward sediment load (i.e., FH + Qb) induced by τcw on the intertidal flat between B0 and B1 is 789.5 kg and 64.9 kg in winter and summer, respectively; in other words, a net export of sediment occurs between the two stations in both winter and summer. Considering a wet sediment density of 1960 kg/m³ [59], the mean erosion depth of the intertidal flat between the two stations over the whole year is calculated as 19.07 cm/a, which is less than the erosion depth near the AYRD (25 cm/a~33 cm/a) and more than that north of the SYRE (13 cm/a~18 cm/a) [27]. It should be noted that this net export flux considers only the near-bed suspended sediment transport, not the whole water column; the result may therefore be affected by the bed erosion intensity. According to the erosion-deposition results at the two observation sites, in both winter and summer the erosion flux is much larger than the deposition flux on the intertidal flat of the study area, and the intertidal flat is dominated by net erosion (Figure 6). The net erosion depths induced by the combined action of waves and tidal currents in the lower and upper parts of the intertidal flat within a tidal cycle range from 0.37 cm to 3.96 cm and from 0 cm to 0.80 cm, respectively, with average erosion depths of 1.98 cm and 0.24 cm, respectively, in winter, and from 0.77 cm to 2.32 cm and from 0.06 cm to 0.61 cm, respectively, with average erosion depths of 1.65 cm and 0.26 cm, respectively, in summer. These erosion depths are greater than those observed on many tidal flats in the Yangtze River estuary, Yellow River estuary, Min River estuary, and North Sea [21,57,59-62], and the erosion rates are also higher than those of the tidal flats near the XYRE in the southern part of the study area [63]. From Figure 9, it can be seen that both bedload and near-bed suspended sediment are transported southeastward in winter and northeastward in summer, and the total transport flux is greater in winter than in summer, indicating that over the whole year the sediment eroded from the tidal flats in this area is transported net southeastward. Modeling results have likewise indicated that the sediment eroded from the tidal flats near the SYRE is mainly transported to the southeast and accumulates on the tidal flats near the Dafeng coastal area [25,26].
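To make the sediment-balance arithmetic concrete, the sketch below converts a per-cycle net export into an annual erosion depth. The B0-B1 spacing and the cycle count are explicitly assumed placeholder values (the text does not state them here), and the 789.5 kg and 64.9 kg figures are used merely as example magnitudes, so the printed numbers will not reproduce the reported 19.07 cm/a; only the structure of the calculation is illustrated.

```python
WET_DENSITY = 1960.0   # wet sediment density, kg/m^3 [59]
DIST_B0_B1 = 500.0     # ASSUMED cross-flat spacing between B0 and B1, m
CYCLES_PER_YEAR = 706  # ~8766 h / 12.42 h per semi-diurnal tidal cycle

def annual_erosion_depth(net_export_kg_per_m_per_cycle):
    """Spread the per-cycle net export (kg per m alongshore) over the
    B0-B1 flat and convert the mass loss to an annual depth in cm/a."""
    mass_per_m2 = net_export_kg_per_m_per_cycle / DIST_B0_B1
    return mass_per_m2 / WET_DENSITY * CYCLES_PER_YEAR * 100.0

# Example magnitudes on the order of the seasonal exports in the text:
print(annual_erosion_depth(789.5), annual_erosion_depth(64.9))
```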
The calculated results show a very high rate of bed erosion in the study area. One reason is that the tidal flats near the SYRE are currently experiencing severe erosion due to a severe sediment supply deficit [13], which increases the bed slope [16,64] and strengthens wave action, further aggravating the bed erosion rate. Another reason lies in the limitations of the observation methods used in this paper. Given the instrument parameters and installation methods used in this study, the observation data are inaccurate when the inundation height is less than 0.5 m. Therefore, the fluxes of sediment transport and erosion-deposition calculated in this study were obtained only for periods when the inundation height exceeded 0.5 m, which does not cover the entire tidal cycle, especially the early stage of the flood tide and the late stage of the ebb tide, when the inundation height is very shallow and wave action is more significant. Observations also show that in extremely shallow-water environments (water depth less than 0.2 m) there are very evident variations in tidal flat erosion and accretion, with significant erosion caused by wave-current interaction during the early stage of the flood tide [23] and substantial sediment deposition during the late stage of the ebb tide [24]; these processes were not included in this study. In addition, the intertidal flat is rich in algae and abundant benthic biological activity, which significantly influence the critical shear stress for erosion of the sediments [65,66]. In calculating the critical shear stress for erosion of the surface sediments in this study, only the sediment grain size composition was considered, and biological factors were not taken into account; thus, the actual critical shear stress for erosion may be underestimated. In summary, the erosion depth and erosion rate of the intertidal flat near the SYRE calculated in this study may be overestimated.

Conclusions

The tidal flats near the Sheyang River estuary in Jiangsu Province are currently experiencing severe and intensifying erosion, with the eroded shoreline constantly migrating southward. The research findings of this study indicate the following:

(1) The surface sediments of the tidal flats on the southern side of the Sheyang River estuary are dominated by sand and silt, with coarser sediment in the lower intertidal flat than in the upper part. The hydrodynamics are significantly stronger in the lower than in the upper intertidal flat, and the near-bed tidal current velocity shows clear seasonal variation, being larger in winter than in summer; the waves show no obvious seasonal variation.

(2) The erosion processes on the south side of the Sheyang River estuary have evident spatiotemporal variation characteristics: the intertidal flat in the study area is in a state of erosion most of the time, with a higher erosion flux in the lower than in the upper intertidal flat; in the lower part of the intertidal zone, the erosion flux is higher in winter than in summer, while the upper intertidal flat exhibits the opposite pattern.

(3) Near-bed suspended sediment and bedload show net seaward transport in both the lower and upper parts of the intertidal flat, with net transport toward the southeast in winter and the north-northeast in summer.
(4) The erosion of tidal flats on the south side of the Sheyang River estuary is characterized by coastline retreat and bed erosion. The average retreat rate of the coastline in the study area increased from 23.3 m/a during 2014-2019 to 43.5 m/a during 2019-2021, with the proportion of eroded coastline length increasing from 23% to 96%. The average erosion depth in the lower and upper parts of the intertidal flat within a tidal cycle is 1.98 cm and 0.24 cm, respectively, in winter and 1.65 cm and 0.26 cm, respectively, in summer.

(5) The ratio of the wave-induced bottom shear stress to the tidal-current-induced bottom shear stress is 0.40~0.46 in the lower intertidal flat and increases to 0.66~0.67 in the upper intertidal flat, indicating that intertidal flat erosion in the study area is primarily driven by tidal currents, with significant contributions from wave action, especially in the upper intertidal flat.

As a next step, we will improve our research methods, including supplementing the observation and analysis of suspended sediment transport flux over the full water column and of sedimentary dynamic processes in extremely shallow-water environments (<0.5 m), and studying the impact of biological activities on sediment erosion-deposition processes, toward providing a clearer understanding of the erosion processes and control mechanisms affecting tidal flats along the central Jiangsu coast.

Figure 1. Sketch map of the study area and observation sites: (a) location of study area; (b) observation sites; (c) elevation profile indicating the installation of the on-site instruments.

Figure 3. Grain size frequency curves of sediments in the upper and lower intertidal flats of the study area.

Figure 4. Near-bottom time-series sediment dynamics parameters in winter 2021 at sites B0 and B1: (a) inundation height; (b) near-bottom northward current velocity; (c) near-bottom eastward current velocity; (d) significant wave height; (e) bottom shear stress induced by tidal currents; (f) bottom shear stress induced by waves; and (g) near-bottom SSC. The red curve is the data from B1 in the lower intertidal flat, and the blue curve is the data from B0 in the upper intertidal flat. The yellow bands represent the number of tidal cycles.
Figure 5. Near-bottom time-series sediment dynamics parameters in summer 2022 at sites B0 and B1: (a) inundation height; (b) near-bottom northward current velocity; (c) near-bottom eastward current velocity; (d) significant wave height; (e) bottom shear stress induced by tidal currents; (f) bottom shear stress induced by waves; and (g) near-bottom SSC. The red curve is the data from B1 in the lower intertidal flat, and the blue curve is the data from B0 in the upper intertidal flat. The yellow bands represent the number of tidal cycles.

... winter and summer, with net erosion fluxes of −622.31 kg/m² and −516.50 kg/m², respectively, at B1 and −75.37 kg/m² and −83.00 kg/m², respectively, at B0. The erosion of the intertidal flat has distinctive spatiotemporal variation characteristics, i.e., the erosion and deposition fluxes are larger in the lower than in the upper intertidal flat, and the erosion fluxes in the lower intertidal flat are larger in winter than in summer, while those in the upper intertidal flat exhibit the opposite pattern.

Figure 6. Total erosion-deposition fluxes within a tidal cycle during the observation period in the study area: (a) erosion and (b) deposition fluxes at B1 in winter; (c) erosion and (d) deposition fluxes at B0 in winter; (e) erosion and (f) deposition fluxes at B1 in summer; (g) erosion and (h) deposition fluxes at B0 in summer.

... from 2.8 kg/m to 539.4 kg/m, respectively, in winter and from 74.6 kg/m to 9455.7 kg/m and from 13.6 kg/m to 1237.2 kg/m, respectively, in summer (Figure 7). The maximum bedload transport rates caused by τ_cw at B1 and B0 are 5.78 × 10⁻² kg/(m·s) and 0.79 × 10⁻² kg/(m·s), respectively, in winter and 1.94 × 10⁻² kg/(m·s) and 0.65 × 10⁻² kg/(m·s), respectively, in summer. The statistical results show that the net bedload transport fluxes within the tidal cycle at B1 and B0 range from 3.12 kg/m to 331.60 kg/m and from 0 to 28.25 kg/m, respectively, in winter and from 1.95 kg/m to 191.98 kg/m and from 1.24 kg/m to 30.44 kg/m, respectively, in summer (Figure 8).

Figure 7. Transport flux of near-bed suspended sediment within a tidal cycle on the tidal flat in the study area: (a) northward and (b) eastward transport fluxes at B1 in winter; (c) northward and (d) eastward transport fluxes at B0 in winter; (e) northward and (f) eastward transport fluxes at B1 in summer; (g) northward and (h) eastward transport fluxes at B0 in summer.
Figure 8. Bedload transport flux within a tidal cycle on the tidal flat in the study area: (a) northward and (b) eastward at B1 in winter; (c) northward and (d) eastward at B0 in winter; (e) northward and (f) eastward at B1 in summer; (g) northward and (h) eastward at B0 in summer.

Figure 9. Net transport fluxes of (a) near-bed suspended sediment and (b) bedload on the tidal flat within a spring-neap tidal cycle in winter and summer.

Figure 10. Coastline evolution from 2014 to 2021 near the study area.

Table 1. Statistical values of observed and calculated sediment dynamic parameters. Notes: h: inundation height; H_s: significant wave height; T: wave period; u_a: mean horizontal current velocity; SSC: suspended sediment concentration; τ_c, τ_w, and τ_cw represent the bottom shear stress induced by tidal currents, waves, and the combined action of waves and currents, respectively.

Table 2. Erosion and deposition rates within a tidal cycle, listed by season and statistical value; erosion rates are given in ×10⁻³ kg/(m²·s) and deposition rates in ×10⁻⁵ kg/(m²·s). Notes: FE-c, FE-w, and FE-cw are erosion rates induced by tidal currents, waves, and the combined action of waves and currents, respectively; FD-c, FD-w, and FD-cw are deposition rates induced by tidal currents, waves, and the combined action of waves and currents, respectively. A negative value represents erosion, and a positive value represents deposition.

Table 3. Total net erosion flux, net transport fluxes, and direction of near-bed suspended sediment and bedload within a spring-neap tidal cycle during the observation.
2024-04-24T15:08:15.142Z
2024-04-22T00:00:00.000
{ "year": 2024, "sha1": "de9360c8ff32ba282aa90178fb3cf86731ec2e7e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-1312/12/4/687/pdf?version=1713768928", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2149b4b2de165fafc24faca2f1cbef3568a30318", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
256002523
pes2o/s2orc
v3-fos-license
Revisiting instanton corrections to the Konishi multiplet

We revisit the calculation of instanton effects in correlation functions in $\mathcal{N}=4$ SYM involving the Konishi operator and operators of twist two. Previous studies revealed that the scaling dimensions and the OPE coefficients of these operators do not receive instanton corrections in the semiclassical approximation. We go beyond this approximation and demonstrate that, while operators belonging to the same $\mathcal{N}=4$ supermultiplet ought to have the same conformal data, the evaluation of quantum instanton corrections for one operator can be mapped into a semiclassical computation for another operator in the same supermultiplet. This observation allows us to compute explicitly the leading instanton correction to the scaling dimension of operators in the Konishi supermultiplet, as well as to their structure constants in the OPE of two half-BPS scalar operators. We then use these results, together with crossing symmetry, to determine instanton corrections to the scaling dimensions of twist-four operators with large spin.

Introduction

Recently, impressive progress has been achieved in understanding the properties of four-dimensional maximally supersymmetric $\mathcal{N}=4$ Yang-Mills theory in the planar limit, see [1]. Thanks to the integrability of the theory in this limit, it becomes possible to compute various quantities for an arbitrary 't Hooft coupling constant. At weak coupling, the resulting expressions agree with the results of explicit perturbative calculations, whereas at strong coupling they match the predictions coming from the AdS/CFT correspondence. Much less is known, however, about the properties of $\mathcal{N}=4$ SYM beyond the planar limit and, in particular, about nonperturbative effects induced by instanton corrections.

The motivation for studying instanton corrections is multifold. Firstly, $\mathcal{N}=4$ SYM possesses S-duality [2-4], namely, invariance under modular SL(2,Z) transformations acting on the complexified coupling constant
$$\tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}\,. \qquad (1.1)$$
Instantons are expected to play a crucial role in restoring the invariance of the spectrum of scaling dimensions under the S-duality group. Secondly, previous studies revealed a remarkable similarity between instanton corrections to correlation functions in $\mathcal{N}=4$ SYM at weak coupling and the dual supergravity amplitudes induced by D-instantons in type IIB string theory [5-7]. This suggests that the AdS/CFT correspondence can be tested beyond the planar limit to include instanton effects. Finally, the crossing symmetry of correlation functions leads to nontrivial constraints on the conformal data of the theory. They have been used in [8] to derive bounds for the scaling dimensions of leading-twist operators of various spins. These bounds are expected to be saturated at fixed points of the S-duality group [9,10], where instanton contributions cannot be neglected. In this paper, we revisit the calculation of instanton corrections to various correlation functions in $\mathcal{N}=4$ SYM at weak coupling.
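As a brief explicit aside (our addition, using only the convention (1.1) above): the one-instanton weight is exponentially suppressed at weak coupling, while S-duality acts on τ by modular transformations,
$$e^{2\pi i\tau} = e^{-8\pi^2/g^2}\, e^{i\theta}\,, \qquad \tau \;\to\; \frac{a\tau + b}{c\tau + d}\,,\quad \begin{pmatrix} a & b\\ c & d\end{pmatrix}\in SL(2,\mathbb{Z})\,,\quad ad-bc=1\,,$$
which is why instanton effects are invisible in perturbation theory and yet are essential for modular invariance of the spectrum.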
To compute such corrections we follow the standard approach (see the reviews [11-13]). Namely, we decompose all fields into the sum of classical instanton solutions and fluctuations and then integrate out the latter. In the semiclassical approximation the quantum fluctuations can be neglected and the correlation functions reduce to finite-dimensional integrals over the collective coordinates of instantons,
$$\langle O(1)\dots O(n)\rangle_{\rm inst} = \int d\mu_{\rm phys}\, e^{-S_{\rm inst}}\, O(1)\dots O(n)\,, \qquad (1.2)$$
where all fields on the right-hand side are replaced by their expressions on the instanton background. In the simplest case of the SU(2) gauge group, the one-instanton solution depends on bosonic collective coordinates ρ and x_0, defining the size of the instanton and its location, as well as on 16 fermionic coordinates ξ^A_α and η̄^A_α̇ (with A = 1, ..., 4 and α, α̇ = 1, 2), reflecting the invariance of the equations of motion under N = 4 superconformal transformations. The corresponding integration measure over the collective coordinates in the one-instanton sector for the SU(2) gauge group is [5]
$$\int d\mu_{\rm phys}\, e^{-S_{\rm inst}} = \frac{g^8}{2^{34}\pi^{10}}\, e^{2\pi i\tau} \int d^4x_0 \int_0^\infty \frac{d\rho}{\rho^5}\, \prod_{A=1}^{4} d^2\xi^A\, d^2\bar\eta^A\,. \qquad (1.3)$$
The relation (1.3) can be generalized to the SU(N) gauge group for the one-instanton solution [6,11] and to multi-instanton solutions at large N, see [7].

Applying this approach we can systematically take into account the instanton effects in correlation functions and then extract the corresponding corrections to the conformal data of the theory. In particular, the instanton corrections to the scaling dimensions of operators have the following general form [13],
$$\gamma^{\rm inst}(g) = \sum_{n\ge 1}\Big(e^{2\pi i n\tau} + e^{-2\pi i n\bar\tau}\Big) \sum_{k\ge 0} \gamma_{n,k}(N)\, g^{2k}\,, \qquad (1.4)$$
where the two terms inside the brackets, e^{2πinτ} and e^{−2πinτ̄}, describe the leading contribution of n instantons and n anti-instantons, respectively, and perturbative fluctuations produce subleading corrections suppressed by powers of the coupling constant g².

Previous studies of four-point and two-point correlation functions revealed [15-18] that for many operators in N = 4 SYM, including those with the lowest bare dimension (the Konishi operator) and twist-two operators, the leading instanton corrections vanish, γ_{n,0}(N) = 0. Going beyond the semiclassical approximation, one can envisage two possible scenarios: (i) the instanton corrections vanish to all orders in the coupling constant, γ_{n,k} = 0, due to some symmetry, or (ii) the instanton corrections do not vanish, namely γ_{n,k} ≠ 0 for k ≥ K, but they are suppressed by a power of the coupling constant, g^{2K}. The first scenario seems to be incompatible with the expected S-duality of N = 4 SYM, whereas testing the second scenario would require taking into account quantum corrections, making the calculation of instanton effects extremely complicated. This explains, in part, why little progress has been made in improving the existing results over the last decade.

In this paper we demonstrate, for the first time, that the scaling dimensions of the Konishi operator, as well as of the other members of the corresponding N = 4 supermultiplet, receive instanton corrections at order O(g^4), that is, γ_{n,0} = γ_{n,1} = 0 but γ_{n,2} ≠ 0. We identify the leading nonvanishing correction and compute the corresponding coefficient γ_{n,2}. We also evaluate the three-point correlation function of the Konishi operator and two half-BPS scalar operators and show that it receives a nonvanishing instanton correction at order O(g²).
While operators of the same supermultiplet ought to have the same anomalous dimension (and related OPE coefficients with two half-BPS operators), we observe that quantum instanton computations for some operators map to semiclassical instanton computations for others! This allows us to make progress. Using these results, we obtain the instanton contribution to the asymptotic behavior of the four-point correlation function of half-BPS operators in the light-cone limit and then employ crossing symmetry to compute the instanton corrections to twist-four operators with high spin.

The paper is organized as follows. In section 2 we review the conventional instanton calculus in N = 4 SYM and apply it to compute instanton effects in various correlation functions involving the Konishi operator. We show that, due to the different coupling-constant dependence of the leading instanton corrections to two- and four-point correlation functions, it is possible to compute O(g^4) corrections to the scaling dimension of the Konishi operator. Furthermore, we consider the anomalous dimensions as well as the OPE coefficients (with two half-BPS operators) corresponding to general twist-two operators. We show that at order O(g²) only the OPE coefficient corresponding to the Konishi supermultiplet gets instanton contributions. In section 3 we use this information in order to infer the instanton contribution to the light-cone asymptotics of the correlation function of four half-BPS operators. This information, together with crossing symmetry, is then used to compute the instanton corrections to twist-four operators with large spin. Concluding remarks are presented in section 4. Useful definitions are included in two appendices.

2 Instanton corrections to correlation functions

In this section, we evaluate instanton corrections to two- and three-point correlation functions of various operators in N = 4 SYM in the semiclassical approximation. As was explained in the previous section, the calculation amounts to evaluating the product of operators in the background of instantons and integrating the resulting expression over the collective coordinates.

2.1 Instanton in N = 4 SYM

The Lagrangian of N = 4 super Yang-Mills theory, eq. (2.1), describes a gauge field A_µ, (anti)gaugino fields λ^A_α and λ̄_{α̇A}, as well as scalars φ^{AB} satisfying the reality condition φ̄_{AB} = ½ ε_{ABCD} φ^{CD}. Here we used spinor notations (see appendix A for our conventions) and denoted by F_{αβ} and F̄_{α̇β̇} the self-dual and anti-self-dual parts of the gauge field strength tensor. All fields take values in the SU(N) algebra, e.g. A_µ = A^a_µ T^a, with the generators satisfying [T^a, T^b] = i f^{abc} T^c and normalized as tr(T^a T^b) = ½ δ^{ab}.

By definition, the instanton is a solution to the classical equations of motion. Due to our choice of normalizations in the Lagrangian (2.1), all elementary fields in the instanton background, A_µ, λ^A_α, λ̄_{α̇A} and φ^{AB}, are independent of the coupling constant. Their explicit expressions for the SU(N) gauge group are rather complicated, and only a few terms in their expansion in powers of the 8N fermionic collective modes are currently known [11-13]. A significant simplification occurs, however, for the SU(2) gauge group. In this case, the general one-instanton solution can be obtained by applying N = 4 superconformal transformations to a special solution to the equations of motion,
$$A^{(0)}_\mu(x) = \frac{2\,\eta^a_{\mu\nu}\,T^a\,(x-x_0)_\nu}{(x-x_0)^2+\rho^2}\,,\qquad \lambda^A_\alpha = \bar\lambda_{\dot\alpha A} = \phi^{AB} = 0\,, \qquad (2.2)$$
where A^{(0)}_µ is the well-known one-instanton solution in pure Yang-Mills theory.
It depends on the collective coordinates ρ and x_0, defining the size and the position of the instanton, respectively. Here η^a_{µν} are the 't Hooft symbols, and the SU(2) generators are related to the Pauli matrices as T^a = σ^a/2.

The leading terms of the expansions (2.3) have been worked out in ref. [12]. For our purposes we will also need subleading terms. Their direct calculation is more involved; e.g., finding φ^{(6),AB} amounts to applying Q- and S̄-transformations to (2.2) six times in succession. There is, however, a shortcut that simplifies the calculation significantly. Namely, the subleading corrections depend on the instanton field A_{αα̇} = A_µ(σ^µ)_{αα̇} and on the fermion collective coordinates ξ^A_α and η̄^{α̇}_A. It turns out that the requirement for the fields (2.3) to have the correct properties with respect to conformal symmetry, R-symmetry and gauge transformations fixes their general form up to a few constants. The latter can be determined by requiring the fields (2.3) to satisfy the classical equations of motion derived from (2.1). Going through the calculation, we have found the expressions for the subleading corrections to the gauge field, A^{(4)}_{αα̇} = i A^{(4)}_µ (σ^µ)_{αα̇}, and to the scalar, φ^{(6),AB}. Together with the leading correction φ^{(2),AB}, they are given in (2.4), where we have introduced a short-hand notation for a particular linear x-dependent combination of fermionic modes,
$$\zeta^A_\alpha = \xi^A_\alpha + x_{\alpha\dot\alpha}\,\bar\eta^{\dot\alpha A}\,, \qquad (2.5)$$
and, in (2.6), for various Lorentz contractions of fermion modes. Here F_{αβ} = F_{βα} is the field strength tensor for the SU(2) instanton, with i and j the SU(2) indices.

A peculiar feature of φ^{(6),AB} in (2.4) is that it depends on the fermionic modes only through the variable ζ defined in (2.5). This is not the case, however, for the gauge field A^{(4)}_{αα̇}. The difference is due to the different transformation properties of the fields under conformal transformations. By virtue of superconformal invariance, the action of N = 4 SYM evaluated on the instanton configuration (2.3) does not depend on the fermionic modes ξ and η̄ and is given by S_inst = −2πiτ, where τ is the complex coupling constant (1.1).

2.2 Normalization of operators

Later in this section we shall compute the leading instanton corrections to the scaling dimensions and OPE coefficients of various composite gauge-invariant operators built from the scalar fields φ^{AB} and φ̄_{AB} = ½ ε_{ABCD} φ^{CD}. Before doing this, we have to carefully examine the normalization of the operators. The reason for this is that, due to our definition of the Lagrangian (2.1), the free scalar propagator depends on the coupling constant,
$$\langle \phi^{AB}(x_1)\,\bar\phi_{CD}(x_2)\rangle \sim g^2\, D(x_{12})\,,$$
where we used the shorthand notation x_12 = x_1 − x_2 and D(x) = 1/(4π²x²). Taking this into account, we define the simplest scalar operators of bare dimension 2,
$$O_{20'}(x,Y) = \frac{1}{g^2}\, Y_{AB}\, Y_{CD}\, {\rm tr}\big[\phi^{AB}(x)\,\phi^{CD}(x)\big]\,, \qquad K(x) = \frac{1}{g^2}\, {\rm tr}\big[\bar\phi_{AB}(x)\,\phi^{AB}(x)\big]\,, \qquad (2.10)$$
where Y_{AB} is an antisymmetric SU(4) tensor satisfying ε^{ABCD} Y_{AB} Y_{CD} = 0. It was introduced to project the product of two scalar fields onto the irreducible SU(4) representation 20′. The half-BPS operator O_20′ is annihilated by half of the N = 4 supersymmetries, and its scaling dimension is protected from quantum corrections. The Konishi operator K is the simplest unprotected operator. It is also convenient to introduce the operator K′ of bare dimension 4, defined in (2.11), where we used the standard notation for the complex scalar fields Z = φ^{14} and X = φ^{24}. The operator K′ is a supersymmetric descendant of the Konishi operator, K′ ∼ δ²_Q δ²_Q K, and, as a consequence, its anomalous dimension and OPE coefficient with two half-BPS operators coincide with those of the Konishi operator.
The definitions of the operators in (2.10) and (2.11) involve inverse powers of the coupling constant, one per scalar field. They were introduced in order to ensure that the correlation functions scale as O(g^0) in the Born approximation, see (2.12). The presence of different powers of the coupling constant in the definitions of the operators (2.10) and (2.11) leads to important consequences for the instanton corrections to the correlation functions. Since the fields in the instanton background (2.3) do not depend on the coupling constant, the dependence on g² of a correlation function in the semiclassical approximation comes solely from the SU(2) integration measure over the moduli of instantons (1.3) and from the powers of 1/g² accompanying each operator. In this way, we find
$$\langle K(1)\,K(2)\rangle_{\rm inst} = O(g^4\, e^{2\pi i\tau})\,, \qquad \langle K'(1)\,K'(2)\rangle_{\rm inst} = O(e^{2\pi i\tau})\,, \qquad (2.13)$$
where the extra factor of g^4 in the first relation arises due to the different powers of 1/g² in the definitions (2.10) and (2.11).

In general, two-point correlation functions develop logarithmic singularities and generate corrections to the anomalous dimensions of operators. Then, assuming that the correlation functions (2.13) are different from zero, we would deduce that the instanton corrections to the anomalous dimensions of the operators K and K′ should have a different dependence on the coupling constant: γ_{K′} = O(e^{2πiτ}) whereas γ_K = O(g^4 e^{2πiτ}). However, this contradicts the fact that the two operators belong to the same supermultiplet and, therefore, their anomalous dimensions have to coincide. In other words, superconformal symmetry dictates that γ_{K′} has to vanish in the semiclassical approximation. To get a nonzero result for γ_{K′} we have to go beyond this approximation and take into account quantum fluctuations around the instanton configurations. The calculation of quantum corrections to ⟨K′(1) K′(2)⟩_inst is far more complicated, but the resulting expression for γ_{K′} ought to match γ_K = O(g^4 e^{2πiτ}) obtained in the semiclassical approximation. We shall use this observation to compute the leading instanton correction to the scaling dimension of the Konishi operator in section 2.4.

To observe another interesting feature of the Konishi operator, we examine the coupling dependence of the following correlation functions,
$$\langle K\,O_{20'}\,O_{20'}\rangle_{\rm inst} = O(g^2\, e^{2\pi i\tau})\,, \qquad \langle O_{20'}\,O_{20'}\,O_{20'}\,O_{20'}\rangle_{\rm inst} = O(e^{2\pi i\tau})\,, \qquad (2.14)$$
where each operator brings in a factor of 1/g², multiplied by the factor of g^8 exp(2πiτ) coming from the SU(2) integration measure. Performing the conformal partial wave expansion of the four-point correlation function in the second relation in (2.14), we can identify the contribution of the operators whose anomalous dimensions and/or OPE coefficients scale as O(e^{2πiτ}). At the same time, as follows from the first relation in (2.14), the OPE coefficient of the Konishi operator scales as O(g² e^{2πiτ}) and, therefore, provides a vanishing contribution to the four-point correlation function of half-BPS operators at order O(e^{2πiτ}), in agreement with the findings of refs. [15,16]. However, we can also turn the logic around and use the first relation in (2.14) to predict the leading O(g² e^{2πiτ}) contribution of the Konishi supermultiplet to the four-point correlation function! A direct calculation of such a correction would require taking into account quantum corrections.

2.3 Instanton profile of operators

As a first step, we evaluate the operators (2.10) and (2.11) in the instanton background for the SU(2) gauge group. We start with the Konishi operator (2.10) and replace the scalar fields by their expressions (2.3).
This leads to the expansion
$$K = K^{(4)} + K^{(8)} + K^{(12)} + K^{(16)}\,, \qquad (2.15)$$
where K^{(n)} denotes the contribution containing n fermion modes. Notice that K^{(n)} is independent of the coupling constant. Since the scalar field has at least two fermion modes, the expansion starts with K^{(4)}. By virtue of the SU(4) symmetry, the number of fermion modes in the subsequent terms of the expansion increases by four units. The last term of the expansion contains the maximal number of fermion modes. We find, in agreement with [17], that the first term on the right-hand side of (2.15) vanishes,
$$K^{(4)} \sim \epsilon_{ABCD}\,(\zeta^2)^{AB}\,(\zeta^2)^{CD} = 0\,,$$
where ζ = ξ + xη̄ is the linear combination of fermion modes defined in (2.5). Here, in the second relation, we took into account that (ζ²)^{AB} = ζ^{αA} ζ^B_α is symmetric with respect to the SU(4) indices. As a result, the expression for the Konishi operator contains at least eight fermion modes, and the leading term K^{(8)}, given in (2.17), is proportional to ζ^8 = ∏_{A,α} ζ^{αA}, the product of eight fermion modes. Expressions for the higher components of (2.15) are more complicated, but we do not need them for our purposes.

Let us consider the half-BPS operator O_20′(x, Y) defined in (2.10). Since this operator is annihilated by half of the N = 4 supercharges, it depends on four ξ and four η̄ fermion modes. As a consequence, its expansion in powers of Grassmann variables is shorter compared with (2.15),
$$O_{20'} = O^{(4)}_{20'} + O^{(8)}_{20'}\,,$$
where the two terms on the right-hand side involve four and eight fermion modes, respectively, and are given in (2.19). As was already mentioned, the scalar fields φ^{(2),AB} and φ^{(6),AB} depend on the fermion modes only through their linear combination (2.5), and the same is obviously true for the components (2.19). This property alone implies that O^{(8)}_{20'} = 0: were it different from zero, it would be proportional to ζ^8 = (ξ + xη̄)^8 and, therefore, would contain terms with more than four ξ's and four η̄'s, in contradiction with the half-BPS condition. The relation O^{(8)}_{20'} = 0 can also be verified by a direct calculation. Thus, the operator O_20′(x, Y) contains exactly four fermion ζ-modes and is given by (2.20).

This feature allows us to verify the known properties of correlation functions of half-BPS operators. We recall that, in order for a correlation function to be different from zero, the product of operators should involve terms containing sixteen fermion modes. Since the products of two and three half-BPS operators have 8 and 12 fermion modes, respectively, the corresponding two- and three-point correlation functions do not receive instanton corrections in the semiclassical approximation. This result is in agreement with the known fact that the above-mentioned correlation functions are protected from quantum corrections and are given by their Born-level expressions. The simplest correlation function that receives instanton corrections involves four half-BPS operators. We shall return to this correlation function in section 3.

Let us finally examine the operator K′ defined in (2.11). Replacing the scalar fields by their expressions (2.3), we find in a similar manner the expansion K′ = K′^{(8)} + K′^{(12)} + K′^{(16)}, where the lowest term involves eight fermion modes. As in the previous case, the dependence on the fermion modes enters these expressions through their linear combination ζ^A_α defined in (2.5). According to (2.4), the scalar field φ^{(2),AB} is given by the product of two fermion modes, φ^{(2),AB} ∼ ζ^A ζ^B. Taking into account that Z = φ^{14} and X = φ^{24}, we obtain that K′^{(8)} ∼ (ζ^{4α} ζ^4_α)² = 0, which is consistent with the findings of [18]. In a similar manner, we can show that K′^{(12)} ∼ (ζ^{4α} ζ^4_α) ζ^4_β = 0.
However their product ξ 8η8 is the SU(4) singlet and can not contribute to K ′ (16) due to mismatch of the SU(4) quantum numbers leading to K ′ (16) = 0. We therefore conclude that K ′ = 0 in the instanton background and, as a consequence, all correlation functions involving this operator vanish in the semi-classical approximation. Instanton corrections to Konishi operator We are now ready to evaluate the leading instanton corrections to correlation functions involving the Konishi operator for the SU(2) gauge group. We start with the two-point function where the SU(2) integration measure dµ phys is defined in (1.3). Here in the first relation we replaced operators by their instanton profile (2.15) and in the second relation retained terms involving 16 fermion modes. The terms with higher number of fermion modes do not contribute. Replacing K (8) with (2.17) we get where the integration is over the size and position of the instanton. We verify that this expression has the expected dependence (2.13) on the coupling constant. The integral on the right-hand side of (2.24) develops a logarithmic divergence that comes from integration over instantons of small size, ρ → 0, located close to one of the operators, |x 10 | → 0 or |x 20 | → 0. It indicates that the instanton corrections modify the scaling dimension of the operator. It is convenient to regularize the integral by modifying the integration measure over x 0 The resulting integral in (2.24) is well-defined for ǫ < 0 and develops a simple pole as ǫ → 0. Combining (2.24) with the Born term (2.12) (evaluated for the SU(2) gauge group) we obtain JHEP12(2016)005 Following a standard procedure, we apply the dilatation operator i (x i ∂ i ) to both sides of this relation, and find that the correction to the scaling dimension of K is given by the residue at the pole. Thus, we conclude that the leading instanton correction to the anomalous dimension of the Konishi operator in the SU(2) gauge group is given by where we have added a complex conjugated term to take into account the contribution from the anti-instanton. Notice that γ K has negative sign. The result (2.27) holds for the SU(2) gauge group. Its generalization to SU(N ) gauge group will be discussed in section 2.6. For the three-point function of a Konishi operator and two half-BPS operators we can proceed analogously. We find where in the second relation we replaced operators by their expressions on the instanton background, eqs. (2.17) and (2.20), and introduced a short-hand notation for the integrals over bosonic and fermion collective coordinates with ζ 2 (x) given by (2.6) and (2.5). Both integrals are well-defined and their dependence on x-and Y -variables is uniquely fixed by conformal and R-symmetry, respectively. Going through the calculation we get where y 2 12 is defined in (2.12). Plugging (2.30) into (2.28) we obtain the final result for the instanton contribution. Taking into account the anti-instanton contribution and combining this with the Born term we obtain K by the same factor that enters the right-hand side of (2.31) 5 (2.32) Instanton corrections to twist-two operators The twist-two operators provide the leading contribution to four-point correlation functions in the light-cone limit x 2 12 → 0. In N = 4 SYM these operators belong to the same supermultiplet which allows us to restrict our consideration to a particular twist-2 operator modes. Since the product of two half-BPS operators contains 8 modes, the remaining 8 modes should be soaked up by O(z). 
Replacing Z = Z^{(2)} + Z^{(6)} + ... and A = A^{(0)} + A^{(4)} + ... in (2.34), we find that the corresponding contribution is given by (2.35), where the subscript indicates the number of fermion zero modes and E^{(0)}(z_1, z_2) depends on A^{(0)}. Taking into account (2.4) and recalling that Z = φ^{14}, we can evaluate O^{(8)}(z) explicitly. Going through a lengthy calculation (the details are presented in [19]), we obtain (2.37), where the y²_ij are defined in (2.12), with all Y_{3,AB} vanishing except Y_{3,14} = −Y_{3,41} = 1/2. Here the product of y-variables keeps track of the R-charges of the operators, whereas the nontrivial dependence on the x-variables is described by the D-function defined in appendix B.

To extract the correlation function ⟨O_20′(1) O_20′(2) O_S(0)⟩, we expand the expression in the second line of (2.37) in powers of z and decompose it over conformal partial waves. In this way we find that (2.37) receives a nonvanishing contribution from only one partial wave, with S = 2. In other words, the three-point correlation function in the semiclassical approximation is different from zero only for twist-two operators with spin S = 2, see (2.38), where we have added the contribution from the anti-instanton. For S = 0, this correlation function is protected from quantum corrections. For higher spins, S > 2, the instanton corrections to (2.38) scale at least as O(g^4 e^{2πiτ}). For S = 2 we verified that the relation (2.38), divided by the Born-level result, coincides with the analogous expression for the Konishi operator (2.31). This is not surprising, given that the two operators O_{S=2} and K belong to the same N = 4 supermultiplet, but it serves as a nontrivial check of our calculation.

It is straightforward to extend the above considerations to the two-point correlation function of twist-two operators. Computing the leading instanton correction to the two-point correlation function of the light-ray operators (2.34) and projecting them onto the operators O_S with the help of (2.34), we obtain the analogous result (see [20] for details on the projection procedure): the instanton corrections vanish for all spins except S = 2. In the latter case, they generate the same correction to the scaling dimension of the operator O_{S=2} as to the Konishi operator (2.27). We recall that the two operators belong to the same supermultiplet and their anomalous dimensions ought to coincide.

2.6 Generalization to the SU(N) gauge group

Having determined the contribution of a single (anti)instanton to correlation functions in N = 4 SYM for the SU(2) gauge group, we can now generalize the above results to the SU(N) gauge group and, in addition, take into account the contribution of an arbitrary number of (anti)instantons at large N. The instanton for the SU(N) gauge group has 8N fermion modes. Among them there are 16 exact supersymmetric and superconformal zero modes, ξ and η̄, respectively. The remaining 8N − 16 'nonexact' fermion modes do not correspond to any symmetry, and the corresponding SU(N) instanton action S_inst develops a nontrivial dependence on these modes. This leads to a significant simplification in computing the correlation functions. As in the previous case, for the instanton correction to be different from zero, all fermion modes should be saturated. Then, in the semiclassical approximation, the exact modes are absorbed by the instanton profile of the operators, whereas the nonexact modes are saturated by S_inst.
As a consequence, the contribution of the nonexact modes to the correlation functions factorizes into a universal N-dependent factor κ_N [6,11], given in (2.40), which takes into account both the embedding of the SU(2) instanton in SU(N) and the integration over the nonexact modes (2.41). In application to the Konishi operator, we can use (2.40) together with (2.27) and (2.32) to get its anomalous dimension and OPE coefficient for the SU(N) gauge group, see (2.42). Here we inserted an additional factor of 3/(N² − 1) to account for the N-dependence of the two- and three-point correlation functions in the Born approximation (see eq. (2.12)).

The relations (2.42) can be further generalized to include the contribution of multi-instantons. As was shown in [7], the calculation simplifies dramatically in the large-N limit due to the fact that the integration over the moduli space of n instantons is dominated by a saddle point. Repeating the analysis of [7], we find that in this limit the profile of the Konishi operator in the n-instanton background is proportional to its one-instanton expression. This makes the evaluation of instanton corrections to the Konishi operator very similar to that performed in [7] for the half-BPS operator. In this way, going through the calculation, we find the generalization (2.43) of (2.42), where we added the contribution of n anti-instantons and introduced a coefficient given by a sum over the positive divisors of n. We would like to emphasize that the relations (2.43) hold up to corrections suppressed by powers of 1/N and g². The latter come from taking into account quantum fluctuations around the instanton configuration.

3 Instanton corrections to higher spin operators from crossing symmetry

In the previous section we computed the instanton corrections to the scaling dimensions of the Konishi and twist-two operators, as well as to the OPE coefficients defining their contribution to the product of two half-BPS operators. Using these results, we can determine the leading instanton contribution to the four-point correlation function at short distances, x_1 → x_2, and in the light-like limit, x²_12 → 0. In this section we use this information, together with crossing symmetry, in order to compute instanton corrections to twist-four operators with large spin.

3.1 Properties of the correlation function

The four-point correlation function of half-BPS operators in N = 4 SYM with the SU(N) gauge group has the following structure [22],
$$G_4 = G_{\rm short} + G_{\rm long}\,,$$
where G_short and G_long denote the contributions from (semi-)short multiplets and from long multiplets, respectively. The former contribution does not depend on the coupling constant, whereas the latter can be expressed in terms of a single function A_long(u, v) of the conformal cross-ratios
$$u = \frac{x_{12}^2\, x_{34}^2}{x_{13}^2\, x_{24}^2}\,, \qquad v = \frac{x_{23}^2\, x_{14}^2}{x_{13}^2\, x_{24}^2}\,.$$
The prefactor carries the R-charge dependence of the operators, and we have introduced the notation
$$u = z\bar z\,, \qquad v = (1-z)(1-\bar z)\,, \qquad (3.4)$$
and similarly αᾱ = y²_12 y²_34/(y²_13 y²_24) and (1 − α)(1 − ᾱ) = y²_23 y²_14/(y²_13 y²_24).

The function A_long(u, v) admits a decomposition in terms of superconformal blocks [23], eq. (3.5), where the sum runs over superconformal primary operators (and hence in the singlet of SU(4)) with even Lorentz spin S and scaling dimension ∆ ≥ S + 2, and a_{∆,S} is the square of the canonically normalized OPE coefficient. The contribution of superconformal descendants is taken into account by the superconformal blocks (3.6), with k_β(z) = ₂F₁(β/2, β/2; β; z) and the complex z and z̄ variables defined in (3.4).
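As a purely numerical aside (our addition, not part of the paper): the function k_β(z) = ₂F₁(β/2, β/2; β; z) entering the conformal blocks can be evaluated for |z| < 1 directly from the Gauss hypergeometric series. A minimal C sketch, with helper names and tolerances of our choosing:

#include <stdio.h>
#include <math.h>

/* Gauss hypergeometric series 2F1(a,b;c;z), valid for |z| < 1. */
static double hyp2f1(double a, double b, double c, double z)
{
    double term = 1.0, sum = 1.0;
    for (int n = 0; n < 2000; ++n) {
        /* Ratio of consecutive series terms: (a+n)(b+n) z / ((c+n)(n+1)). */
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z;
        sum += term;
        if (fabs(term) < 1e-15 * fabs(sum)) break;
    }
    return sum;
}

/* k_beta(z) = 2F1(beta/2, beta/2; beta; z), as defined in the text. */
static double k_beta(double beta, double z)
{
    return hyp2f1(0.5 * beta, 0.5 * beta, beta, z);
}

int main(void)
{
    /* Sample evaluation; the arguments are illustrative only. */
    printf("k_4(0.3) = %.12f\n", k_beta(4.0, 0.3));
    return 0;
}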
It is convenient to decompose A_long(u, v) into the free-theory result A_Born plus the quantum (coupling-dependent) contribution A,
$$A_{\rm long}(u,v) = A_{\rm Born}(u,v) + A(u,v)\,. \qquad (3.7)$$
The explicit expression for A_Born(u, v) is not needed for our purposes, but it can be derived from the analysis of [24]. At weak coupling, the expansion of A(u, v) runs in powers of the 't Hooft coupling constant a = g²N/(4π²) and the (anti)instanton weight factors, e^{2πiτ} and e^{−2πiτ̄}. To leading order in these parameters we have (3.8), where the D̄-functions are introduced in appendix B. Here the dots denote subleading terms suppressed by powers of the expansion parameters. Higher-order perturbative corrections to A(u, v) were found in [25-27].

Invariance of G_4 under the exchange of any pair of points leads to crossing symmetry relations.⁷ Each term on the right-hand side of (3.8) satisfies this relation.

⁷ While the free-theory contributions G_short and A_Born(u, v) mix with each other.

3.2 Instanton corrections to light-cone asymptotics

In the light-like limit x²_12 → 0, or equivalently u → 0, the leading asymptotic behavior of (3.7) comes from the contribution of twist-two operators with scaling dimension ∆_S = 2 + S + γ_S, eq. (3.10), where the collinear conformal block f_{∆,S}(v) describes the small-u limit of (3.6). The first term on the right-hand side of (3.10), with S = 0, corresponds to the Konishi supermultiplet. It gives the leading asymptotic behaviour of A(u, v) at short distances, x_12 → 0, or equivalently u → 0 and v → 1.

According to (3.7) and (3.8), the instanton correction to A_long(u, v) takes the form (3.12), where A^{(1)}_inst ∼ u²v² D̄_{4444} is given by the second term on the right-hand side of (3.8), and A^{(2)}_inst is the first subleading correction, which we shall discuss in a moment. Since A^{(1)}_inst(u, v) scales as O(u² ln u) in the light-cone limit, it does not affect the leading asymptotic behaviour (3.10). This is in agreement with the known fact that the scaling dimensions and the OPE coefficients of the Konishi and twist-two operators do not receive instanton corrections at leading order, see e.g. [28] and [16].

In the previous section we have shown that, among all twist-two operators, only those belonging to the Konishi supermultiplet receive O(g²) instanton corrections to their OPE coefficients and O(g^4) corrections to their scaling dimensions. Together with (3.10), this allows us to fix the small-u behaviour of the subleading instanton correction in (3.12), see (3.13), where we neglected O(g^4) corrections to the scaling dimension ∆_{S=0} = 2 + γ_K, since they do not enter at this order. Here a^{(inst)}_{2,0} denotes the instanton correction to the OPE coefficient of the Konishi operator; it can be written in terms of the free-theory coefficient, see (3.14). The ratio of a^{(inst)}_{2,0} to the free-theory coefficient coincides with the instanton correction to (C_K/C^{(0)}_K)² (see eq. (2.42)).

Note that A^{(2)}_inst(u, v) does not contain O(u log u) terms. This should of course be the case, as there are no instanton corrections to the anomalous dimensions of twist-two operators at this order; such corrections first appear at order O(g^4). In the next subsection we use the crossing symmetry of A_inst(u, v), together with the small-u behaviour (3.13), in order to compute instanton corrections to certain higher-spin operators.

3.3 Crossing symmetry and higher spin operators

Before proceeding, let us make an important comment.
As follows from the light-cone asymptotic behaviour A^{(1)}_inst(u, v) ∼ u² log u, the anomalous dimensions of operators of twist four and higher do receive instanton corrections at the leading O(g^0) order [16]. The conformal partial wave analysis shows [29] that only operators with spin zero receive such corrections. We show below that the situation is very different at order O(g²).

Let us examine A_inst(u, v) in the double light-cone limit u, v → 0. Combining the leading asymptotics (3.13) with the crossing relation A_inst(u, v) = A_inst(v, u), we infer that in the small u, v limit A_inst(u, v) should contain a term proportional to u² log u/v. Following [30,31], we then ask how such asymptotics can arise from a conformal partial wave expansion. The crucial observation is that, due to the presence of the u² log u term, this must come from twist-four operators with anomalous dimensions γ_{4,S} = O(g²(e^{2πiτ} + e^{−2πiτ̄})). More precisely, the expansion (3.5) should contain a contribution from a tower of twist-four operators with Lorentz spin S reproducing this behaviour, see (3.17). It is important to emphasize that, as opposed to twist-two operators, for a given spin S the sum on the left-hand side receives contributions from many twist-four operators. To simplify formulae, we do not add an additional index to distinguish such operators.

A very important point about (3.17) is that, given that each term on the left-hand side diverges only logarithmically as v → 0, we need an infinite number of them in order to reproduce the power-law divergence 1/v on the right-hand side of (3.17). Furthermore, the divergence comes from the region of large spin, S ≫ 1. The corresponding OPE coefficients a_{4,S} in the free theory were found in [24]; in the large-spin limit they reduce to (3.18).

The leading asymptotic behaviour of the left-hand side of (3.17) for v → 0 can be computed following [32] (see footnote 19 there). Matching the leading 1/v terms on both sides of (3.17), we obtain the large-spin behaviour of γ_{4,S}. Replacing a^{(inst)}_{2,0} with (3.14), we arrive at our final expression (3.20) for the large-spin behaviour of the anomalous dimensions of twist-four operators. We remind the reader that, since twist-four operators are degenerate, this anomalous dimension should in principle be understood as an average weighted by the tree-level OPE coefficients.

The following comments are in order. Firstly, the large-spin asymptotics γ_{4,S} ∼ 1/S² is consistent with the expected behaviour of the anomalous dimensions of double-trace operators [33]. Moreover, γ_{4,S} is suppressed by a factor of (N² − 1) compared with the anomalous dimension of the Konishi operator (see eq. (2.42)), which is also a characteristic feature of double-trace operators. Secondly, while at order O(g^0) only twist-four operators with zero spin receive instanton corrections, at order O(g²) operators with arbitrarily high spin do. It would be very interesting to compute these corrections directly. Finally, the relation (3.20) describes the contribution of one (anti)instanton in the SU(N) gauge group. Making use of (2.43), it is straightforward to generalize it to multi-instantons in the large-N limit.

4 Conclusions

In the present paper we have computed, in the semiclassical approximation, instanton corrections to various correlation functions involving the half-BPS operator O_20′ and the Konishi operator K.
Our main results are the explicit expressions (2.42) and (2.43) for the leading instanton contribution to the anomalous dimension of the Konishi operator, as well as for the OPE coefficient of the Konishi operator with two half-BPS operators. In addition, we considered twist-two operators of general spin S and showed that the only operators that receive the leading instanton corrections are those which carry spin S = 2 and belong to the Konishi supermultiplet. Using this information, we derived the asymptotic light-cone behaviour of the correlation function of four half-BPS operators and then employed crossing symmetry to determine the instanton contribution to the anomalous dimensions of twist-four operators in the limit of large spin.

Our computations show a very interesting interplay between semiclassical versus quantum instanton corrections and the symmetries of N = 4 SYM. An instance of this interplay arises when comparing the instanton corrections to the scaling dimensions of the Konishi operator K and its dimension-four supersymmetric descendant K′ ∼ δ²_Q δ²_Q K. Superconformal symmetry implies that these two operators should have the same anomalous dimensions. On the other hand, while the leading nonvanishing instanton correction to ∆_K comes from a semiclassical computation, from the perspective of ∆_{K′} computing the same correction would require going through a highly nontrivial analysis of quantum fluctuations! Something similar happens when considering the four-point correlation function of half-BPS operators. Subleading instanton corrections to this correlator involve including quantum fluctuations but, on the other hand, the asymptotic behaviour of such corrections at short distances and on the light-cone is controlled, through the OPE, by two- and three-point correlation functions, in which instanton corrections at the same order in the coupling constant can be derived from a semiclassical computation. All this seems to hint at the existence of some hidden structure underlying instanton corrections in N = 4 SYM.

There are several directions in which this work can be extended. As was mentioned in the Introduction, the spectrum of the dilatation operator in N = 4 SYM should be invariant under S-duality. Viewed as functions of the coupling constant, the scaling dimensions of operators carrying the same quantum numbers with respect to the global symmetries cannot cross each other, and they should be invariant under modular transformations. Since modular invariant functions independent of the θ-angle ought to be constant, the scaling dimensions should have a nontrivial dependence on θ. One of the direct consequences of our study is that it opens up the possibility of constructing such functions in N = 4 SYM by taking into account the instanton corrections. Our results represent the first explicit calculation of the instanton correction to the Konishi operator, which is the lowest unprotected operator at weak coupling. The obtained expression (2.42) can be thought of as representing the first term in the expansion of the modular invariant function ∆_K(τ, τ̄) at weak coupling. It would be interesting to try to determine the modular properties of ∆_K for an arbitrary coupling. As a first step in this direction, one can use the results of this paper to improve the interpolating procedure proposed in [9,10].

Another interesting question concerns the S-duality properties of the OPE coefficients.
In distinction to the scaling dimensions, they do not have to satisfy the von Neumann-Wigner non-crossing rule [34] and, as a consequence, they may transform nontrivially under modular transformations of the coupling constant. In general, the properties of structure constants under S-duality are poorly understood. Our results may provide the first hints in this direction for the simplest case, corresponding to the OPE coefficient of two half-BPS operators and an unprotected operator. For the Konishi operator the leading instanton correction is given by (2.42); finding higher-order corrections is an open problem.

It would also be interesting to understand better the interplay between semiclassical and quantum instanton corrections mentioned above. Our results seem to hint at unexpected simplifications when considering quantum corrections around instanton backgrounds in N = 4 SYM. In addition to this, we demonstrated that the semiclassical result for the Konishi operator, combined with crossing symmetry, leads to a definite prediction for the instanton corrections to the scaling dimensions of twist-four operators with large spin. Computing these corrections directly would require including quantum fluctuations. This remains a largely unexplored subject.

Acknowledgments

... Institute, respectively, for hospitality where part of this work has been done. The work of L.F.A. was supported by ERC STG grant 306260. L.F.A. is a Wolfson Royal Society Research Merit Award holder. The work of G.P.K. was supported in part by the French National Agency for Research (ANR) under contract StrongInt (BLANC-SIMI-4-2011).

B Definition of the D-functions

The D-functions are defined as
$$D_{\Delta_1\Delta_2\Delta_3\Delta_4}(x_1,\dots,x_4) = \int \frac{d^4x_0\, d\rho}{\rho^5}\, \prod_{i=1}^{4}\left(\frac{\rho}{\rho^2+(x_i-x_0)^2}\right)^{\Delta_i}\,.$$
Using the Schwinger parameterization together with the Symanzik star formula, we get
$$D_{\Delta_1\Delta_2\Delta_3\Delta_4} = K \int \prod_{i<j}\frac{d\delta_{ij}}{2\pi i}\,\Gamma(\delta_{ij})\,(x_{ij}^2)^{-\delta_{ij}}\,,$$
where K = π² Γ(½ Σ_i ∆_i − 2)/(2 ∏_i Γ(∆_i)) and the integration in the last relation goes parallel to the imaginary axis. The variables δ_ij are not independent and satisfy the relations
$$\sum_{j\neq i}\delta_{ij} = \Delta_i\,,$$
leaving only two variables independent. It is convenient to choose the latter as δ_12 = j_1 and δ_23 = j_2. In this way, we obtain a representation in which the D̄-function only depends on the conformal cross-ratios (3.4). Here the integration contours are chosen in such a way that the poles generated by the products of gamma-functions in the first and second lines are located on different sides.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
2023-01-20T15:28:14.110Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "6829f855b2bd14d08807efa50accb39baa746ded", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1007/jhep12(2016)005", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "6829f855b2bd14d08807efa50accb39baa746ded", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
232352509
pes2o/s2orc
v3-fos-license
CUDA Tutorial -- Cryptanalysis of Classical Ciphers Using Modern GPUs and CUDA

CUDA (formerly an abbreviation of Compute Unified Device Architecture) is a parallel computing platform and API model created by Nvidia that allows software developers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing. This 90-page tutorial introduces the CUDA concepts in an easy-to-grasp and interactive way, with ready-to-run code samples tested on Windows and Linux. Starting from scratch, a complete stand-alone GPU tool is implemented which automatically performs a ciphertext-only attack on ciphertexts encrypted by monoalphabetic substitution and columnar transposition. Throughout this process, you will learn how to architect the tool, which optimizations could significantly accelerate the routines, why the choice of an adequate metaheuristic is critical, and how to draw sketches to enlighten the design process. This tutorial will be incorporated in the CrypTool book as chapter 13.

Introduction

CUDA (formerly an abbreviation of Compute Unified Device Architecture) is a parallel computing platform and API model created by Nvidia allowing software developers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing. 1 Throughout this tutorial, we introduce the CUDA concepts in an easy-to-grasp interactive way. However, to fully benefit from the tutorial, some basic prerequisites are desirable:

• A basic familiarity with C, C++ or a similar language
• A basic understanding of cryptology
• A basic understanding of heuristic techniques
• Possession of a CUDA-capable device

Section 2 gives a brief overview of the graphics processing unit (GPU). In this section we set up our programming environment and learn how to exploit the multi-core architecture of the GPU by solving some trivial problems. Section 3 introduces the concept of GPU threads and analyzes how modern GPUs could be beneficial for performing automatic cryptanalysis of classical cryptosystems. Starting from scratch, we implement a complete stand-alone GPU tool for automatically decrypting ciphertexts (ciphertext-only attack) encrypted by monoalphabetic substitution (MAS). Throughout this process, we will learn how to architect the tool, which optimizations could significantly empower our routines, why the choice of an adequate metaheuristic 2 is critical, and how to draw sketches to enlighten the design process by proactively solving upcoming issues.

The thread-count limitation problem can be easily solved by organizing the threads in blocks, as shown in Section 4. Then, we set up another critical part of the cryptanalysis, besides the choice of metaheuristic: the pseudo-random number generator (PRNG) nested in the device itself. Further discussion of why PRNGs are significantly beneficial to classical cryptanalysis can be found in subsections 3.3 and 4.3. Having this at hand, we can further optimize our cryptanalysis tool by using a stand-alone pool of pseudo-random numbers, which allows us to develop more flexible metaheuristics.

Section 5 briefly discusses some basic CUDA debugging tools used to differentiate errors yielded by the host from those yielded by the device. Furthermore, we sketch out a compact overview of the CUDA memory model and explain why using shared memory could significantly increase the speed of our future applications. Then, Section 6 briefly discusses the usage of dynamically allocated memory on the device and, more specifically, why it should be avoided.
As an example, we show how this usage can be avoided by using predefined macros. Section 7 first summarizes the most critical improvements made throughout the tutorial. Then, by following the design principles outlined during the previous sections, a stand-alone CUDA application is constructed for automatic cryptanalysis of ciphertexts encrypted by the single-columnar transposition cipher. At the end, the provided CUDA application is compared with a state-of-the-art tool for cryptanalysis. The final Section 8 summarizes the critical paradigms that should be addressed in the process of designing a CUDA cryptanalysis tool. Please note that all CUDA examples, as well as all accompanying files needed for their compilation, are provided inside the Repository folder. It can be downloaded from https://www.cryptool.org/assets/ctb/CUDA-Tutorial_Repository.zip.

Practical introduction to GPU architecture

This section gives an overview of the GPU architecture. The examples are accompanied by visual interpretations and illustrations. For the sake of comprehensibility, we purposely avoid some technical details.

What is the abstract model of a given GPU? Imagine you have a function F, which just prints Hello! on the screen. If you launch this function once, a single output row will be printed. This is not the case when using a GPU, which is a multi-core system and as such possesses lots of parallel processors called CUDA cores. Each of these cores is similar to a computer processor. Taking this into account, one might already wonder: What will happen if we launch the function F on all CUDA cores supplied by a given GPU? How do we program and launch the function F on the GPU in the first place? Moreover, how do we spread the work of a given problem over a multi-core architecture in an efficient and productive way? In this tutorial, we are going to answer all these questions.

Let's first construct a handy visual interpretation of a GPU multi-core architecture. Figure 1 treats each square as a GPU core. In the current abstract example we have a total of 81 cores. Since we have initialized and loaded the function F on each one of them, the GPU will launch 81 instances of the function F and, as a consequence, we will have a total of 81 printed rows with the message Hello!.

Figure 1: Example of GPU multi-core architecture

Let's set up our environment and write down our very first CUDA program. For this, we first need to download and install the CUDA Development Tools. For Windows, an installation guide can be found in [NVIDIA, 2020a]; for Linux, the corresponding installation guide can be found in [NVIDIA, 2020b]. The Linux installation is pretty much straightforward. In contrast, for the Windows installation, we need to further install and link the C++ compiler tools. The following guide summarizes the major stages of setting up a Windows machine:

• Install the CUDA driver.
• Install the CUDA toolkit. Make sure the nvcc compiler is present by typing nvcc --version in the command prompt.
• Make sure Visual Studio (VS) is present on the machine. Furthermore, make sure that the VS module Desktop Development with C++ is installed. In case VS or the VS module is missing, you can always grab the free Visual Studio 2019 community version to be found in [Microsoft, 2021].
• Locate the cl.exe file. For example, if you are using the Visual Studio 2019 community version, the default location of the file is similar to C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\<version>\bin\Hostx64\x64.
• Go to the directory where your CUDA examples are downloaded. In case you want to compile a file with the name example.cu, the following command should be typed:

nvcc -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\<version>\bin\Hostx64\x64" example.cu -o example

In case no compilation errors occur, the compiled file example.exe should be created in the current directory.

The CUDA platform is designed to work with programming languages such as, but not limited to, C, C++, and Fortran. Throughout this tutorial we are going to use the C language. Throughout the tutorial, by host we always refer to the CPU, while by device we refer to the GPU. Every CUDA program can be logically divided into two parts: the first part consists of the host-related code routines, i.e. the source code relevant to the host, while the second part contains the device-related code routines, i.e. the source code relevant to the external device.

Assuming that our machine is prepared, let's compile our first CUDA application. The function F (see Listing 1) is a function specific to the device only. By using the declaration __global__ for some function G, we announce that G is going to be executed on the device only. In CUDA, we have three different function declarations:

• __global__: functions that are called from the host and then executed on the device; they are called GPU kernels.
• __device__: functions that can be called only from the device and executed only on the device.
• __host__: functions that can be called only from the host and executed only on the host.

However, as we will see later throughout the tutorial, for example in subsection 4.3, in case we need a function to be shared by both the device and the host, we can use both the __host__ and __device__ declarations.

The actual initialization of the program starts from the main() function. Then, we call the GPU kernel function F by using the triple angle bracket syntax <<< x, y >>>, where x defines the number of blocks (to be discussed in Section 4), and y defines the number of threads (cores) per block. Since our hypothetical GPU example (see Figure 1) is based on exactly 81 cores, we launch the kernel by supplying the tuple <<< 1, 81 >>>, which means that we are going to launch the F function on all available 81 cores, without dividing them into blocks. Similarly to a C compilation procedure, the compilation is done by using nvcc -o {programName} {programName}.cu. NVCC is the CUDA compiler driver. Please note that, as stated in the CUDA documentation, all non-CUDA compilation steps are forwarded to a C++ host compiler that is supported by nvcc.

The cudaDeviceSynchronize() function is a CUDA function which handles the synchronization of all the threads we have started. Using synchronization is a good practice, as it helps to properly print out the messages defined in the F function. We want to emphasize that kernel calls are asynchronous, which means that control is returned to the host before the kernel completes. With this in mind, the application could terminate before the kernel has had the opportunity to print out the desired messages. Let's compile and run the example. We can see that the message Hello! is printed 81 times in the console. However, in the given situation, we are not able to link a specific row with the corresponding parent GPU core. CUDA provides us with some very important built-in primitives which can be utilized as thread identification numbers.
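Putting the pieces together, the first program (Listing 1) boils down to only a few lines; the following is a sketch of ours under the conventions just described, not a verbatim reproduction of the listing:

#include <stdio.h>

// GPU kernel: declared __global__, so it is called from the host
// and executed on the device
__global__ void F() {
    printf("Hello!\n");
}

int main() {
    // launch 81 instances of F: 1 block with 81 threads
    F<<<1, 81>>>();
    // block the host until all device threads have finished
    cudaDeviceSynchronize();
    return 0;
}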
Let's slightly modify the example to illustrate this (see Listing 2).

Listing 2: CUDA thread identification

#include <stdio.h>

__global__ void F() {
    printf("Hello from thread %d!\n", threadIdx.x);
}

int main() {
    F<<<1, 81>>>();
    cudaDeviceSynchronize();
    return 0;
}

The built-in variable threadIdx.x holds the unique ID of the thread. The IDs start from 0 and, in the current example, go up to 80 (inclusive). Now, in case we want to find the squares of all the integer numbers from 0 to 80, we can slightly modify the function F to solve this problem in parallel, benefiting from the GPU architecture (see Listing 3).

Parallel squaring - Part I

What if we want to square a set of numbers which should be read from the host memory? What is the mechanism of transferring data from the host memory to the device memory? The procedure is similar to memory allocation in a C language routine. We call the function cudaMalloc((void **)&P, SIZE), which is going to reserve a memory block of size SIZE on the device. The pointer to the reserved block will be saved to the pointer P. In case we don't need this memory allocation anymore, we should free it with cudaFree(P). To interchange memory blocks between the host and the device we use the built-in CUDA function cudaMemcpy(). Let's consider an example which illustrates the usage of these functions (see Listing 4).

Now let us inspect the code line by line. Throughout the process, we are going to depict the memory blocks of the host and the device. As shown in Figure 3, at the beginning both memory blocks, the host-related one (on the left) and the device-related one (on the right), are empty. At line 11 of Listing 4 we declare an array of integers called seeds. We will use it as a buffer located in the host memory. Then, at lines 13 and 14, we populate the cells in the host memory block with the integer numbers from 0 to 80 (the total number of our threads). At this moment, the memory snapshot of the host and the device is given in Figure 4. At line 17 of Listing 4 we declare an integer pointer called d_seeds, which is going to point to a memory block inside the device (not allocated yet). As shown in Figure 5, there is almost no change in the snapshot of the host and device memory blocks. To proceed, we need to transfer the host memory block to the device memory block. For this purpose, we are going to use the aforementioned cudaMemcpy function. Line 23 of Listing 4 copies the contents of the host memory block seeds to the device memory block d_seeds. The total size of the data to be transferred is defined via size*threads. The current snapshot of the host and device memory blocks is given in Figure 7. The gray dotted line highlights the direction of the transferred data. In this case, since we call the cudaMemcpy routine with the cudaMemcpyHostToDevice option, the dotted arrow is pointing to the device memory block. Finally, cudaFree() frees the allocated device memory block (line 27 of Listing 4). The final snapshot of the host and device is given in Figure 9. Now the variable d_seeds points to an unallocated memory block in the device, which is depicted by a dotted red line.

Figure 9: Finally, when all of the 81 instances of H are finished, the memory block allocated on the device is freed.

Launching the kernel H activates all the defined threads. However, we should be very careful when writing the kernel routines. For example, a fragment of a memory block to which two or more threads have unintentional write access is a source of undesirable behavior in our program.
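As a minimal illustration (a hypothetical kernel, not one of the tutorial's listings), consider all threads writing to the same cell:

__global__ void race(int *x) {
    // every thread stores its own ID into the same memory cell;
    // the value left in *x after the kernel finishes is unpredictable
    *x = threadIdx.x;
}

The value left behind depends on the scheduling of the threads, which is exactly the kind of interference we need to rule out.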
Let's take two different threads t_1 and t_2, and some bytes located in the device memory block, denoted as B_x, to which both threads t_1 and t_2 have write access. While the kernel routine of t_1 writes the value v_1 to B_x, t_2 writes the value v_2 to B_x. In this scenario, when the kernel routines are finalized, we can only speculate about the final value of B_x: v_1 or v_2. Having this in mind, we should always check for possible thread interference or deadlocks.

The threads on the device and the used GPU memory are visualized in Figure 10. Since each thread has a unique thread ID (via the call in line 25 of Listing 4), each thread, labeled with an ID number, is visualized as a distinct color. Above the labeled threads, an illustrative example of the GPU global memory is shown. Going from left to right, each color stripe corresponds to the thread ID sharing the same color. In this specific example, we have organized the corresponding color map from lower ID numbers to higher ones, i.e. thread ID 0 corresponds to the left-most stripe inside the global device memory, while thread ID 80 corresponds to the final, right-most stripe inside the global memory. Each stripe is strictly related to the thread block sharing the same color. In Listing 5 an example output on the host terminal is given. Some lines of the output are omitted. Since each thread is independent from the others and the order of thread completion is hard to predict, the results could be reported back in a scrambled way.

Figure 10: Visualization of the GPU global memory as a multi-color rectangle (to be found above the thread matrix). Each GPU thread and each stripe is depicted with a distinct color.

Parallel squaring - Part II

Now, let's try another strategy: instead of printing out the squared numbers from the kernel threads, we can copy the results calculated in the device memory block back to the given host memory block (so the host prints the results). In fact, calling the printing function inside the kernel is not a common routine and should be avoided. The modified program is shown in Listing 6. At line 5 in Listing 6, inside the newly defined kernel function K, we write the result of squaring the number back to the d_seeds array. The main function remains almost the same as in the previous example. The only difference is that this time (see line 28) we transfer the results back to the host memory, i.e. we copy the data pointed to by d_seeds to seeds. The fourth argument of the cudaMemcpy function is cudaMemcpyDeviceToHost. Finally, as shown at lines 31 and 32, the results are printed out from outside the device. As a summary, the snapshot of the host and the device during these operations (line 28 of Listing 6) is given in Figure 11. The gray dotted line highlights the direction of the transferred data. In this case, since we call the cudaMemcpy routine with the cudaMemcpyDeviceToHost option, the dotted arrow is pointing to the host memory block.

Figure 11: cudaMemcpy() transfers data back from the device memory array to the host seeds array.

We have developed a handy pen-and-paper method to proactively design and architect a GPU implementation. Now we have all the necessary tools to complete our first project: a full GPU-based automatic cryptanalysis tool for messages encrypted by a monoalphabetic substitution cipher.

GPU cryptanalysis

In this section, we are going to launch an automatic GPU-based cryptanalysis on English messages encrypted by a substitution cipher.
However, the method is universal for any language, as long as you possess the necessary statistics. For example, in this section we are going to exploit the partially predictable structure of the English language by using bigrams.

Frequency analysis

Let's assume that we have a single file describing the properties of the English language bigrams. Each row of this file contains a single bigram with its corresponding score value. Since we have a total of 26 letters in the English alphabet, we have a total of 26 * 26 = 676 distinct rows (bigrams from AA to ZZ). Each score for a given bigram reflects the likelihood of this specific bigram appearing in an authentic English text. In short, the higher the overall score, the closer the text is to grammatically correct English. So, the probability of some random bigram being ve or el is 1/59 or 2/59, respectively. However, this is far from adequate statistics: we have used a tiny, negligible fragment of the English corpus.

Let's continue with the construction of our main tool. Since working with strings is slow, we can create a map between bigrams and an integer array of length 676, which is going to hold all the bigram score values. Let's define the English alphabet as Ω = {A, B, C, ..., Z}. We map each consecutive alphabet letter from Ω to an integer value from 0 to 25, i.e. A ↦ 0, B ↦ 1, ..., Z ↦ 25. Let's store the score values of each of the bigrams in the array L of length |Ω|^2 = 26^2 = 676. For each bigram µν we compute its index in L as 26µ + ν. For example, the score of bigram AB is mapped to the index 26 * 0 + 1 = 1 of L, while the score of bigram CA is mapped to the index 26 * 2 + 0 = 52 of L. The score of the last bigram over Ω, i.e. ZZ, is mapped to the index 26 * 25 + 25 = 675 of L.

Now, our first task is to read and parse the file with all the English bigrams and their corresponding score values. Then, we need to organize and map the score values into an array following the aforementioned mapping. Finally, we are going to transfer the created mapped array to the device global memory, making the array accessible within each GPU thread. The code fragment is given in Listing 7.

Listing 7: Helper function to read and parse the bigram collection file

1 // a host function to extract and map
2 // all the bigrams from a given file
3 __host__ void extractBigrams(unsigned long long int *scores) {
4     FILE *bigramsFile = fopen("bigramParsed", "r");
5     while (1)

As shown in the source code fragment above, we named the bigram collection file bigramParsed (line 4). Then, by reading the file line by line, we repeatedly extract each bigram and then compute the corresponding score (see line 8). We recall that the fscanf function reads data from the current position of the specified stream into the locations that are given by the entries in the argument list. The actual mapping is done at line 10. To achieve that, we subtract the ASCII value of 'a' from each letter of the bigram, so:

• a is mapped to 0
• z is mapped to 25

Having this in mind, the helper function is suitable only for files containing lower-case bigrams. If you want to use another bigram format, a further modification of the function logic is needed. Finally, we save all the scores to an unsigned long long int array named scores. The main function of our GPU cryptanalysis tool is given in Listing 8. Take a brief look, but don't try to grasp all the details and "magic" constants yet.
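Before dissecting Listing 8, it may help to see the scoring step in isolation. The following host-side sketch (hypothetical, not part of the tutorial's listings) scores a lower-case text using the index mapping 26µ + ν described above:

// sum the bigram scores of a lower-case text, using the
// mapping 26*mu + nu into the score array L described above
unsigned long long int scoreText(const char *text, int len,
                                 const unsigned long long int *L) {
    unsigned long long int s = 0;
    for (int i = 0; i + 1 < len; i++)
        s += L[26 * (text[i] - 'a') + (text[i + 1] - 'a')];
    return s;
}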
A detailed explanation of Listing 8, accompanied by helpful comments, now follows. Line 3 initializes the seed of the PRNG, which is going to be repeatedly utilized throughout the core routine of the main function. There is a total of 26^2 = 676 bigrams over the English language (line 4). The encrypted message is defined as a string with the name encrypted (line 6). We further take its length (line 7) and declare another array of integers named encryptedMap, of the same length as the encrypted message itself (line 8).

On the multi-threaded hill climbing design

Now let's pause for a while. Since we want to utilize the multi-core capabilities of the GPU, we are going to perform a multi-threaded hill climbing method to automatically decrypt a given message encrypted by the MAS cipher. Hill climbing is a mathematical optimization technique and an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on, until no further improvements can be found. More details can be found in [Lasry, 2018].

Now, for simplicity, let's denote the encrypted message as E, which was constructed over some alphabet Ω. Let's denote the key of the MAS cipher as K. We can decompose K into 26 single substitutions, i.e.:

K = (ζ_1, ζ_2, ..., ζ_26), where ζ_i ∈ Ω and ∀i ≠ j: ζ_i ≠ ζ_j.

A normal hill climbing approach to recover an unknown key of such an encryption scheme can be summarized with the following steps:

1. Generate two random letters ψ_1 and ψ_2 over the alphabet Ω.
2. Given an encrypted message E, interchange all the letters ψ_1 with ψ_2. Let's denote the resulting message as E'.
3. By using bigrams (or any other statistics), compare the scores of E and E'. If E' is a better candidate than E, we overwrite E with E'.
4. Repeat until some threshold value is reached.

Since we want to benefit from the multi-core capabilities of the GPU, we can tweak and modify the normal hill climbing algorithm into the following multi-threaded variant:

1. Generate two random distinct letters ψ_l and ψ_r over the alphabet Ω. Given an encrypted message E, we make sure that both ψ_l, ψ_r ∈ E.
2. Launch the kernel function in such a way that every distinct thread corresponds to a unique unordered pair of distinct letters δ_l and δ_r over the alphabet Ω. Let us define such a mapping as Υ. Example: Let's say we have an encrypted message E over some reduced alphabet consisting of the letters a, b and c only. Since the reduced alphabet consists of 3 letters, the total count of unique pairs of distinct letters δ_l and δ_r over Ω is (3 * 2)/2 = 3. Indeed, we have the following possible set of ordered pairs δ_l and δ_r: {(a, b), (a, c), (b, a), (b, c), (c, a), (c, b)}, which is reduced to the following set of unordered pairs δ_l and δ_r: {(a, b), (a, c), (b, c)}. Having this in mind, we need to utilize a total number of 3 GPU threads. Let's denote them as t_1, t_2 and t_3. A possible mapping Υ_1 could be: t_1 ↦ (a, b), t_2 ↦ (a, c), t_3 ↦ (b, c). This is a valid mapping, since each thread is mapped to a distinct pair and there is a total number of 3 mappings.
3. Given an encrypted message E, each thread should interchange ψ_l with δ_l, as well as ψ_r with δ_r. However, to guarantee that the order of the interchanges is irrelevant, we make sure that they take place if and only if ψ_l ≠ δ_r and ψ_r ≠ δ_l. Let's denote the resulting message as E'.
4. By using bigrams, each distinct thread calculates the corresponding score of E', denoted by F(E').
Then, the score F(E') is saved to the corresponding thread memory cell. When all the threads are ready and synchronized, we transfer all the scores back to the host.

5. We analyze all the scores and pick the best yielded score. Let's denote the best score, found at index i, as F(E'_i). If E'_i possesses a better score than E, we replace E with E'_i. We should emphasize that currently we have F(E'_i), not E'_i itself. However, by taking the index of the best yielded score, i.e. i, we can reconstruct the actual value of E'_i. We can easily achieve that by applying the inverse mapping Υ^(-1) on thread number i. Example: We use the same scenario as in step 2. We have the following setup before launching the GPU threads on the encrypted message E: the letters ψ_l, ψ_r ∈ E are chosen pseudo-randomly. By applying the interchanges described above, a given text E'_i is computed by the corresponding thread t_i. Then, the concatenated values of the scores of E'_i are transferred back to the host. Let's store these values in an array and assume that F(E'_2) is the best among all the scores. We further recall that the host doesn't have the E'_2 text itself at the moment. However, we know the position of F(E'_2), i.e. 2, which corresponds to the index of the thread that yielded this score, i.e. t_2. By using the inverse of the mapping Υ_1, i.e. Υ_1^(-1): t_2 ↦ (a, c), we can trace back the operation on E which yielded E'_2 and, therefore, fully recover it.
6. We repeat until some threshold value is reached. This value usually depends on the size of the set of possible initial choices (in our implementation we have two choices: ψ_l and ψ_r).

One could ask: "What is the reason to introduce step 1? Why don't we just try all the possible unordered pairs (δ_i, δ_j) inside the GPU?" Well, such a strategy would benefit from the multi-core architecture of the GPU as well. However, there is one major drawback: it is deterministic. To illustrate that, let's start from some encrypted message E over some alphabet Ω, s.t. |Ω| = n. We transfer E to the GPU, together with all the necessary data to evaluate the generated candidates. We skip step 1 and we do not generate two letters ψ_i and ψ_j, s.t. ψ_i, ψ_j ∈ Ω. Each GPU thread corresponds to a given transposition of letters in E. Without loss of generality, we can represent the thread mapping in lexicographical order, i.e.:

Υ: t_1 ↦ (δ_1, δ_2), t_2 ↦ (δ_1, δ_3), ..., t_T ↦ (δ_(n-1), δ_n),

with T = n(n-1)/2. Now, let's denote the candidate generated by a given thread t_i as E_i. Furthermore, let's denote the score of E_i under some evaluating function F as F(E_i). For simplicity, we choose F in such a way that the best candidate is unique. Then, we have the following composition:

E ↦ (E_1, E_2, ..., E_T) ↦ (F(E_1), F(E_2), ..., F(E_T)).

Then, we collect all the resulting scores and choose the best candidate E_m, s.t. F(E_m) ≥ F(E_i) for all i. Let's denote as M_1 the value of the maximum achieved score, i.e. M_1 = F(E_m). Now, if we restart the process from the beginning, the best yielded score will be exactly the same value, i.e. M_1. However, if we do not restart the process, the optimization proceeds as usual: since F(E_m) can be found at position m, the thread m yielded this score, i.e. t_m. Hence, we interchange the current message E with the reconstructed better candidate E_m. Then, we repeat the procedure until we have reached a local maximum candidate E_max. Let's denote the best score yielded at iteration i as M_i. The history of our best yielded scores during the optimization process can be summarized as follows:

M_1 → M_2 → ... → M_k → NA,

where NA is the abbreviation of "not available", denoting the absence of a better score after the k-th iteration step.
We can generalize our previous observation by stating that restarting the optimization process from a given encrypted message E_x, which corresponds to, for example, a score M_x, and applying only one iteration, will lead to the best yielded score M_(x+1). Therefore, if we restart the process from the beginning, we will reach M_1, then M_2, ..., then M_k, traveling exactly the same optimization path and reaching exactly the same final local optimum. In this scenario, having a completely deterministic metaheuristic, we define M_k as the attractor of M_1.

Implementation

Now, in the context of our current problem, we can define the bijective mapping Υ as a lexicographical order mapping. Since we have a total of 26 letters, and without considering the interchanging restriction introduced in step 3, we need a total of (26 * 25)/2 = 325 threads (see line 5 of the main function source code). We save all the scores reported back from the device threads into the results array (defined at line 10). A regular mapping from ASCII to integer indexes is performed on the encrypted text as well (lines 12-14). The current snapshot of the memory blocks of the host and the device is given in Figure 12. At the beginning, we read the encrypted message from a given file and save it in the character array encrypted. Then, we translate the character message to an integer one by using the integer array encryptedMap. We continue by preparing and allocating all the necessary memory blocks inside the device global memory (lines 19-30). The memory block snapshot is depicted in Figure 13. Then, the bigram scores are transferred from the host to the device. After that, we enter the for block. The snapshot of the memory blocks of the host and the device, right before entering the for cycle, is given in Figure 14. We transfer the bigram scores to the GPU (line 30 in Listing 8). This is done just once. Next, we enter a for cycle, which is iterated 500 times. From line 39 to line 69, we initiate the actual multi-threaded hill climbing instance. We define a reasonable number of iterations, 500, at line 39. By reasonable, we mean that the probability of missing a better candidate, if such a candidate exists, is negligible. First, we generate two random distinct letters ψ_l and ψ_r over the English alphabet, which are to be seen at least once in the encrypted text (see lines 41-45). We further transfer the encrypted message to the device; recall that the message is not static and is replaced whenever better candidate messages are found. Having completed the aforementioned routines, we are ready to launch the kernel K. As soon as the kernel is activated, each thread applies the corresponding (unique) interchange (ψ_l → δ_l, ψ_r → δ_r) and further calculates the score of the modified text. Then, the score is saved to the dedicated cell inside the threads array. Completing all the steps described in this paragraph yields the memory block snapshot of the host and the device shown in Figure 15. By using the host (CPU) PRNG, we generate two distinct letters leftLetter and rightLetter and transfer the encrypted text to the device (line 47 in Listing 8). We launch the kernel K. Once the routine is finished, the results are populated inside the d_results array on the GPU. Then, we transfer the results back (lines 48-54), to further analyze them and extract the best score with a small helper function getMaxElement (Listing 9; a sketch is given below).
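A minimal sketch of such a helper (our assumption: a plain linear scan over the results array, since the body of Listing 9 is not reproduced here):

__host__ int getMaxElement(unsigned long long int *results, int count) {
    // linear scan: return the index of the best score
    // reported back by the threads
    int best = 0;
    for (int i = 1; i < count; i++)
        if (results[i] > results[best])
            best = i;
    return best;
}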
The winning index is then translated back to its pair of distinct letters by the helper function translateIndexToLetters (see Listing 10):

Listing 10: The translateIndexToLetters helper function

// translate a given index to a pair of distinct letters
__host__ void translateIndexToLetters

Next, we continue with the main function routines. We accept the better candidate by calling the helper function climb (see Listing 11):

Listing 11: The climb helper function

// interchange left with maxLeft and right with maxRight
__host__ void climb(int *encrMap, int length, int left, int right,
                    int maxLeft, int maxRight)

In case a better candidate is found, we print out some details about the candidate (lines 64 and 65), as well as the current state of the message to be decrypted (line 66). This is done by the helper function demap (see Listing 12), which should not be mistaken for the bijective mapping Υ:

Listing 12: The demap helper function

// demap and print a given array of integer letters
__host__ void demap(int *message, int length) {
    for (int i = 0; i < length; i++)
        printf("%c", message[i] + 'a');
    printf("\n");
}

Figure 16 summarizes the current snapshot of the memory blocks of the host and the device. We transfer the results from the device (the GPU) to the host (the CPU) (line 54 in Listing 8), to further analyze them and pick the best candidate. We further update the current text by using the helper function climb. Once this is done, we step into the next iteration of the for cycle. The final construction of the kernel and the complete version of the program are given in Listing 13. For a better overview, some sections are omitted. The full code, including a real example to decrypt, is given in Listing 14.

Listing 13: The complete GPU monoalphabetic-substitution solver

Lines 13-24 (to be found inside the kernel definition) map each thread number to a specific unique pair of letters by using the bijective mapping Υ. Then, each thread applies the corresponding interchange to calculate the score of the newly created candidate (lines 26-51). Furthermore, we make sure the restriction we introduced in step 3 of the multi-threaded hill climbing variant holds (lines 52 and 53). If we happen to be in a thread where this restriction is violated, we give the candidate the lowest possible score of 0.

It should be emphasized that running a single instance of the final program doesn't guarantee a successful decryption of the encrypted text. In fact, it's highly unlikely that the plaintext is found immediately after the first try. Due to the high number of possible keys in the key search space, i.e. 26!, the multi-threaded hill climbing algorithm can easily get stuck in some local maximum. A helpful illustration is given in Figure 17. The cold areas (shades of blue) correspond to candidates with low score values, while the hot areas (shades of red) correspond to candidates with higher score values. The algorithm usually starts from some random blue position and repeatedly makes its way to the top. However, the finally reached peak only guarantees that there is no neighbor with a better score value than the peak itself; it doesn't guarantee that it is the highest peak possible. As you can see in Figure 17, we have a rich set of peaks, but in most real-world applications there is only one peak we are interested in. In the literature, each peak is defined as a local maximum, while the highest peak is declared the global maximum. If we are stuck in a local maximum, we should either try to escape from this peak (by actions which worsen the objective function value), or reinitialize the optimization problem from the beginning.
Having this in mind, we should reinitialize the GPU program several times and collect all the reached peaks. Hopefully, one of them will be the global maximum. As an exercise, instead of manually reinitializing the program until the desired peak is reached, implement another loop wrapping the main loop, to automatically reinitialize the multi-threaded hill climbing algorithm.

A few words should be said regarding some difficulties you could experience during the compilation of a given CUDA source. More precisely, in those cases when you are using Windows OS as a host, the compilers and linkers provided by Microsoft do not support the C programming language standard C99 (ISO/IEC 9899:1999). There are some noticeable differences between C99 and newer standards. More detailed information can be found in [OpenSTD, 2021]. One common issue which arises during the process of migrating a C99-compliant program to a newer standard is the restriction on declaring static arrays with non-constant lengths. Having this in mind, all the source codes to be found in this tutorial are attuned to be C99, C11 and C17 compatible.

One last thing before delving into our first real problem task. As we have already mentioned, all non-CUDA compilation steps are forwarded to a C++ host compiler. If you want to send some additional options to the compiler, like optimization flags, the standard that should be taken into consideration, or anything else, the -Xcompiler switch can be used. More detailed information can be found in the CUDA Toolkit Documentation [NVIDIA, 2021c]. In Listing 14 the complete source code of Listing 13 is given. This is our first practical usage of the tools we have created so far. Lines 8-11 introduce the flexibility of the provided example, so that it can be easily compiled under the C99, C11 or C17 programming language standards. The source code is to be compiled by nvcc. Can you recover the encrypted message and find the original book from which the plaintext was extracted? Please note: it is highly unlikely that you will be able to successfully decrypt the encrypted message on the first try. A few more tries are required to reach the real solution. Most of the time you will be stuck in some local maximum, left with a gibberish-looking text. Just keep trying until some reasonable text uncovers itself.

CUDA blocks and pseudo-randomness

So far we have utilized only the thread parameter inside a given kernel launch. However, as we will see later, this is an unwise approach due to inefficient usage of resources. Furthermore, the CUDA platform has a limitation on the maximum number of threads we can initiate inside a CUDA block: 512, 1024 or 2048. These limits depend on the specific model of the device.

CUDA blocks

A CUDA block, in short, is a collection of threads. For example, we can have x blocks with y threads each, or a total of xy threads. A visual example of a CUDA kernel organized by blocks is given in Figure 18. Each column is depicted with a different color, since it corresponds to a distinct kernel block. However, each thread number is unique only inside a given block. Having this in mind, we should use some other CUDA feature to distinguish, for example, the following two threads: the first one with number i coming from the block b_1, and the second one with the same number i initiated from another block b_2. For this, we are going to use the built-in variables blockDim (returns the size of a block) and blockIdx (returns the unique identification number of the current block).
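Combined, these primitives give every thread a globally unique number; a minimal sketch (a hypothetical kernel, not one of the tutorial's listings):

#include <stdio.h>

__global__ void whoAmI() {
    // a globally unique thread number across all blocks
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    printf("block %d, thread %d, global id %d\n",
           blockIdx.x, threadIdx.x, tid);
}

int main() {
    whoAmI<<<4, 8>>>();   // 4 blocks with 8 threads each
    cudaDeviceSynchronize();
    return 0;
}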
Let's say we want to launch a kernel having 16 blocks with 8 threads each. The task that should be performed involves a simple power calculation. Assume that we would like to raise the numbers from 0 to 7 (the thread IDs) to the powers 0 to 15 (the block IDs). To achieve this, we can use the following code (see Listing 15). Lines 3-8 define a helper function powr(x, n), which raises x to the power of n, where x and n are integers. The kernel function is specified at lines 10-13. The actual call of the kernel is made at line 16. As you can see, we launch the kernel with the parameters 16,8, i.e. 16 blocks with 8 threads each.

Now we can divide, or parallelize, a given problem into blocks and threads. However, if you recall the MAS problem (Section 3), we used the PRNG of the host only. This led to a noticeable overhead and an over-complication of the problem: we first generated two pseudo-random letters ψ_i and ψ_j, then passed them to the device as kernel parameters, to further interchange them with all the possible unordered pairs of letters in the corresponding alphabet.

CUDA as a pseudo-random number generator

What if we want to use a PRNG inside the device? Furthermore, how can we be sure that the pseudo-randomly generated numbers inside the threads are not equal? For this, a fast and reliable PRNG is needed. Using the host PRNG routines is not a wise approach: we would have to repeatedly transfer the generated values back and forth between the host and the device. With the help of the CUDA code example given in Listing 16, we are going to learn some new techniques:

• How to define and launch multiple kernels inside a single CUDA program.
• How to use the external libraries provided by CUDA.
• How to generate random float numbers by using the device itself.

First (lines 2-3), we include the CUDA library, as well as the CUDA random kernel headers. For more flexibility, we define the block and thread numbers (lines 5-6) as constants B and T. The first kernel, which can be found at lines 8-11, initializes each thread (from the pool of threads) with a unique initial seed. To achieve this, we use the built-in CUDA function curand_init. We save each such seed to the corresponding cell inside the array state. The second kernel, which is defined at lines 13-19, uses the seed values initialized by the first kernel and the CUDA built-in function curand_uniform to generate a random float number in the interval (0.0, 1.0]. We save this value in the corresponding cell inside an array named devRandomValues. Furthermore, we update the seed corresponding to this thread, so each subsequent PRNG call yields a different result. The main function follows the logic introduced in the examples of the previous sections. At lines 27-28 we allocate the device memory blocks:

• a pointer to a device memory block devStates, which is going to hold our current thread seeds;
• a device memory block devRandomValues, which is going to hold the generated pseudo-random values.

Then, we launch the first kernel (line 31) to initialize the seeds, which is followed by the launch of the second kernel (line 35) to populate the pseudo-randomly generated numbers inside the memory block devRandomValues. The remaining lines were already discussed in the previous examples. Now, we need to address the following questions:

• How to generate pseudo-random integer numbers in a predefined interval? (See the sketch below.)
• How to utilize the parallel device PRNGs for automatic cryptanalysis?
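The first question has a short answer; the following device-side sketch (our assumption, reusing the curandState array idea from Listing 16) converts the float delivered by curand_uniform into an integer in [0, 26):

#include <curand_kernel.h>

// map a float from curand_uniform, which lies in (0.0, 1.0],
// to an integer in [0, 26)
__device__ int randomLetter(curandState *state, int tid) {
    float f = curand_uniform(&state[tid]);
    int r = (int)(f * 26.0f);
    return (r == 26) ? 25 : r;   // guard the edge case f == 1.0
}

The second question is the subject of the next section.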
A stand-alone CUDA application to automatically attack MAS ciphertexts

We have all the necessary instruments to construct a complete stand-alone CUDA MAS cipher cryptanalysis tool. We will proceed as follows:

1. Set up the CUDA device and initialize a seed for each thread of the GPU.
2. Make a local copy of the encrypted text inside each thread.
3. Initiate a simple hill climbing routine inside each thread by using bigrams. The PRNGs are launched inside the threads. All PRNGs are launched independently to guarantee a feed of unique values to the corresponding thread.
4. After passing some predefined threshold value to the hill climbing algorithm, the thread reports back the reached peak value. Then, the results are collected and reported back to the host.
5. The host compares the final scores of all candidates and announces the best one.

Listing 17 contains the complete CUDA source code of the program.

Listing 17: Parallel CUDA monoalphabetic solver with a pseudo-random float number generator nested inside the device

18     FILE *bigramsFile = fopen("bigramParsed", "r");
19     while (1)

Now, let's trace the program behavior by employing the useful memory snapshot approach. At lines 7-14 we define several constants we are going to use throughout the program logic. A summary of all the constants is given in Table 1. The current snapshot of the host and device memory blocks, up to line 123, is given in Figure 19. As usual, we set up the necessary pointers. For simplicity, we denote the variable type unsigned long long int by ulli. Furthermore, we denote the variable encryptedLen by LL. After that, as defined at lines 124-131, we first allocate and then transfer the data that needs to be accessible through the GPU kernel routines (see Figure 20). We first allocate an ulli array, which is going to keep the scores inside the device memory as well. Then, we proceed with allocating two additional int arrays: the first will keep the encrypted text (the second row-block in the device memory block), while the second will keep all the decrypted candidates yielded by all the B*T threads (the third row-block in the device memory). Then, depicted by gray dotted arrows, we transfer the bigram scores from the host to the device, as well as the encrypted message itself. We are ready to set up the device built-in PRNG (lines 133-138). The current memory block snapshot of the host and the device is given in Figure 21. We allocate a curandState array of size B*T, equal to the number of device threads, which is going to hold the initial seeds of the device built-in PRNG. Then, we launch the kernel setupKernel, which populates the curandState device array with the seeds provided by the kernel. When a given thread with global index I needs a pseudo-randomly generated number, it directly communicates with the cell indexed I inside the curandState device array. Moreover, after each call, the cell with index I is updated to guarantee an infinite stream of pseudo-randomly generated numbers. We are ready to launch the hill climbing routine device kernel K (line 142). Next, we transfer back (from the device to the host memory) the local optimums yielded by each device thread. The snapshot is given in Figure 22. Once the kernel setupKernel has finished, we launch the kernel K. Each device thread works with its own copy of the encrypted text, and therefore each thread yields its own local optimum of the decrypted candidate.
During the hill climbing routine, each thread independently generates and utilizes pseudo-random numbers by querying its corresponding curandState cell. Once a thread has finished, the respective local optimum is written down in the device memory block (the third row). Finally, all the local optimums are transferred back to the host, by using the dedicated integer array decrypted. Once the host integer array decrypted has been populated with the local optimums yielded by the device threads, we pick the best one and announce it as the solution. However, before doing that, we should make sure that all the threads have finished their hill climbing routines by synchronizing the threads (see line 143 of Listing 17). We are ready to free the used memory blocks (see Figure 23). We consecutively free all the device memory blocks we have used throughout the automatic cryptanalysis routines (see lines 165-168 of Listing 17). However, as shown in line 169, we further need to free the dynamically allocated memory blocks on the host as well. Since the array decrypted was created dynamically (line 121), we further clean it up by calling the host delete operator.

Now, let's go back and inspect the new version of the kernel K. We start by copying the original encrypted message to the GPU thread (lines 52-54 of Listing 17). Then, we extract the unique thread identification number (line 55), initialize and load the seed from the device global memory (line 56), initialize some helper variables (lines 58-62) and initiate the hill climbing routines (lines 64-100). Each climbing try starts with resetting the delta variable to 0. Then, we generate a random float number by using the thread seed (line 66) and further convert it to an integer number in the interval [0, 26) (line 67). We repeat the process to get another, distinct integer number in the same interval (lines 68-72). Those numbers are saved in two variables named leftLetter and rightLetter. At lines 74-88 we interchange all occurrences of the letter leftLetter inside the encrypted text with the letter rightLetter and vice versa. The variable delta traces the score change when the old bigram (line 78) is interchanged with the new one (line 89). Indeed, if the overall sum of deltas is greater than zero, then the newly created candidate is better than the old one and we update it accordingly (that's why we created a local copy of the encrypted text in the first place). This is visible at lines 92-100. We further save the best thread candidate to the device global memory, so we can later extract it from the host. Most of the routines inside the main function were already discussed in the previous sections. At the end of the function, at lines 148-158, we further extract the best candidate among all threads' final candidates and print it to the terminal (line 162).

The parallel hill climbing approach can be illustrated by the visualization shown in Figure 24. Each circle represents a local maximum, i.e. a peak, while the color of the circle corresponds to the peak altitude: lower values are colored in cold colors (shades of blue), while higher values are represented by hot colors (shades of red).

Figure 24: Example of CUDA parallel hill-climbing discretization

Each thread of the CUDA device is attracted by one of those peaks. Then, the peaks are collected to measure their exact altitude, in order to announce the best candidate. Now, let's compile and run the source code given in Listing 17.
Can you decrypt the provided encrypted text? Who is the original author of the plaintext? Finally, you could try to decrypt the message given in Listing 14 as well. What was the success ratio, i.e. the number of times you executed the program versus the number of times the global solution was found?

CUDA memory model and CUDA error handling

This section briefly discusses a simple way to catch CUDA related errors after a given kernel launch. Besides that, we will take a look into the hierarchy of the CUDA memory model to emphasize its importance. There are several types of memory blocks inside a given CUDA device. Table 2 gives an overview of these types. A more detailed introduction to the CUDA memory hierarchy can be found in [NVIDIA, 2021a]. The variables defined inside the kernel are normally saved inside the local memory (composed of registers). Let's assume that we have a total of T threads to be launched by some CUDA kernel. Furthermore, each thread requires a total of X bytes to be allocated. So, we have a total of T * X bytes that need to be situated inside the CUDA core registers. If we have a total of C CUDA cores, each having a fixed available register size of Y bytes, the total available CUDA core register size is C * Y. What happens if T * X > C * Y? This scenario is known as register spilling, and the extra required memory is allocated in the global memory. However, this greatly reduces the software performance. During our construction of the CUDA cryptanalysis tools we ignored the memory model; we left that optimization to the CUDA compiler.

Now, let's update the CUDA kernel of our previous example to benefit from the shared memory (see Listing 18). As shown at line 3, we declare an array named shared_scores, which is going to be visible to all the threads inside the given block. At lines 10-13 we partially redistribute the transfer of the bigram scores from the global memory (the d_scores array) to the shared memory. Having this in mind, we need to synchronize all the threads before stepping into the next source code fragments (line 15). If we fail to do that, there is a great chance for some subset of threads to start reading parts of the shared memory block with undefined values. Then, we proceed with the bigram scoring routines as usual, but this time extracting the scores from the shared memory instead of the global one (lines 39 and 51). This simple utilization of the device shared memory resulted in considerable speed improvements. For example, during our experiments, an encrypted text with a length of 471 symbols was decrypted on average in 1.1 seconds using a mid-range GPU device and global memory only. However, by exploiting the shared memory we did that in less than 0.5 seconds. In order to squeeze out the best of a given CUDA capable device, we need to tune the thread and block numbers. Now, having introduced the shared memory concept, it makes sense to organize the threads into blocks. However, we should pay attention to the technical parameters of the CUDA device; otherwise, we can end up with unpredictable semantic errors.
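As a minimal illustration of this copy-and-synchronize pattern (a sketch of ours, not the actual Listing 18):

#define totalBigrams 676

__global__ void K(const long long int *d_scores) {
    // block-local copy of the bigram scores: visible to all threads
    // of the block and much faster to read than global memory
    __shared__ long long int shared_scores[totalBigrams];

    // the threads of the block cooperatively copy the scores
    for (int i = threadIdx.x; i < totalBigrams; i += blockDim.x)
        shared_scores[i] = d_scores[i];

    // wait until the copy is complete before any thread reads it
    __syncthreads();

    // ... the scoring routines now read shared_scores instead of d_scores
}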
Some of those errors can nevertheless be caught by using the following error handling fragment after each CUDA kernel launch (see Listing 19):

Listing 19: Example of error handling in CUDA

cudaError_t error = cudaGetLastError();
if (error != cudaSuccess) {
    printf("CUDA error: %s\n", cudaGetErrorString(error));
    exit(-1);
}

Dynamic vs. static GPU memory allocation

The automatic cryptanalysis tool presented in Listing 17 was constructed in a flexible way which, for example, allows us to re-configure the source code for another encrypted text with a different length with minimal effort. However, to achieve this we used a technique which is more common in CPU programming than in GPU programming: dynamic memory allocation. We can significantly improve the performance of the GPU device if dynamic memory allocation routines are avoided. Listing 20 contains the complete CUDA source code of the static implementation of the solver. This time, the need for dynamic memory allocation is avoided by exploiting the C/CUDA preprocessor directives (see lines 7-15). Compile and run the source code in Listing 20. Can you recover the plaintext of the encrypted message?

Listing 20: Final CUDA implementation for automatic cryptanalysis of text encrypted by MAS, optimized by shared memory utilization

7  #define B ((int) 16)
8  #define T ((int) 26)
9  #define THREADS ((int) B * T)
10 #define CLIMBINGS ((int) 5000)
11 #define ALPHABET ((int) 26)
12 #define totalBigrams ((int) ALPHABET * ALPHABET)
13
14 #define encrypted "djrggrygdudrdunkluejqxgahbvhxbixnadjngqniqwdxqaqguaeouludbdndjngqsjngqlnaeoulhdunktqiuqghkbnxtukhxbaqdjntnihkhobgughktsjngqgnordunkxqmruxqgdjqenggqggunknionkzaqgghzqghktarljduaqhktgdrtbinxdrkhdqobdjqanxqtuiiulrodgrygdudrdunkluejqxghxqxhxqobrgqtinxauoudhxberxengqgnkhllnrkdnidjqduaqhktlhxqxqmruxqtinxqkluejqxukzhkttqluejqxukz"
15 #define ENCRYPTEDLEN ((int) sizeof(encrypted) - 1)
16
17 __host__ void extractBigrams(long long int *scores) {
18     FILE *bigramsFile = fopen("bigramParsed", "r");
19     while (1)

Used principles and comparison to other implementations

This section summarizes the improvements made throughout the tutorial. Then, by following the design principles outlined during the previous sections, a stand-alone CUDA application for automatic cryptanalysis of ciphertexts encrypted by single-columnar transposition is presented. At the end, the provided GPU routine is compared with CT2 [Esslinger, 2009], a state-of-the-art cryptanalysis tool.

Generalization of principles

Throughout this tutorial, we addressed common issues which arose from translating a given problem from the domain of classical cryptanalysis to the domain of general-purpose computations on graphics processing units (GPGPU). Step by step, by introducing technical optimizations and mathematical insights, we built an efficient stand-alone framework.
Let's summarize the most critical improvements we made:

• Reducing the overhead caused by the host-device communication (Section 3.1): The first optimization issue was caused by the bandwidth overhead generated by the communication between the host and the device. This, for example, is illustrated in Listing 7. We pick two different random letters and pass them to the GPU kernel, which logically divides the work among the 325 threads. Then, the best score is fetched, and we repeat this process 500 times. However, this creates additional bandwidth overhead, as we repeatedly send packets back and forth between the device and the host. To get rid of this undesired behavior, we made the following changes:
  - The PRNG was relocated from the CPU to the GPU itself (Section 4.2).
  - The CUDA application was switched to a heuristic version and the host-device bandwidth was optimized. Due to this modification, the hill climbing routine is launched only once (Section 4.3).
• Utilizing a metaheuristic approach which is both highly effective in solving the problem and efficiently implemented to split the overall work between different GPU cores: Various nuances of metaheuristics were exploited throughout the tutorial. We first started with the best-neighbor hill climbing approach (Section 3.3). Then, we migrated to the better-neighbor metaheuristic (Section 4.3). This migration was made possible by encapsulating the PRNG in the GPU device.
• Synchronizing the threads (Section 4.3): Synchronization was achieved by using the following techniques:
  - Each thread starts from a pseudo-random state. Hence, the key space is randomly crawled.
  - Each thread is stand-alone, i.e. it is capable of performing a full instance of the algorithm entirely by itself.
• Using faster memory when possible (Section 5): The most frequently used memory read operations were migrated to the device shared memory space.

A stand-alone CUDA application for automatic ciphertext-only attacks against single-columnar transposition

Following these design principles, the last example (see Listing 20) was slightly modified to be launched on a more complex problem: the cryptanalysis of the single-columnar transposition (SCT) cipher. An introduction, as well as an overview of the state-of-the-art attacks on the SCT cipher, can be found in the PhD thesis of G. Lasry [Lasry, 2018]. The source code in Listing 21 is a CUDA implementation for automatic cryptanalysis of ciphertexts encrypted by SCT.

Listing 21: Automatic cryptanalysis of ciphertext encrypted by SCT

The structure of the GPU implementation follows the same observations made at the beginning of this section (see subsection 7.2). However, since we are now dealing with a different encryption method, some modifications are required. The decrypt function of the SCT cipher is given in lines 58-82. It is visible to both the host and the device. Then, in lines 84-96, we introduce two helper functions, swapElements() and swapBlock(). The swapElements() function provides an in-memory flip of two elements, while the swapBlock() function provides an in-memory flip of two continuous blocks, with the restriction that they must not overlap; a sketch of the two helpers is given below. The wrapper of the major CUDA kernel is identical, in terms of logic and structure, to the wrapper of the MAS analyzer kernel. However, in order to improve the scoring function, some minor changes are introduced, which significantly improve the success rate of plaintext recovery.
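The bodies of these helpers reduce to a few lines; the following sketch is ours (assuming integer key arrays), not a verbatim excerpt of Listing 21:

// flip two entries of the key in place
__host__ __device__ void swapElements(int *key, int i, int j) {
    int tmp = key[i];
    key[i] = key[j];
    key[j] = tmp;
}

// flip two equally long, non-overlapping blocks starting at i and j
__host__ __device__ void swapBlock(int *key, int i, int j, int len) {
    for (int k = 0; k < len; k++)
        swapElements(key, i + k, j + k);
}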
As discussed in [Lasry, 2018] and [Antal et al., 2019], the metaheuristic plays an important role in the SCT cryptanalysis. In fact, for larger keys, specifically when combined with a short length of the encrypted message, it is difficult, if not impossible, to predict which metaheuristic strategy is going to be the most effective. However, as discussed in [Lasry, 2018], there are several search operators that usually appear to be highly efficient:

• An inversion of two elements in the key. Labeled as operation I.
• An inversion of two continuous and non-overlapping blocks in the key. Labeled as operation II.
• A shift of a continuous block inside the key. Labeled as operation III.

From a metaheuristic point of view, populating the algorithm routine with different search operators, each having a non-deterministic path, raises many questions. For example, which variables dictate the behavior of a given search operator? How to orchestrate the search operators, i.e. how and when to switch from one search operator to another? In most cases, the answers to all those questions are correlated with the problem we are trying to solve. Hence, there is no single right answer. Listing 21 implements a metaheuristic apparatus similar to the one in CrypTool 2 (CT2) (version 2.1, stable build 8853.1, 2020), an e-learning software including several applied cryptanalysis components ([Kopal et al., 2014, Esslinger, 2009]):

• We have a total of 3 search operators, orchestrated by a variable named branch (line 139). In lines 21-22, two variables (both with the prefix HEUR_THRESHOLD_OP) can be tweaked by the user. Let's denote them as p_1 and p_2, where p_i corresponds to HEUR_THRESHOLD_OPi. The algorithm chooses the first operator with probability p_1/100, and the second operator with probability (p_2 - p_1)/100. Thus, the probability of choosing the third operator is 1 - (p_2 - p_1)/100 - p_1/100 = 1 - p_2/100.
• Operator I: This search operator (see lines 141-149) modifies the key by interchanging at least two elements inside it. Exactly which elements, and how many interchanges should occur, is dictated both by the PRNG and by the OP1_HOP variable restriction: the total count of interchanges should not exceed this value (see line 4).
• Operator II: This operator (see lines 151-159) corresponds to operation (2) defined in [Lasry, 2018]: we just interchange at least two continuous and non-overlapping blocks sharing the same length. However, the total count of interchanges should not exceed the value of OP2_HOP. The user can tweak this value (line 25).
• Operator III: This corresponds to operation (3) defined in the aforementioned work: we just slide a continuous and non-self-overlapping block to the left or to the right (lines 161-180).
• N-gram log2: The migration to N-gram log2 scoring is introduced. More details can be found in [Náther, 2005]. In short, if we have a sentence "ABC", it is scored as P("AB") · P("BC"), i.e. the overall probability of occurrence of this specific composition of letters, given a pre-calculated word corpus. However, multiplication is a tedious operation. Hence, we can use a logarithm, with some arbitrary base, to utilize an additive operation instead. Indeed, let's denote "AB" as x and "BC" as y. Thus, for some N, the following equations hold true:

P(x)P(y) = N
log_2(P(x)P(y)) = log_2 N
log_2 P(x) + log_2 P(y) = log_2 N

This allows us to interchange the multiplication operator with the accumulation operator whenever a probability corpus is used.
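In code, this typically means precomputing the logarithms once per bigram, so that the scoring loop is pure accumulation; a small illustrative sketch (hypothetical names, not from Listing 21):

#include <math.h>

// precompute log2 probabilities once per bigram ...
void precomputeLogScores(const double *P, double *logP) {
    for (int i = 0; i < 676; i++)
        logP[i] = log2(P[i]);
}

// ... so that scoring a text is a pure accumulation
double logScore(const char *text, int len, const double *logP) {
    double s = 0.0;
    for (int i = 0; i + 1 < len; i++)
        s += logP[26 * (text[i] - 'a') + (text[i + 1] - 'a')];
    return s;
}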
During our experiments, we used the Google N-gram corpus [Google, 2015]. We should note that the bigger the key is, the larger the values of threads and climbings need to be. Thus, the time required to recover the message increases.

Comparison with state-of-the-art SCT cryptanalysis

During our experiments, and by using the above GPU implementation, we were able to successfully recover the plaintexts corresponding to ciphertexts encrypted by the columnar transposition cipher with unknown keys (of length no greater than 40) in less than 20 seconds. For example, given 1120 CUDA threads (utilizing 1152 CUDA cores), a climbing constant of 15,000 and a ciphertext of 596 symbols encrypted by a key of length 25, the unknown plaintext was successfully recovered in approximately 5.9 seconds. The search space of this problem, given a key of size 25, is 25!, which is approximately equal to 2^83 (a quick sanity check of this figure is sketched at the end of this subsection). Nevertheless, a general-purpose computer equipped with a mid-range CUDA-capable video card could recover, due to the heuristic nature of the algorithm, the plaintext in less than a minute.

Table 3 compares the CUDA decryption routine with the hill climbing routine in the 2020 version of CT2. We used general-purpose hardware: an NVIDIA 1060 video card with 3 GB of memory and an Intel Celeron G1820 CPU @ 2.7 GHz with 2 cores. The times shown in Table 3 correspond to the time needed for the given job to be completed (the time needed for all the threads to complete all the iterations). The numbers do not correspond to the first valid decryption of the encrypted text.

The success ratio of the key recovery routine is tied to the choice of metaheuristic, search operators and other parameters, which steer the trajectory through the search space. It would not be fair to compare the times needed for a given job to be completed without taking the success ratio into account. With this in mind, by examining the source code of the CrypTool hill climbing routine (CHC), we further tuned the CUDA routine to exactly match the search operators and the magic constants found throughout CHC. This perfectly synchronized the success ratios of the CUDA routine and CHC. However, it did not affect the time needed for completion of the CUDA algorithm. Overall, the CUDA routine is approximately 250 times faster than CHC. During the comparison stage, we used the following options: Action: Decrypt, Read in: by row, Permutation: by column, Read out: by column, Function type: N-grams log 2, N-gram size: 2, Language: English.

For more information regarding cryptanalysis of columnar transposition ciphers, we recommend section 5 of Lasry's thesis ([Lasry, 2018]), the paper devoted to breaking historical ciphers from the Biafran war ([Bean et al., 2020]), as well as the research on cryptanalysis of columnar transposition ciphers with long keys provided in [Lasry et al., 2016]. After completing this tutorial, you might want to look at the attacks presented in [Combes, 2021a, Combes, 2021b, Combes, 2021c, Combes, 2021d] against the Sigaba cipher machine ([Savard and Pekelney, 1999]), which was used by the United States for message encryption during World War II. In this series of posts, Stuart Combes explains the details of his attack, based on the improved method of Stamp and Chan (2007) ([Stamp and On Chan, 2007]). By exploiting the fact that most of the computations needed for breaking Sigaba can be parallelized, the attack was practically implemented using CUDA.
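As a back-of-the-envelope check of the 2^83 search-space figure quoted above: the key space of a length-25 transposition key has 25! permutations, and log2(25!) can be computed on the host with the standard lgamma function. This snippet is purely illustrative and not part of the tutorial's listings.

// Sanity check: log2(25!) = ln(25!) / ln(2) = lgamma(26) / ln(2) ~ 83.7.
#include <cmath>
#include <cstdio>

int main() {
    double bits = std::lgamma(26.0) / std::log(2.0); // log2(25!)
    std::printf("log2(25!) = %.1f\n", bits);         // prints roughly 83.7
    return 0;
}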
Understanding this attack and how it was implemented can further deepen your knowledge, and it is a nice example of how CUDA can be used for the cryptanalysis of a more advanced cipher.

Summary

This tutorial outlined how multi-core GPU devices can also be used to solve problems related to classical cryptanalysis. Starting from a completely blank project, we built up and efficiently implemented a full CUDA-based tool which is able to automatically decrypt ciphertexts encrypted by MAS or columnar transposition ciphers. Some major questions raised in the process of designing a CUDA-based cryptanalysis tool are:

• Are we going to use PRNGs? If so, should we initialize them on the device?

• Which metaheuristic are we going to apply? Are we going to use a hill climbing approach (straightforwardly accepting the first better candidate)? Or maybe a regular neighborhood search defined by some search operator I, always choosing the best neighbor? Or some alternative metaheuristic such as simulated annealing [Van Laarhoven and Aarts, 1987] or tabu search [Glover and Laguna, 1998]?

• Which score are we going to use? Is it going to be a single score (just bigrams) or a multi-objective score (say, bigrams plus trigrams plus pentagrams)? What is the size of the data related to the cost function that we need to upload to the device? Is it small enough to be kept inside the registers? If not, is it small enough to be kept inside the shared memory? If not, maybe we can reshape the blocks or decrease the number of threads inside a given block to further increase our chances of utilizing the shared memory?

• How often do we need to transfer data back and forth between the host and the device? Maybe we can optimize the decryption process to keep such transfers to a minimum?

• How are we going to logically organize the threads? Are they going to be entirely independent, or will they mutually work on the same distinct subset of a given problem? How often do they need to be synchronized?

Once we have answered the major questions, we can make a sketch of our implementation (one such sketch is given after the lists below). During this sketching we can catch a number of design errors which are not so easy to reveal at the time of the initial design. Furthermore, by using diagrams we can proactively design the role and scope of each array and function. Moreover, we can estimate the overhead of our draft architecture and try to optimize our method before the actual implementation.

List of Tables
1 Overview of the predefined constants
2 Overview of the types of memory blocks in a CUDA device
3 Elapsed time comparison between CUDA and the hill-climbing solver in CT 2.1 (build 8853.1)

Acronyms
CPU central processing unit
CT2 CrypTool 2
GPU graphics processing unit
MAS monoalphabetic substitution
PRNG pseudo-random number generator
SCT single-columnar transposition
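As a closing illustration of the design checklist from the Summary, the following skeleton gathers the recurring ingredients in one place: a per-thread PRNG initialized on the device, a shared-memory copy of the scoring table, stand-alone threads, and a single kernel launch with one read-back. All names are illustrative, and the cipher-specific routines are deliberately left as comments; this is a sketch under those assumptions, not the tutorial's actual code.

// Illustrative skeleton only; mutateKey(), decrypt() and the scoring call
// stand in for the cipher-specific routines developed in the tutorial.
#include <curand_kernel.h>

#define KEY_LEN   25
#define CLIMBINGS 15000

__global__ void crackKernel(const char *cipher, int n, const float *gLog2Freq,
                            unsigned long long seed, float *bestScores) {
    __shared__ float log2Freq[26 * 26];            // fast per-block copy
    for (int i = threadIdx.x; i < 26 * 26; i += blockDim.x)
        log2Freq[i] = gLog2Freq[i];
    __syncthreads();

    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);               // device-side PRNG, one per thread

    int key[KEY_LEN];
    for (int i = 0; i < KEY_LEN; ++i) key[i] = i;  // shuffle with rng in practice
    float best = -1e30f;
    for (int step = 0; step < CLIMBINGS; ++step) {
        // mutateKey(key, &rng);                   // one of operators I-III
        // decrypt(cipher, n, key, plain);
        // float s = scoreLog2Bigrams(plain, n, log2Freq);
        // if (s > best) best = s; else /* undo the mutation */;
    }
    bestScores[tid] = best;                        // single read-back per thread
}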
2021-03-26T01:16:23.230Z
2021-03-25T00:00:00.000
{ "year": 2021, "sha1": "845405c37ff89e4c51bc92fa0727048d34ad4744", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "845405c37ff89e4c51bc92fa0727048d34ad4744", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
257087556
pes2o/s2orc
v3-fos-license
Cardiometabolic profile and leukocyte telomere length in a Black South African population

Several studies have reported a possible association between leucocyte telomere length (LTL) and cardio-metabolic diseases (CMDs). However, studies investigating such an association are lacking in South Africa, despite a very high prevalence of CMDs. We investigated the association between LTL and CMD risk profile in a black South African population. This was a cross-sectional study with participants > 21 years of age residing in five townships in Cape Town. CMD markers were compared between men and women and across quartiles of LTL. Linear and logistic regressions were used to relate increasing quartiles of LTL and Log10 LTL to the CMD risk profile, with appropriate adjustment. Among 676 participants, diabetes, obesity and hypertension prevalence were 11.5%, 23.1% and 47.5%. Waist circumference, hip circumference and highly sensitive C-reactive protein values were significantly higher in women (all p < 0.001), while HDL-C (p = 0.023), creatinine (p = 0.005) and gamma glutamyl transferase (p < 0.001) values were higher in men. In an age-, sex- and BMI-adjusted linear regression model, Log10 LTL was associated with low HDL-C (beta = 0.221; p = 0.041), while logistic regression showed a significant association between Log10 LTL and prevalent dyslipidaemia characterised by high LDL-C. In this population, the relationship between LTL and CMD is weak, given its association with only HDL-C and LDL-C.

Results

Out of the 1116 participants examined in this study, DNA extracted from stored samples was available for 676 participants. This number reflects the fact that LTL could only be quantified on good-quality DNA samples (absorbance 260 nm/280 nm ratio between 1.7 and 2.0). The prevalence of the cardiometabolic disorders was 47.5% for obesity, 23.1% for hypertension and 11.5% for type 2 diabetes (Table 1). Median weight, BMI, heart rate, WC, HC and hs-CRP values (all p < 0.001) were higher in women than in men, while height (p < 0.001), HDL-C (p = 0.023), creatinine (p = 0.005) and GGT (p < 0.001) values were significantly higher in men than in women (Table 1). Moreover, more women than men were obese by all parameters measured [BMI ...].

Table 1. Cardiometabolic characteristics of the study population. Legend: SBP systolic blood pressure, DBP diastolic blood pressure, WC waist circumference, WHR waist to hip ratio, WHtR waist to height ratio, HDL-C high-density lipoprotein cholesterol, LDL-C low-density lipoprotein cholesterol, GGT gamma glutamyl transferase, hs-CRP highly sensitive c-reactive protein. * indicates a significant difference in the variable between men and women.

The distribution of TL is shown in Fig. 1A. In order to obtain a normal distribution of TL, it was Log10 transformed (Fig. 1B). The association between LTL and cardiometabolic profile was investigated by categorizing the LTL values into quartiles, with the first being the lowest and the fourth being the highest (Table 2). Amongst the different cardio-metabolic parameters investigated, quartiles of LTL were shown to be positively and significantly associated with HDL-C (Table 3). Linear regression carried out in an age- and sex-adjusted model showed a significant association between Log10 LTL and HDL-C (beta = 0.028; p = 0.041) (Table 4), while increasing quartiles of LTL were not associated with HDL-C in linear regression models with similar levels of adjustment.
Neither increasing quartiles of LTL nor Log10 LTL was associated with other cardio-metabolic parameters in age- and sex-adjusted linear regression models. In logistic regressions, there were no associations between quartiles of LTL and diabetes, hypertension, any dyslipidaemia or obesity variables. However, when LTL was log transformed, there was a significant association between Log10 LTL and prevalent dyslipidaemia with high LDL-C > 3.0 mmol/L (OR = 0.41, p = 0.045) (Table 5).

Discussion

This study examined the associations of LTL with cardiometabolic variables of adiposity, hypertension, type 2 diabetes and dyslipidaemia in a black urban South African population. Linear regression carried out in an age- and sex-adjusted model showed a significant association between Log10 LTL and HDL-C as a continuous variable. In a logistic regression model, Log10 LTL was associated with prevalent dyslipidaemia characterised by high LDL-C. Correlation analysis showed that LTL was associated with total cholesterol and non-HDL-C in women and with urea in men. However, neither quartiles of LTL nor log-transformed values were associated with hypertension, obesity or type 2 diabetes, which was surprising and warrants further exploration in this population.

The association between LTL and some lipid parameters in the study suggests a possible but weak relationship between shortened TL and dyslipidaemia. Similar results were obtained in the United States13 and in Iran14. Dyslipidaemia, characterised by altered serum lipid levels, is associated with several disease conditions, including coronary heart disease, hypertension, diabetes and obesity, as well as with oxidative stress, which is related to LTL shortening. Although most published literature reported a positive association between short telomere length and a high prevalence of diabetes, obesity, hypertension and other cardiovascular diseases, in this study there were no associations between LTL and diabetes, hypertension or obesity. A meta-analysis of multiple studies found a significant negative association between LTL and diabetes15, while a systematic review reported a weak to moderate association between obesity and telomere length16. Short TL was also positively correlated with high SBP and DBP11, high fasting glycaemia17, altered lipid profile markers6 and cardiovascular diseases10,18-20. The proposed pathway of the association between shortened TL and CMDs is bi-directional, with cardio-metabolic diseases causing shortening of telomere length and short telomere length increasing the risk of cardio-metabolic diseases. However, these studies have all been carried out in European, Asian and American populations, with no studies from Africa, suggesting that the lack of association in our study could be a result of population differences. Moreover, the sample sizes of these cross-sectional studies and systematic reviews/meta-analyses were large (a minimum of > 5000) compared with 676 in our study; those studies therefore had more power to detect differences than ours. It is possible that LTL could also be determined by the origin and evolution of individuals. Hansen reported shorter TL in Europeans and in African Americans originating from Western Africa compared with those living in Africa and originating from Tanzania (Eastern Africa)21.
These results are consistent with other studies reporting longer telomere length in Black Africans compared with white Europeans and Americans, in both children and adults22-27. However, TL was observed to be longer in white than in black teachers in South Africa28. In this South African population, the risk of cardiovascular disease was higher in black teachers than in white teachers28. These results show that genetic differences between ethnic groups and environmental factors contribute to overall telomere length.

Table 3. Cardio-metabolic profile presented by telomere length quartiles. Legend: SBP systolic blood pressure, DBP diastolic blood pressure, WC waist circumference, WHR waist to hip ratio, WHtR waist to height ratio, HDL-C high-density lipoprotein cholesterol, LDL-C low-density lipoprotein cholesterol, GGT gamma glutamyl transferase, hs-CRP highly sensitive c-reactive protein.

Another important factor that affects TL is sex, with several studies showing TL to be longer in women than in men9,30-33. However, in the present study there was no association between LTL and gender. Even though the present study was carried out in an African population, which differs from the populations of studies reporting an association between TL and age/gender (Asia, Europe and the USA) and could probably explain the difference in the results, further research is needed to explore these associations in Africans. Moreover, the black South African population in which the study was carried out is genetically diverse, with some individuals having gene flow from Europe, East Africa and South Asia34. This genetic diversity in the study population could be responsible for the difference in the results obtained.

The cross-sectional design of the study prevents conclusions on a causal relationship between LTL and the CMD risk factors investigated. The small sample size of the study and the low sample realisation in men (34%), which is characteristic of epidemiological studies in this country and probably due to men's reluctance to participate, particularly in the drawing of blood samples, are further limitations. In this study, multiple hypothesis testing (MHT) correction was not performed. As such, the results would not withstand a stringent MHT-based correction, which constitutes a possible limitation.

In a black urban South African population, LTL was weakly associated with HDL-C and LDL-C, but not with diabetes, hypertension or obesity. Although associations of LTL with cardiometabolic risk factors have been reported in many populations, there is a paucity of data in African populations. Further research, particularly in longitudinal studies, is required to clearly elucidate the relationship between LTL and CMDs in Africans.

Materials and methods

Study site and population. Participants consisted of black men and women > 21 years old residing in Cape Town. This cross-sectional study, titled Cardiovascular Risk in Black South Africans (CRIBSA), was conducted in 2008-2009, with data collected by 3-stage cluster sampling as previously described35. This sampling technique was used with quotas, pre-specified by age and sex categories, to ensure a representative sample. Recruitment took place during office hours, and those excluded were the following: pregnant and lactating women, and individuals who were bedridden, unable to give consent, on tuberculosis treatment or on antiretroviral treatment.
Table 4. Linear regression models (coefficients and standard errors) for the associations of Log10 telomere length with cardio-metabolic variables. Legend: SBP systolic blood pressure, DBP diastolic blood pressure, TC total cholesterol, TG triglyceride, HDL-C high density lipoprotein cholesterol, LDL-C low density lipoprotein cholesterol, BMI body mass index, TL telomere length, *** = p < 0.001, * = p < 0.05.

Table 5. Logistic regression models (odds ratios and 95% confidence intervals) for the associations of Log10 telomere length with cardio-metabolic conditions. Legend: BMI body mass index, SBP systolic blood pressure, DBP diastolic blood pressure, TC total cholesterol, TG triglyceride, HDL-C high density lipoprotein cholesterol, LDL-C low density lipoprotein cholesterol, TL telomere length, *** = p < 0.001, * = p < 0.05.

Data collection. Data, which were collected by trained fieldworkers, included administered questionnaires, clinical examinations and biochemical analyses. Clinical examinations comprised anthropometry (height, weight, and waist and hip circumferences) and blood pressure (BP) measured using standard techniques36. A calibrated scale was used to measure weight to the nearest 0.5 kg with each participant barefoot and in light clothing. A stadiometer was used to measure height to the nearest 0.1 cm. A flexible tape was used to measure waist and hip circumferences to the nearest 0.1 cm. For waist circumference (WC), the tape was placed approximately 2 cm above the umbilicus, while hip circumference (HC) was measured at the maximum posterior protuberance of the buttocks with the participant standing upright with feet together. Three BP measurements were taken at intervals of 2 min, using an Omron BP monitor, after the participant had rested for at least 5 min. The average of the second and third BP measurements was used for analysis.

After an overnight fast of approximately 10 h, blood samples were collected by venepuncture into EDTA and dry tubes, and a portion was processed for biochemical analysis. Plasma glucose (hexokinase) was measured using a colorimetric method according to the manufacturer's protocol. Total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and triglycerides were measured in serum using standard enzymatic techniques37-39. Low-density lipoprotein cholesterol (LDL-C) was calculated using the Friedewald formula40, while non-HDL-C was calculated using the formula non-HDL-C = TC - HDL-C. An oral glucose tolerance test (OGTT) was administered, with blood samples collected 2 h after a glucose load41. All colorimetric measurements were conducted using a Beckman Coulter AU 500 spectrophotometer. Serum creatinine (CAYMAN CHEMICAL), gamma glutamyl transferase (Abcam) and highly sensitive c-reactive protein (hs-CRP) (BIOMATIK ELISA) measurements were conducted on stored serum samples according to the manufacturers' protocols.

The TL assay was conducted on DNA samples extracted from whole blood stored at -80 °C in EDTA tubes, using the salt extraction technique. Briefly, 5 mL blood samples in EDTA tubes were defrosted to room temperature and poured into a 50 mL centrifuge tube. Thirty mL of lysis buffer (see supplementary material) was added, and red blood cells were lysed by incubation on ice and vortexing. After lysis of red blood cells, the pellets were washed three times with phosphate buffered saline (see supplementary material), which was later discarded. The pellets were then incubated with nuclear lysis buffer (see supplementary material) overnight at 60 °C.
The next day, the supernatant was collected, and the proteins were precipitated using 1 mL of saturated sodium chloride (6 M) solution. The supernatant containing the DNA was collected into new 15 mL centrifuge tubes, and absolute ethanol was added to precipitate the DNA by inversion. Precipitated DNA was removed and washed with 70% ethanol. After washing, the precipitate was dissolved in Tris Ethylene Diamine Tetra-Acetate buffer (see supplementary material), and the concentration and quality of the DNA were measured using a NanoDrop. All samples with an absorbance 260 nm/280 nm ratio from 1.7 to 2 were diluted to 5 mg/mL using polymerase chain reaction (PCR) grade water, and TL was measured by quantitative real-time PCR using the method described by O'Callaghan and Fenech42. Serial dilutions of the telomere standard and the single copy gene (36B4) standard were made as described by O'Callaghan and Fenech42. A master mix solution containing Power SYBR I (AmpliTaq Gold DNA polymerase, dNTPs, SYBR I Green Dye, optimised buffers and passive reference dye (ROX); 10 μL, 1×), forward primer (1 μL, 0.1 μM), reverse primer (1 μL, 0.1 μM) and ddH2O (4 μL) was prepared, mixed well and briefly centrifuged. Using a multichannel pipette, 16 μL of master mix solution was pipetted into each well of a 96-well plate. Into the corresponding wells were added 4 μL each of DNA sample, standards, positive control and non-template control (distilled water), in duplicate. The plate was sealed with an optically clear film, centrifuged briefly and run in a QuantStudio 7 Flex Real Time PCR Thermocycler using the following PCR conditions: 10 min at 95 °C, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min, followed by a dissociation (or melt) curve. At the end of the run, the plate was removed and discarded. Each sample was amplified twice, using the telomere forward and reverse primers and the single copy gene forward and reverse primers. After amplification was completed, the AB software produced a value for each reaction that is equivalent to kb/reaction, based on the telomere standard curve values. The kb/reaction values for telomere and the genome copies/reaction values for the diploid genome copy were exported and used to calculate the LTL in kilobases (kb) as follows:

LTL = (telomere kb per reaction value) / (diploid genome copy number per reaction).

Statistical analysis. The linear trend in CMD profile (continuous variables) across the different quartiles of TL was computed using the median test. Similarly, the chi-square test was computed, and the linear-by-linear association was used to compare the trend in proportions of disease conditions (categorical variables) across the quartiles of TL. Spearman correlation was used to assess the association between quartile of TL and cardio-metabolic parameters. The interactions between TL categories and cardio-metabolic risk profile were tested using linear and logistic regressions, by incorporating in the same model the main effects of the variables of interest as well as their interaction term with TL. In linear and logistic regression analyses, TL was log transformed. A p value < 0.05 was considered statistically significant.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
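For illustration only, the LTL ratio defined in the Materials and methods works out as follows with made-up per-reaction values (the study's actual measurements are not reproduced here):

LTL = (telomere kb per reaction) / (diploid genome copies per reaction)
    = (9.2 x 10^4 kb) / (1.0 x 10^2 copies)
    = 920 kb per diploid genome.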
2023-02-23T14:09:39.470Z
2022-02-28T00:00:00.000
{ "year": 2022, "sha1": "3965465e1d4aad3c384d78db7ed0a4ec99048056", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-07328-8.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "3965465e1d4aad3c384d78db7ed0a4ec99048056", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
7852267
pes2o/s2orc
v3-fos-license
Sensitization of Cervical Carcinoma Cells to Paclitaxel by an IPP5 Active Mutant

Introduction

Reversible protein phosphorylation regulates the biological activity of many protein complexes, and is regarded as a major mechanism for the control of cell cycle progression. It has been reported that semi-selective inhibitors of PPases, such as okadaic acid, cantharidin, and fostriecin, influence several aspects of cell cycle progression (Cohen, 2002). An important Ser/Thr protein phosphatase, protein phosphatase-1 (PP1), regulates a series of physiological events, such as the cell cycle, gene expression, protein synthesis, glycolipid metabolism and memory formation (Ceulemans and Bollen, 2004). Its critical function in mitosis is evidenced by the occurrence of metaphase arrest in various eukaryotic cells with PP1 mutation or inhibition (Booher and Beach, 1989; Kinoshita et al., 1990). Microtubule dynamics experiments also indicate that PP1 activity is necessary for the completion of mitosis (Cheng et al., 2000). Furthermore, abnormally high expression of PP1 has been observed in some tumor cells, indicating that PP1 might promote the growth of malignant tumors (Sogawa et al., 1996). Protein phosphatase inhibitor-1 (PPI-1) is the first endogenous molecule found to inhibit PP1 activity when phosphorylated by protein kinase A (PKA) at Thr-35 (Nimmo and Cohen, 1978). However, when Thr-35 is mutated to Asp, PPI-1 can inhibit the activity of PP1 without phosphorylation by PKA. The active mutant of IPP5 (8-60hIPP5m), the latest member of the inhibitory molecules for PP1, has been demonstrated to inhibit the activity of PP1 in a Thr-40-dependent manner in vitro, with an IC50 similar to that of PPI-1 (Wang et al., 2008). Previous studies from our laboratory have shown that 8-60hIPP5m significantly inhibited the growth of human cervix carcinoma (HeLa) cells by inducing apoptosis (Zeng et al., 2009) and G2/M arrest (Zeng et al., 2012).
Carcinoma of the cervix is considered relatively resistant to chemotherapy. It is the second most common cancer in women worldwide and one of the most important causes of cancer-related death, especially in developing countries (Jemal et al., 2008). It is known that a high prevalence of HPV infection occurs in cervical cancer, and the two commonest HPV genotypes in cervical cancer are HPV 16 and 18 (Wang et al., 2013). Despite the declining mortality rate for cervical cancer through the last decade, advanced or recurrent disease remains a major cause of death (Long, 2007). Current treatment modalities such as surgical ablation and/or external radiotherapy intervention remain largely palliative for cervical cancer patients, because the disease recurs in a refractory form.

Paclitaxel (Taxol) is capable of inhibiting microtubule depolymerization and arresting the cell cycle at the G2/M phase, which leads to apoptotic cell death (Horwitz, 1992). In the clinic, paclitaxel has been shown to have significant therapeutic benefits in ovarian, breast, non-small cell lung, and head and neck cancers (Sparano et al., 2008). However, the success of paclitaxel chemotherapy in cervical cancer patients is limited by myelotoxicity and neurotoxicity (Zanetta et al., 2000). Furthermore, tumors tend to acquire resistance to cytotoxic chemotherapeutic agents, including paclitaxel (Koshiyama et al., 2006). The molecular basis of resistance to paclitaxel is not well understood. Paclitaxel has been implicated in regulating targeted cellular proteins that promote cell survival and block apoptosis (such as Bcl-2 and Bcl-XL) (Chun and Lee, 2004). Another mechanism of chemoresistance involves enhanced phosphorylation of protein kinase B/Akt (Page et al., 2000). Furthermore, NF-κB promotes cell survival and up-regulates genes which are important for tumor proliferation and metastasis (Pahl, 1999). So blocking NF-κB activation may augment cancer chemotherapy. Therefore, combining paclitaxel with agents that induce apoptosis and inhibit NF-κB activity may be effective.

Based on the finding that 8-60hIPP5m induces apoptosis and G2/M arrest in HeLa cells, we hypothesized that it may sensitize cervical cancer cells to paclitaxel. In this study, we examined the effects of 8-60hIPP5m overexpression in HeLa cells in the presence of paclitaxel, and found that the combination was very effective at inhibiting proliferation and inducing mitotic arrest and apoptosis. We also investigated the potential molecular mechanism underlying this synergistic effect, and determined whether the combination enhanced the inhibition of the antiapoptotic signal transducers Akt and NF-κB, leading to increased activation of caspase-mediated apoptosis.
Cell transfection

The expression vectors phIPP5-B and p8-60hIPP5m-B were transfected into HeLa cells using Lipofectamine 2000 reagent (Invitrogen, Carlsbad, CA, USA), with pcDNA3.1/myc-His(-)B as a mock control. Stable cell lines overexpressing hIPP5 or 8-60hIPP5m were selected with 600-1000 μg/ml G418 for 2-3 weeks and cloned by limiting dilution. These stable cell lines were designated HeLa-hIPP5 and HeLa-8-60hIPP5m, respectively. The established stable cell lines were maintained in the same culture medium as used for parental HeLa cells. The stable expression of hIPP5 or 8-60hIPP5m was confirmed by RT-PCR and Western blot. The stably transfected cells were treated with 10 nM paclitaxel (Concord Pharmaceutical, China) for different periods of time and then subjected to Western blot analysis, DNA analysis, and apoptosis assay.

[3H]Thymidine incorporation

HeLa-hIPP5, HeLa-8-60hIPP5m, HeLa-mock or parental HeLa cells (5×10^3/well) were seeded into 96-well plates and cultured in 10% FCS-DMEM containing 0.05 nM paclitaxel for 72 h. After removing the medium, [3H]thymidine (Amersham Pharmacia Biotech, Little Chalfont, Buckinghamshire, UK) at a dose of 0.5 μCi/well was added, and the cells were cultured in 10% FCS-DMEM for another 18 h. The cells were then harvested onto glass fibers with a multiple cell harvester, and the proliferation of HeLa cells was assessed by [3H]thymidine incorporation using a β-Scintillation Counter (Wallac, Milton Keynes, Bucks, UK). Results of [3H]thymidine incorporation (cpm) are expressed as means±SD.

Colony formation in soft agar

Single-cell suspensions of HeLa-hIPP5, HeLa-8-60hIPP5m, HeLa-mock or parental HeLa cells (500 cells) were cultured using 0.3% type II agarose in 10% FCS-DMEM containing 0.05 nM paclitaxel on 6-well plates that had previously been coated with 0.6% type II agarose, and incubated under standard culture conditions. After 7-10 days, the numbers of colonies (>50 cells) were counted under an inverted microscope.

Cell cycle analysis

Cells were harvested and washed in PBS, then fixed in 75% alcohol for 30 min at 4°C. After washing in cold PBS three times, cells were resuspended and incubated in 1 ml of PBS containing 40 μg of propidium iodide (PI, Sigma Chemical Co., St. Louis, MO, USA) and 100 μg of RNase A (Sigma Chemical Co.) for 30 min at 37°C. Samples were then analyzed for DNA content using a fluorescence-activated cell sorter FACSCalibur (Becton Dickinson, Mountain View, CA, USA).

Apoptosis assay

Cells were exposed to 10 nM paclitaxel for 24 h, harvested, and stained with PI and Rhodamine 123 (R-302, Molecular Probes, Eugene, OR, USA) according to the manufacturer's instructions. Stained cells were analyzed using the FACSCalibur.

Assay of caspases

After treatment with 10 nM paclitaxel for 8 h, cells were collected and lysed. Caspase-3 activity was assayed using a Caspase-3 Colorimetric Assay kit (BD Pharmingen, San Diego, CA, USA), and caspase-8 activity was assayed using a Caspase-8 Activity Assay kit (Chemicon International, Temecula, CA, USA) according to the manufacturers' instructions.
Measurement of cytochrome c release

Cells were lysed in lysis buffer (10 mM HEPES [N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid, pH 7.5], 10 mM KCl, and 1 mM EDTA [ethylenediaminetetraacetic acid]) supplemented with protease inhibitor cocktail (Sigma). The cells in lysis buffer were frozen and thawed 3 times, and spun at 2000g for 5 min. The supernatant was further centrifuged at 60 000g for 30 min at 4°C and used for cytochrome c content analysis by Western blot.

Assessment of IκBα degradation and NF-κB nuclear translocation

Cytoplasmic and nuclear extracts were prepared using NE-PER nuclear and cytoplasmic extraction reagents (Pierce Biotechnology, Rockford, IL, USA), and the protein concentrations were determined using the BCA protein assay kit. IκBα in cytoplasmic extracts and the NF-κB subunit p65 in nuclear extracts were detected by Western blot using specific antibodies.

Assay of PI3-K activity

PI3-K activity was assayed with a PI3-K ELISA kit (Echelon Biosciences Inc., Salt Lake City, UT, USA) according to the manufacturer's instructions. In brief, cells were treated with 10 nM paclitaxel for 1 h. Cell lysates were prepared, and PI3-K protein was immunoprecipitated with an antibody against the p85 subunit and incubated with PI(4,5)P2. The reaction products were incubated with a PI(3,4,5)P3 detector protein before being added to PI(3,4,5)P3-coated microplates for competitive binding. A peroxidase-linked secondary antibody and colorimetric detection reagents were used to detect PI(3,4,5)P3 detector protein binding to the plate. The colorimetric signal was inversely proportional to the amount of PI(3,4,5)P3 produced by PI3-K activity.

Statistical analysis

All experiments were repeated a minimum of three times. Pairwise comparisons were conducted using Student's t test. p values of less than 0.05 were considered statistically significant.

8-60hIPP5m enhances the inhibition of cell proliferation by paclitaxel

To investigate the biological functions of human IPP5, human cervical carcinoma HeLa cells were transfected with the hIPP5 or 8-60hIPP5m expression vector. We have previously demonstrated by RT-PCR and Western blot that the hIPP5 or 8-60hIPP5m gene can be efficiently transfected into HeLa cells (Zeng et al., 2009). In order to investigate whether there is any synergism between paclitaxel and 8-60hIPP5m, [3H]thymidine incorporation was used to evaluate DNA synthesis, which reflects cell proliferation. The results showed that overexpression of 8-60hIPP5m enhanced the inhibition of HeLa cell proliferation by paclitaxel. As shown in Figure 1a, when cells were treated with paclitaxel, the [3H]thymidine incorporation level in HeLa-8-60hIPP5m cells was about 25% of that in mock-HeLa cells (p < 0.01), which had a proliferative rate similar to that of parental HeLa cells. We also performed colony formation assays and found that clonal growth of HeLa-8-60hIPP5m cells was also significantly inhibited compared with mock-HeLa cells or parental HeLa cells in culture medium containing 0.05 nM paclitaxel (Figure 1b). Taken together, these results suggest that 8-60hIPP5m synergizes with paclitaxel to inhibit HeLa cell proliferation.
8-60hIPP5m synergizes with paclitaxel to induce apoptosis

We further investigated whether 8-60hIPP5m could enhance paclitaxel-induced apoptosis. Figure 2 showed that the combination of 8-60hIPP5m with paclitaxel increased apoptosis to 48.82%, which is significantly higher than 8-60hIPP5m or paclitaxel alone (p < 0.01). Thus, although 8-60hIPP5m had some effect on its own, it greatly enhanced the ability of paclitaxel to induce apoptosis in HeLa cells. One mechanism of apoptosis is related to the activation of the caspase cascade. We therefore examined caspase-3 and caspase-8 activities in paclitaxel-treated cells. As shown in Figure 3a, treatment of HeLa-8-60hIPP5m cells with paclitaxel significantly increased caspase-3 activity. Caspase-8 activity was slightly higher in HeLa-8-60hIPP5m cells treated with paclitaxel, but the difference was not statistically significant. The activation of caspase-3 by paclitaxel in HeLa-8-60hIPP5m cells suggests that the apoptosis could be mediated, at least in part, through cytochrome c release from the mitochondria in these cells. To test this hypothesis, we examined the effects of 8-60hIPP5m and paclitaxel treatments on cytochrome c release. Figure 3b showed that 8-60hIPP5m or paclitaxel alone did not affect the accumulation of cytochrome c in the cytosol. However, the combination of 8-60hIPP5m and paclitaxel resulted in a significant increase in cytochrome c release. Cytochrome c release into the cytoplasm could result from an increase in proapoptotic Bcl2 family members or a decrease in prosurvival Bcl2 family members.

8-60hIPP5m synergizes with paclitaxel to induce G2/M phase arrest

Paclitaxel stabilizes microtubules and causes mitotic arrest, which is important for paclitaxel-induced apoptosis (Woods et al., 1995; Jordan et al., 1996). We examined the effect of the combination of 8-60hIPP5m and paclitaxel on cell-cycle progression. As shown in Figure 4, 8-60hIPP5m or paclitaxel alone induced G2/M arrest to some degree. However, the combination of 8-60hIPP5m with paclitaxel significantly decreased the proportion of HeLa cells in the G0/G1 phase (from 69.83% to 27.31%, p < 0.01) and increased the proportion of cells in the G2/M phase (from 18.69% to 57.12%, p < 0.01).
Paclitaxel-induced NF-κB activation and IκBα degradation are inhibited by 8-60hIPP5m

Under normal conditions, the majority of NF-κB subunits are sequestered in the cytoplasm by IκBα, and translocate into the nucleus following IκBα degradation. Paclitaxel activates NF-κB in several cell lines through the degradation of IκBα (Das and White, 1997; Lee and Jeon, 2001; Smitha et al., 2005). To investigate whether paclitaxel-induced NF-κB activation occurs through IκBα degradation, Western blotting for IκBα was performed using the cytoplasmic protein extracts. As shown in Figure 5, we observed the activation of NF-κB in HeLa cells by paclitaxel, and noticed that 8-60hIPP5m down-regulated the nuclear accumulation of the NF-κB p65 subunit induced by paclitaxel. The nuclear extract of HeLa cells treated with tumor necrosis factor-α, a well known activator of NF-κB, was used as the positive control. Interestingly, consistent with the inhibition of NF-κB activation in HeLa-8-60hIPP5m cells in response to paclitaxel, the total IκBα level in paclitaxel-treated HeLa-8-60hIPP5m cells showed no significant change, while that of paclitaxel-treated control cells was decreased, indicating that 8-60hIPP5m could inhibit paclitaxel-induced IκBα degradation in HeLa cells. These results suggest that 8-60hIPP5m inhibits NF-κB activation, likely through blocking the degradation and/or inducing the synthesis of the inhibitory protein IκBα.

8-60hIPP5m inhibits paclitaxel-induced PI3-K/Akt activation

Akt is a survival signal mediator that in many cases is regulated by NF-κB (Ozes et al., 1999; Pianetti et al., 2001). As shown in Figure 6a, after treatment with paclitaxel, Akt phosphorylation was clearly observed in control cells, while 8-60hIPP5m almost completely abolished the phosphorylation of Akt, indicating a possible role for Akt in the synergistic effect of paclitaxel and 8-60hIPP5m. Similarly, 8-60hIPP5m decreased PI3-K activity in paclitaxel-treated cells (Figure 6b). Taken together, these results suggest that the enhancement of paclitaxel-induced apoptosis by 8-60hIPP5m likely occurs through the inhibition of NF-κB activation, IκBα degradation, and PI3-K/Akt activity, resulting in the chemosensitization of HeLa cells to paclitaxel.

Discussion

Cervical cancer is one of the most common gynecologic malignancies. Recent studies reported that concurrent chemoradiotherapy with docetaxel and cisplatin in advanced cervical cancer has a good short-term effect but increased toxicity, so the long-term effect needs further observation (Ke et al., 2012). Ionizing radiation can induce many base alterations, so increasing the sensitivity of tumor cells to chemotherapy would improve outcomes in patients with cervical cancer.

Protein phosphatase 1 (PP1) is a major eukaryotic protein serine/threonine phosphatase that regulates a variety of cellular functions through the interaction of its catalytic subunit with different regulatory subunits. IPP5, the latest member of the PP1 inhibitory molecules, contains 116 amino acids. IPP5 shares significant homology with IPP1, especially in the two conserved motifs of the N terminus, KIQF and Thr-40. 8-60hIPP5m, the constitutively activated form of hIPP5, which contains residues 8-60 of the N terminus with an Asp substituting for Thr-40 to mimic the functional effects of phosphorylation, can inhibit PP1 activity without phosphorylation by PKA.
In this study, we found that 8-60hIPP5m could synergize with paclitaxel to arrest human cervical cancer cells at the G2/M phase. We also found that 8-60hIPP5m enhanced paclitaxel-induced apoptosis. This study is an extension of our earlier work, in which it was discovered that 8-60hIPP5m caused HeLa cell growth retardation in vitro and in vivo by inducing G2/M arrest and apoptosis (Zeng et al., 2009; 2012).

In eukaryotic organisms, cell cycle progression is regulated to a large extent by the reversible phosphorylation of various proteins. PP1 plays an important role during cell cycle progression, especially during mitosis. Accumulating evidence demonstrates that both phosphorylation and dephosphorylation are involved in the control of the cell cycle. Neutralizing PP1 by anti-PP1 antibodies, mutation of PP1, or treatment with various natural phosphatase inhibitors (such as okadaic acid, calyculin-A, tautomycin, microcystin-LR, fostriecin and cantharidin) has been shown to interfere with cell cycle progression and checkpoint abrogation, resulting in multiple aberrant mitotic spindles and apoptotic cell death (Kinoshita et al., 1990; Van Dolah and Ramsdell, 1992; Sakoff et al., 2002). In human cells, PP1 can act as a histone H1 phosphatase, which is required for chromatin decondensation during the exit from mitosis (Paulson et al., 1996). The subcellular localizations of PP1 isoforms change at mitosis. PP1α is located at the centrosome, while PP1γ and PP1δ are associated with mitotic spindles and chromosomes, respectively (Andreassen et al., 1998). This might explain why mutation or inhibition of PP1 can cause complex abnormal phenotypes, including delayed transition from metaphase to anaphase, condensed chromosomes, formation of abnormal spindles, malfunction of microtubule dynamics and chromosome separation, and defects in cytokinesis.

One of the mechanisms of paclitaxel-induced G2/M arrest is that paclitaxel can induce CDK1 activation, inhibit the expression of cyclin A and cyclin B1 proteins in a time-dependent manner, and up-regulate the Cdk inhibitor p21WAF1/CIP1 (Choi and Yoo, 2012). Our previous research also demonstrated that 8-60hIPP5m can induce CDK1 activation in HeLa cells transfected with 8-60hIPP5m, delay the expression of cyclin A and cyclin B1 in the cell cycle, and up-regulate p21WAF1/CIP1 expression (Zeng et al., 2012). These findings suggest that CDK1, cyclin A, cyclin B1 and p21WAF1/CIP1 may be common target molecules for paclitaxel and 8-60hIPP5m, and that both paclitaxel and 8-60hIPP5m have the same effects on these molecules. This may help to explain why 8-60hIPP5m can enhance G2/M arrest by paclitaxel. However, further research is needed to investigate the exact mechanism by which 8-60hIPP5m synergizes with paclitaxel to induce G2/M arrest.
Tumor cells often evade apoptosis by overexpressing antiapoptotic proteins such as Bcl-2, NF-κB, and Akt, which provide them with survival advantages (Wang et al., 1999; Vivanco and Sawyers, 2002). Paclitaxel activates NF-κB in several cell systems, probably through the principal kinase IKK-β (Lee and Jeon, 2001). In the present study we observed that paclitaxel-induced NF-κB activation was down-regulated in HeLa-8-60hIPP5m cells, which may contribute to the sensitization of HeLa-8-60hIPP5m cells to paclitaxel-induced apoptosis. Previously published studies describe paclitaxel as an activator of Akt, which is a serine/threonine protein kinase and a downstream target of phosphoinositide 3-kinase (Mabuchi, 2002). We observed that paclitaxel-induced Akt activation was suppressed in HeLa-8-60hIPP5m cells. It is known that Akt suppresses apoptosis by activating NF-κB (Ozes et al., 1999; Pianetti et al., 2001). According to a recent report, treatment with LY294002, a specific inhibitor of phosphoinositide 3-kinase, resulted in the enhancement of paclitaxel-induced cytotoxicity. This process was followed by the inhibition of NF-κB transcriptional activity, indicating that NF-κB may be the crucial intermediary mediator connecting Akt with the intrinsic susceptibility of cancer cells to chemotherapeutic agents (Nguyen et al., 2004).

In conclusion, we report a novel role for 8-60hIPP5m in sensitizing cervical carcinoma cells to paclitaxel. We found that 8-60hIPP5m synergizes with paclitaxel to induce G2/M arrest and apoptosis, which may involve the inhibition of NF-κB activation, IκBα degradation, and PI3-K/Akt activity. These results suggest that 8-60hIPP5m might be explored for therapeutic use in combination with chemotherapy.

Figure 1. 8-60hIPP5m Enhances the Inhibition of Cell Proliferation by Paclitaxel. a, Stably transfected HeLa cells and parental HeLa cells were treated with paclitaxel as indicated. Cell proliferation was determined by [3H]thymidine incorporation assay. Values shown are means±SD of quadruplicate cultures from one experiment, which is representative of four independent experiments conducted. b, Stably transfected HeLa cells and parental HeLa cells (5×10^3/well) were cultured in type II agarose medium containing 0.05 nM paclitaxel. Colonies (>50 cells) were counted after 7 days of incubation. Values are expressed as means±SD of triplicate cultures. **, p < 0.01 versus paclitaxel-treated parental HeLa cells or paclitaxel-treated HeLa-mock cells.

Figure 2. 8-60hIPP5m Synergizes with Paclitaxel to Induce Apoptosis. Stably transfected HeLa cells were treated with paclitaxel (10 nM) for 24 h. Cells were labeled with a green fluorescent cationic dye (Rhodamine 123, R-123) and PI. The percentages in the lower left and upper left represent the early and the late apoptotic cells, respectively.

Figure 5. 8-60hIPP5m Inhibits Paclitaxel-Induced NF-κB Activation and IκBα Degradation. Stably transfected HeLa cells in 6-well plates were treated with paclitaxel (10 nM) for 60 min. Tumor necrosis factor (0.1 nM) treatment was used as a positive control. Cells were collected and proteins were extracted using NE-PER Nuclear and Cytoplasmic Extraction Reagents. NF-κB p65 in the nuclear extract and IκBα in the cytosol were detected by Western blot. Nucleoporin p62 and actin were used as the nuclear and cytosol protein loading controls, respectively.
Figure 6. 8-60hIPP5m Inhibits Paclitaxel-Induced PI3-K/Akt Activation. a, Paclitaxel-induced Akt activation is down-regulated by 8-60hIPP5m in HeLa cells. Stably transfected HeLa cells were treated with paclitaxel (10 nM) for 1 h. Whole cell lysates were resolved on a 10% PAGE gel and blotted against phospho-Akt serine 473. b, 8-60hIPP5m inhibits paclitaxel-induced PI3-K activity. Stably transfected HeLa cells were treated with paclitaxel (10 nM) for 1 h. Cell lysates were prepared for the PI3-K activity assay. Similar results were obtained in three separate experiments. *p < 0.05 versus parental HeLa cells or HeLa-mock cells.
2018-04-03T00:00:34.947Z
2014-10-23T00:00:00.000
{ "year": 2014, "sha1": "76994862dccca08ff157f9a001546125a508cf65", "oa_license": "CCBY", "oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201435053629163&method=download", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "76994862dccca08ff157f9a001546125a508cf65", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12351966
pes2o/s2orc
v3-fos-license
Embodied cognition, abstract concepts, and the benefits of new technology for implicit body manipulation

Current approaches to cognition hold that concrete concepts are grounded in concrete experiences. There is no consensus, however, as to whether this is equally true for abstract concepts. In this review we discuss how the body might be involved in understanding abstract concepts through metaphor activation. Substantial research has been conducted on the activation of common orientational metaphors with bodily manipulations, such as "power is up" and "more is up" representations. We will focus on the political metaphor, which has a more complex association between the concept and the concrete domain. However, the outcomes of studies on this political metaphor have not always been consistent, possibly because the experimental manipulation was not implicit enough. The inclusion of new technological devices in this area of research, such as the Wii Balance Board, seems promising for assessing the groundedness of abstract conceptual spatial metaphors in an implicit manner. This may aid further research to effectively demonstrate the interrelatedness between the body and more abstract representations.

INTRODUCTION

Imagine you are reading a story in which someone turns up the volume on his car radio while in real life you are closing the top of a soda bottle. Would these two things (reading about an action and performing a very similar action) influence each other? Research suggests they would. When you are closing the bottle, you are likely to read the words "turn up" faster than if you were opening the bottle (Zwaan and Taylor, 2006). It appears that a rotation of your hand that is congruent with an implied rotation in a sentence facilitates the speed with which the relevant parts of the sentence are processed. This happens because readers make an elaborate mental representation of what they read that is similar to their experience in real life. How close is this connection between actions on the one hand and cognitive processes, such as reading, on the other? Does it only apply to identical actual and implied movements, or does the connection extend beyond these mappings, for example to abstract concepts that do not imply movement at all? According to embodied cognition approaches, such connections also exist.

Theories of embodied cognition are gaining importance in the field of psychology and beyond (Pecher et al., 2011; Wilson-Mendenhall et al., 2011; Dijkstra and Zwaan, 2014; Glenberg et al., 2014). In contrast to earlier theories of cognition that consider the processing and storage of incoming information to take place in an abstract, symbolic manner, embodied cognition theories focus on the body as being central to shaping the mind (Wilson, 2002). Specifically, cognitive processes are presumed to depend on the sensory-motor system in the brain, which reactivates earlier experiences, a process called sensory-motor simulation (Barsalou, 1999). When such an experience is retrieved, neural states are re-enacted in the systems that were relevant for the original experience, such as action and perception systems. Cognition is therefore grounded through simulation (Barsalou et al., 2003; Dijkstra and Zwaan, 2014). Based on the empirical evidence that has accumulated over the past decade or so, this link between actions and representations of concrete concepts has been well established.
KEY CONCEPT 3 | Concrete concepts
Concrete concepts refer to something that is present in the physical world, such as a tree in a forest. This means that these concepts have physical or spatial constraints. A tree can grow in a forest but not on the moon. Concrete concepts include but are not limited to physical objects in the world. Concrete actions, such as kicking or smiling, are also examples of concrete concepts.

Recently, more critical points of view have been articulated regarding the specificity of the embodied cognition approach and the boundaries of the phenomena that can be explained with this approach. One argument is that embodied cognition research has merely demonstrated that "thoughts and actions go together" but not that the body is essential in carrying out cognitive tasks (Mahon and Caramazza, 2008; Wilson and Golonka, 2013). Another argument is that any effects of grounding are taken as positive evidence for embodiment, even if they are different or oppose one another (Willems and Francken, 2012). Rather than making general predictions regarding the involvement of sensory-motor systems in cognitive processes that back all findings, the hypotheses should be more specific and the explanation should focus more on the underlying mechanisms of embodiment. A third argument concerns support for the claim that similar connections exist between actions and representations of abstract and concrete concepts (Mahon and Caramazza, 2008; Pecher et al., 2011; Maglio and Trope, 2012). This claim has been challenged because abstract concepts, in contrast to concrete concepts, refer to entities that have no physical or spatial constraints; hence, a direct mapping of an abstract concept, such as "democracy," onto a sensory-motor domain is problematic. If abstract concepts without a direct representation in the physical world cannot be physically interacted with, how can they ever be represented through simulation (Mahon and Caramazza, 2008)?

Addressing these arguments with research involving specific hypotheses regarding the role of the body in the way we think and how abstract concepts are grounded is essential in order to be able to "take the next step" in embodied cognition research. The role and impact of the body on cognitive processing have to be specified precisely enough to test falsifiable hypotheses. One way to do this is to specify in each study when and how embodiment occurs (Willems and Francken, 2012). Moreover, it is important to determine the role of the body in cognitive processes in as much detail as possible. This can be done by asking questions such as: Are sensory-motor processes necessary for cognitive processing, sufficient for cognitive processing, neither necessary nor sufficient, or are sensory-motor processes only needed for deep conceptual processing (Fischer and Zwaan, 2008)? The answers may differ depending on the task being used and the frame of mind an individual has at a given point in time (Maglio and Trope, 2012).

The groundedness of abstract concepts can be evaluated with empirical evidence from studies that have examined abstract concepts as instantiations of concrete concepts in a situation (Lakoff and Johnson, 1980, 1999). Abstract concepts such as "democracy" are considered to have an indirect basis in the sensory-motor system, as representations of situations that are created from different individual experiences, such as "voting in a voting booth" (Barsalou and Wiemer-Hastings, 2005).
Thus, abstract concepts can be understood in terms of concrete concepts through metaphorical associations with concrete domains of experience. Research has provided empirical evidence for this mapping between abstract concepts and concrete experiences.

KEY CONCEPT 4 | Orientational metaphors
Orientational metaphors are metaphors in which concepts are spatially related to one another. For example, when we speak of feeling "up" or "down," or when we think of the future being "in front of" and the past "behind" us. Orientational metaphors are exceptionally useful in research because they are plentiful, used in a variety of ways, and easy to represent and manipulate in experiments.

Orientational metaphors provide a spatial orientation for an abstract concept, which can be vertical (down-up), horizontal (left-right) or sagittal (front-back). For some metaphors there is a physical basis, for example "more is up," because stacking items vertically coincides with a higher quantity of those items. The metaphor "power is up" has a more indirect physical basis, resulting from the statistical regularities that one encounters from infancy onward, where power is exerted by someone of greater height (parent-infant, teacher-child). Other mappings between abstract concepts and concrete experiences have been established by bodily manipulations as well.

KEY CONCEPT 5 | Conceptual metaphors
Abstract concepts are understood in the context of concrete experiences. For example, the metaphor "Life is a journey" connects the abstract concept of life to experiences during which one went on a journey, including the knowledge that a journey has an element of time and destination to it. Because of such a concrete experience, the sensory-motor system forms the basis of the representation of the abstract concept.

The conceptual metaphors "right is more" and "up is more" were activated by having participants move in a chair along x- and y-axes, with higher numbers being generated when moving left-to-right and upwards (Dehaene et al., 1993; Hartmann et al., 2012). Other studies examined how orientational metaphors could be combined with the way emotions are represented, as in "positive is up" (Crawford et al., 2006; Casasanto and Dijkstra, 2010). In those cases, the mapping between the abstract concept and the concrete domain is even more complex, because it is based on an association of emotional life experiences and vertical motion. Participants who moved marbles upward or downward with their hands activated the metaphors "positive is up" and "negative is down," retrieving positive memories when moving upward and negative ones when moving downward (Casasanto and Dijkstra, 2010). The "positive is up," "negative is down" representation of the conceptual emotional metaphor has its origin in life experiences, where we cheer when we are happy and sit down with our head in our hands when we are sad. It is therefore remarkable that this metaphor is still activated when an unrelated movement (depositing marbles upward or downward in a container) is being performed. Another abstract concept for which complex mappings with concrete experiences exist is "time." Time has been represented along the horizontal axis, with the past being associated with left and the future with right (Santiago et al., 2007; Ulrich and Maienborn, 2010). Santiago et al.
(2007) found that people are faster to categorize words as belonging to the past or the future when past words require a left-hand response and future words a right-hand response. This time metaphor can be modulated by cultural differences in language representations. For example, speakers of Mandarin use vertical terms to talk about time and therefore responded more quickly to vertical representations of time than speakers of English, who think about time in horizontal spatial terms (Boroditsky, 2001). Abstract concepts may not always be directly grounded through interactions with the world but have their basis in instantiations of concrete experiences and co-occurrences with a certain representation. These mappings between abstract concepts and their more concrete representations are dynamic and can not only be learned over time but also change when co-occurrences change. This discussion of research demonstrating how abstract concepts are grounded in action and perception provides insight into what is grounded and the conditions under which grounding occurs. All studies conducted from an embodied cognition perspective demonstrated an effect of the body (whole body or hand movement) on task performance, with the response following the body manipulation, not the other way around. The question remains, however, whether there are boundaries with regard to the groundedness of abstract concepts. It is plausible that the sensory-motor system becomes less involved as the abstractness of a concept increases (Clark, 1999). We will address this issue in more detail by reviewing several studies on a particularly abstract conceptual metaphor for which embodiment effects may be especially challenging to demonstrate: the spatial political metaphor. SPATIAL ORIENTATION, BODY MANIPULATION, AND POLITICAL METAPHORS The representation of politics along a horizontal axis originates from the way the French Legislative Assembly (established in 1791) was spatially organized in the assembly room, with conservatives situated on the right and liberals on the left. This spatial organization has resulted in the construction of the abstract political concept of the right equaling the conservative end of the spectrum and the left the liberal or progressive end. The abstractness of this metaphor may differ depending on the country in which it is used. In the United States, political debates are broadcast with liberals on the viewer's left and conservatives on the viewer's right. In the Netherlands, on the other hand, the political left/right distinction is represented as a continuum of several parties, suggesting a more subtle spatial array from left to right. Even though the actual left-right seating arrangements have largely been abandoned, the metaphor for political left and right remains in most western countries (Goodsell, 1988). Can this abstract political concept with such an obscure experiential basis still be activated with a manipulation of the body? Several studies have examined the possibility of this activation in several countries that vary in the way in which political parties are represented (Oppenheimer and Trail, 2010; Van Elk et al., 2010; Dijkstra et al., 2012; Farias et al., 2013). Oppenheimer and Trail (2010) demonstrated the activation of the political metaphor in three experiments with different body manipulations (squeezing a hand-grip with the right or left hand, sitting on a chair tilted to the left or right, and clicking on visual targets on the left or right side of a screen).
A manipulation to the left resulted in higher agreement with Democrats on political issues, but a manipulation to the right did not result in higher agreement with Republicans. KEY CONCEPT 6 | Political metaphors "Politics" is an abstract concept that can be understood in the context of more concrete experiences. For example, the terms "left" and "right" are used in politics as equivalents of liberalism (or progressiveness) and conservatism, respectively. This representation of politics on a horizontal axis originates from the way the French Legislative Assembly was spatially organized in the assembly room. Van Elk et al. (2010) found support for the groundedness of political metaphors in the Netherlands, where not two but ten political parties are represented in the political landscape, indicating a true continuum from left to central, and from central to rightwing parties. Participants were manipulated to use their left or right hand to respond to acronyms of political parties and Dutch broadcasting companies, or to respond with the same hand to stimuli presented on the left or right side of the monitor. Overall, the authors found that participants were faster to respond with the hand that was congruent with the political affiliation of a shown acronym of a political party than with their incongruent hand. However, the effects varied across experiments, sometimes demonstrating congruency effects only for rightwing and sometimes only for leftwing parties. In other words, the association between spatial orientation and politics affected online judgments of political acronyms, but the effects were not consistent across experiments. In a third study, researchers demonstrated that the political conceptualization of left to right is also apparent when tested with an auditory measure (Farias et al., 2013). Participants judged conservative words to be louder in the right ear than in the left ear and socialist words to be louder in the left ear than in the right ear. These studies all demonstrated an association between the abstract concept of the political right and left and the concrete concept of spatial right and left. Although the studies support the idea that the relationship between abstract concepts and concrete domains is integrated in multiple modalities, a problematic element in several of the experiments was that the congruency effects were not consistent for both leftwing and rightwing affiliations. Could this be indicative of a boundary limitation of the groundedness of abstract concepts? Not necessarily. Effects were demonstrated, even if they were asymmetrical. Also, an alternative explanation of these findings could be that participants were aware of the manipulation, which could have affected their responses. The use of the hands, ears, and visual fields could have been obvious to them as left/right manipulations and thus have given away what the experimenters were after. Another possibility is that political metaphors are more complex in countries without a political dichotomy, such as the Netherlands, because the political landscape consists of multiple parties that form a left-to-right continuum rather than a left and a right pole. The political metaphor may still be activated then, but constitute a more complex mapping with parties in the continuum and therefore yield more inconsistent results. Given these problematic aspects of the studies discussed above, a better, more subtle way to manipulate the body is needed.
A more implicit manipulation of the body may yield more consistent results and hide the true purpose of the task at the same time. A particularly effective way to do that is to make use of devices that enable such implicit manipulations. A promising line of investigation in this respect is research that uses new technology to measure changes in the body in a surreptitious manner. This technology can overcome some of the problems associated with the earlier studies on political metaphors and provide an effective tool to both manipulate and assess the activation of conceptual metaphors. THE WII BALANCE BOARD AS A TOOL TO IMPLICITLY MANIPULATE THE BODY New technologies used for leisure time activities at home, such as the LEAP-motion (i.e., an infrared device that detects and reacts upon hand movements; Weichert et al., 2013), the Wiimote (i.e., a wireless input device that uses Bluetooth technology), and the Wii Balance Board (WBB), have become increasingly popular among researchers to empirically assess behavior in the lab. Their popularity is based on the fact that most people are already familiar with them, and that they are relatively inexpensive and provide precise and useful data as outcome measures. Of these new technologies, the WBB has been used the most in research so far (see Figure 1). The board is 20.1 inches wide, 12.4 inches long, and weighs approximately eight pounds. Four transducers, one in each corner of the board, are used to assess weight distribution and to detect even very small changes in the distribution of participants' center of pressure (COP). Data are sampled at a rate of 33 Hz and the WBB connects to a PC via Bluetooth. The measurements of participants' COP produced by the WBB are as reliable and valid as those produced by other platforms commonly used to assess posture (Clark et al., 2010). In contrast to these platforms, the WBB is far less expensive and portable, and is therefore an attractive alternative for researchers interested in measuring posture. The WBB data contain the COP of every sample point in the form of X (right-left) and Y (front-back) coordinates. Positive X and Y coordinates indicate that the COP is more to the right and front than the middle of the WBB, while negative X- and Y-values indicate that the COP is more to the left and back. Depending on what a researcher wants to measure, these coordinates can be used to calculate a dependent measure in a program such as Excel or SPSS. For example, if one wants to know whether participants lean to a side, one can look at the corresponding X or Y coordinates and compare COPs across experimental conditions. Another possibility is to see whether people move more in a certain condition (stability of or sway in posture) by measuring the shifts in direction of change in COPs along one of the axes (X-axis for left-right, Y-axis for front-back). There are several ways in which the WBB has been utilized in behavioral research. These studies vary in the population being tested, from healthy younger adults (see discussion below) to healthy older adults (Koslucher et al., 2012), individuals with Autism Spectrum Disorder (Travers et al., 2013), and stroke victims (Nijboer et al., 2014). They also vary in what is being measured, such as postural stability, sway, or the influence of posture on cognitive processes, such as the activation of conceptual metaphors. Posture has been investigated as a WBB outcome measure in several studies (Eerland et al., 2011; Zwaan et al., 2012; Schneider et al., 2013).
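To make the analysis step concrete, the sketch below shows how lean and sway measures of the kind just described could be computed from WBB output. It is a minimal example rather than the procedure of any particular study: the CSV file and the column names "x" and "y" are hypothetical stand-ins for a 33-Hz COP recording.

```python
# Minimal sketch of a COP analysis; file and column names are hypothetical.
import numpy as np
import pandas as pd

def summarize_cop(csv_path: str) -> dict:
    cop = pd.read_csv(csv_path)  # one row per 33-Hz sample, columns "x", "y"
    return {
        # Mean lateral position: negative = leaning left, positive = right.
        "mean_lean_x": cop["x"].mean(),
        # Sway: mean absolute sample-to-sample shift along each axis.
        "sway_x": np.abs(np.diff(cop["x"])).mean(),
        "sway_y": np.abs(np.diff(cop["y"])).mean(),
    }

# Dependent measures are then compared across experimental conditions, e.g.:
# summarize_cop("participant01_condition_left.csv")
```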
In the study by Eerland et al. (2011), participants moved sideways in reaction to an arrow that appeared on the screen. Additionally, the researchers examined whether people leaned more forward in reaction to pleasant pictures (approach behavior) and more backward in reaction to unpleasant pictures (avoidance behavior). This turned out to be the case. In another study (Zwaan et al., 2012), participants moved sideways to indicate whether the sentence they had just read was sensible or not. Some action sentences implied a forward-leaning body posture (e.g., The man petted the little dog), while other sentences implied a backward-leaning body posture (e.g., The boy looked up at the clock tower). The hypothesis that posture would be influenced by the described actions, as assessed with the WBB, was supported. More recently, the WBB was used in a study to assess the influence of ambivalence on body movements (Schneider et al., 2013), based on the idea that when people have ambivalent feelings, they hold positive as well as negative evaluations of an object or issue. Indeed, participants were found to engage in side-to-side movements when experiencing ambivalence. These results suggest that the WBB is a useful instrument as an outcome measure that also provides very precise data for analysis. Basically, any deviation of the COP is recorded and can be included in the data set, no matter how subtle this pressure shift is. Moreover, the WBB can be used to manipulate posture. This means that it is possible to trick people into believing that they hold a certain position while they actually do not. This is accomplished by having participants think that they are standing upright while in fact they are standing slightly tilted to the right or the left. This manipulation is so subtle that participants are not aware at all that they are standing sideways instead of upright. The first study to demonstrate such an implicit activation of abstract concepts built on mental number line theory (Restle, 1970), according to which people mentally represent numbers along a line, with small numbers on the left and large numbers on the right. The idea behind the study was that having people lean slightly to the left or the right would activate this mental number line. Indeed, support was found for the idea that surreptitiously leaning to the left activated relatively smaller numbers than surreptitiously leaning to the right (Eerland et al., 2011). The second study examined the effect of an implicit body posture manipulation with the WBB, in a similar way as in the previous study but with the political affiliation metaphor along a horizontal axis (Dijkstra et al., 2012). Specifically, the subtle body manipulation on the WBB was examined to assess its effects on political party evaluations. This was done in a Dutch political party environment of 10 parties that can be placed on a left-right continuum. Just as in the Eerland et al. (2011) study, participants thought they stood upright on a WBB while in fact they stood slightly tilted to the left or the right. They were then asked to ascribe general political statements (that could not directly be attributed to one of the political parties in the Dutch House of Representatives) to one political party. The manipulated body position was expected to affect political attribution such that standing somewhat to the right would result in attribution of the statement to a political party on the right, and standing somewhat to the left in attribution to a party on the left.
The results (see Figure 2) indicated that there was an interaction of leaning direction with party attribution as expected, such that there was a congruence effect of leaning direction with left-wing party attribution. This study is a good demonstration of how an abstract metaphor that has a complex mapping with concrete associations can be activated effectively with a sensory-motor manipulation of the body. The use of the Wii Balance Board facilitated the use of the whole body, rather than parts of the body, as was the case in earlier studies on political metaphors. Moreover, the manipulation was implicit because participants were under the impression that they were standing in the middle of the balance board even though they were not. They were not aware of the fact that their body posture could and would affect their evaluations and most likely did not consciously perceive proprioceptive feedback that they were leaning to the left or the right. Instead, they relied on the visual feedback from the screen. None of the participants noticed the manipulation, since they were under the impression that they had to maintain their balance in the middle of the board. Such a manipulation makes the WBB an appropriate device for research on the implicit activation of concepts. Another important component of the study was that the political statements were general and could be attributed to either conservative, liberal, or progressive parties. The attribution was therefore affected by the manipulation of the body, not by party-based content of the statements. The task was also very different from the ones in the previous studies on the political metaphor. In the Oppenheimer and Trail (2010) study, the tasks involved a rating of the level of agreement with the leftwing or rightwing political party, whereas the tasks in the experiments of Van Elk et al. (2010) involved response times for which the answers were either correct or incorrect. In contrast, the studies conducted by Farias et al. (2013) and Dijkstra et al. (2012) had more subtle task demands, for which the answer clearly depended on the manipulation of the body and no answer could be correct. Moreover, pretests were conducted to create a reliable set of stimuli: socialist-referent words and conservative-referent words in the Farias et al. (2013) study, and a set of statements that were equally likely to be attributed to left-wing or right-wing parties in the Dijkstra et al. (2012) study. Filler questions regarding television programs were added in the latter study to steer the focus away from an exclusively political theme. We can conclude that the Nintendo Wii Balance Board (WBB) seems to be a promising tool to investigate left-to-right oriented linguistic metaphors, such as the political metaphor. A main advantage of the WBB is that people can lean one way or the other without even noticing it. This provides the opportunity to investigate other left-to-right metaphors, such as emotional valence, the mental number line, or time, while participants are not aware of the fact that their posture is being manipulated. It is entirely possible that, given neutral prompts, people might report more positive memories or judge pictures to be more positive when leaning to the right than when leaning to the left. It is conceivable that these effects are bidirectional. Previous research has demonstrated changes in body posture when participants were primed with concepts such as "pride" and "shame" (Oosterwijk et al., 2009).
Similarly, for the political metaphor, bidirectionality could be demonstrated if participants leaned to the right when primed with statements reflecting right-wing political issues and to the left when prompted with statements reflecting left-wing issues. The two studies on conceptual metaphors manipulating posture with the WBB not only provide evidence that this device can be used effectively to manipulate people's judgments, they also support the idea that even abstract conceptual metaphors are activated when body position is manipulated. Apparently, we understand abstract concepts both through concrete experiences and through learned associations, even if the experiential basis is limited and the mapping is complex. Subtle manipulations of the body, without participants being aware of them, may work as well for other conceptual metaphors that have a limited or no experiential basis. Given the promising possibilities of the WBB to manipulate posture, further empirical research could address the issue of learning. If leaning to a side can influence judgments about left/right statements because the metaphor is grounded (Dijkstra et al., 2012), then it may also work in the other direction. Perhaps learning ambiguous material can be steered a certain way by manipulating posture in a congruent direction. Leaning to the left could, for example, facilitate learning which political parties are left-wing parties. Research is needed to investigate this influence of posture on learning, because it might very well be that the subtle manipulation is too implicit to promote explicit learning. It is also interesting that the effect of the posture manipulation was only found for the actual left and right positions of parties, and not for what people thought were left and right parties (Dijkstra et al., 2012). The most logical explanation for this finding is that people had difficulties in reporting the position of a party on a complex grid (see Dijkstra et al., 2012). However, it may also point to an issue that still has to be investigated. If what people do or do not explicitly know about a subject matters for the influence of posture, this will probably have implications for studies using this manipulation to examine learning outcomes. CONCLUSIONS One of the main contributions of research on the groundedness of abstract concepts using new technologies such as the WBB is that body manipulations can be implemented in a subtle manner that does not alert participants as to what is happening or is supposed to happen. The WBB provides very precise measurements of participants' center of pressure. This can be kept constant during a manipulation by having participants look at their body as a mark on the screen, which helps them keep themselves in the required location. This affords very specific and credible feedback to participants that their body position is where it should be and how to remain in this position, even though in fact their body is tilted to the right or the left. Inclusion of these technologies in future research may be valuable because the activation of other complex metaphors could be assessed this way, both as a tool to collect posture data (along the x- and y-axes) and as a tool to manipulate posture in different ways (along the sagittal axis or by creating imbalance).
We do not claim that these abstract concepts are grounded in the sense that motor activation is necessary for understanding the concept, but research with the WBB does show that the phenomenon is more than just co-activation. When encountering an ambiguous situation, one uses all available resources to resolve the ambiguity. For instance, when reading an ambiguous political sentence (i.e., it is not clear to which party the statement belongs), people might use the unconscious proprioceptive feedback of their body when evaluating the statement and attributing it to a political party. The body clearly plays a role here, moving beyond the enrichment of those concepts by facilitating a choice within the relational context in which conceptual processing takes place. How far do these effects go? According to Maglio and Trope (2012), there are boundaries to such effects of embodiment. They stipulate that these effects only occur when a certain frame of mind is created in the participant. Participants who were manipulated to think at a higher, abstract level were less responsive to contextual bodily cues than when they were encouraged to think in a more concrete manner. The abstract thinking manipulation thus prevented the contextual proprioceptive feedback from affecting judgments. The issue we encountered in the discussion of research on political metaphors, however, was that the effects are not always consistent. The participant's frame of mind (abstract vs. concrete) did not seem to be the issue here. It was rather the effectiveness of the manipulation and possible awareness among participants that may have influenced some of the outcomes. In our view, future research should therefore focus more on examining in detail how and when this activation takes place (Willems and Francken, 2012), and for which tasks specifically. Possibly, effects are stronger when the whole body shifts to the left or the right instead of parts of the body. Future research should also investigate whether these patterns replicate for similar manipulations but different concepts, or similar concepts but different tasks. The next step would be to do this for other metaphors with complex mappings. The outcomes of these studies should not be merely an addition to the current pile of evidence, but can instead bring us closer to a deeper insight into the mechanisms underlying the grounding of abstract concepts. Sensory-motor activation is as applicable to abstract concepts as to concrete concepts, particularly for tasks that involve a certain level of ambiguity. Sensory-motor processes do not seem to merely "tweak" the results of cognitive processing but are part of the decision process that leads to a response. So far, sensory-motor grounding has been reliably demonstrated for abstract concepts. Further research could reveal the limits of embodiment or support the view that sensory-motor representations are necessary and/or sufficient for cognitive processing. Either way, it will narrow down the role of the body in conceptual processing.
2016-05-04T20:20:58.661Z
2014-08-19T00:00:00.000
{ "year": 2014, "sha1": "17aaa11932ce2517c2835fe179a5d4fee7c1ecc4", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2014.00757/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "17aaa11932ce2517c2835fe179a5d4fee7c1ecc4", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
55496516
pes2o/s2orc
v3-fos-license
Optimization principle of operating parameters of heat exchanger by using CFD simulation The design of effective heat transfer devices and the minimization of costs are important aims in industry, both for engineers and for users, owing to the wide-scale use of heat exchangers. The traditional approach to design is based on an iterative process in which design parameters are gradually changed until a satisfactory solution is achieved. The design process of the heat exchanger depends strongly on the experience of the engineer, so the use of computational software is a major advantage in terms of time. Determination of the operating parameters of the heat exchanger and the subsequent estimation of operating costs have a major impact on the expected profitability of the device. On the one hand, there are the material and production costs, which are immediately reflected in the price of the device. On the other hand, there are somewhat hidden costs connected with the economic operation of the heat exchanger. The economic balance of operation significantly affects the technical solution and accompanies the design of the heat exchanger from its inception. It is therefore important not to underestimate the choice of operating parameters. The article describes an optimization procedure for choosing cost-effective operating parameters for a simple double pipe heat exchanger by using CFD software, and a subsequent proposal to modify its design for more economical operation. Introduction Heat exchangers belong to the most important elements of energy facilities and are used in a wide range of industries [1]. They are devices for heat transfer between heat-carrying fluids. They can be classified according to numerous criteria, e.g., according to the way of use (heaters, coolers, evaporators, condensers, etc.), design arrangement (heat transfer between two or more fluids, etc.), way of heat exchange (without or with phase change), or according to the contact between the fluids (a mixer-heat exchanger has no heat exchange surface and the fluids are mixed together; a regenerator has one heat exchange surface that is alternately flowed around by the hot and cold fluid streams and uses heat accumulation, and is also referred to as a direct transfer type; finally, there is the recuperator, referred to as an indirect transfer type because a wall separates the fluid streams) [2]. The technical level of the heat exchanger construction largely determines the effectiveness and economic return of the investment. Minimization of costs is therefore an important aim for designers and, above all, for users. Optimal design and operation of the heat exchanger require an understanding of heat transfer issues as well as of the design and operational requirements. The traditional approach to the design of the heat exchanger is based on an iterative process in which design parameters are changed gradually until a satisfactory solution for the assigned specification is reached [3]. However, besides being time-consuming, these methods do not guarantee an economically optimal solution. The calculation process can be supported by specific software and, with some constraints, also by CFD (Computational Fluid Dynamics) software. The paper shows the optimization principle of the operating parameters, which is based on the condition of minimal operating cost.
Design of heat exchanger The mathematical description of some of the processes occurring in the heat exchanger is so complex that it is not possible without similarity theory and practical experience [4]. The structural design of the heat exchanger depends essentially on the experience of the engineer, and the use of computational software is a major advantage from a time-saving point of view. Typically, the geometry of the device is selected first. Subsequently, the values of the design variables are defined according to the specifications, and several mechanical and thermodynamic parameters are provided in order to obtain a satisfactory heat transfer coefficient. The engineer's individual choices follow customer requests and the iteration procedure until an acceptable design of the heat exchanger complies with the specification with a satisfactory compromise in terms of efficiency [3]. Sizing of heat exchanger The calculation method depends on whether a new heat exchanger is designed (the primary results are the design and size of the heat transfer surface) or a control calculation is performed (the size of the heat transfer surface is known and usually the outlet or inlet temperature, the transferred heat flux, or other operating parameters are calculated). The choice of a suitable type of heat exchanger precedes the sizing, which depends on many parameters [5]. The next paragraphs focus on the design calculation. The thermal calculation of the heat exchanger has a number of phases: -Gathering of initial data: kind and physical properties of the hot (heating) fluid (index 1) and the cold (heated) fluid (index 2), flow rates and temperatures of both fluids, optionally the heat exchanger performance, requirements on the pressure and the permissible pressure drop. -A draft of the heat exchanger type, its layout (parallel flow, contraflow, cross-flow) and dimensions. -The thermal calculation, which comprises the heat balance of the heat exchanger. -Technical and economic optimization and specification of the parameters so that the heat exchanger meets the economic requirements. Among the most common methods for the calculation of heat exchangers is the LMTD method (Logarithmic Mean Temperature Difference), which is based on the known inlet and outlet temperatures of both media. However, if some temperature is unknown, it is necessary to determine it iteratively. The second method, which is based on the heat exchanger effectiveness for transferring a certain amount of heat (NTU, Number of Transfer Units), is then preferable to the LMTD method. The NTU method is suitable for the control calculation and for the comparison of various types of heat exchangers [6]. Below, only the LMTD method is partly described; the NTU method is not treated in detail. Thermal balance of the heat exchanger The basic equation for the thermal calculation is the enthalpy balance of the heat exchanger. For illustration, we consider a simple recuperator, concretely the double pipe heat exchanger (Figure 1). For the layout of the heat exchanger in Figure 2, the conservation of energy for the heat exchanger is expressed as

$$\dot{H}'_1 + \dot{H}'_2 = \dot{H}''_1 + \dot{H}''_2 + \dot{Q}_{\mathrm{loss}} \quad (1)$$

where $\dot{H}$ is the enthalpy flow, W; $\dot{Q}_{\mathrm{loss}}$ is the heat loss to the environment, W; and the superscripts ′ and ″ mark the inlet and outlet of the fluid in the heat exchanger. Since the heat loss in common systems with insulation does not exceed 5% [7], it is possible to simplify the calculations by neglecting the heat loss, or it can be estimated.
Furthermore, equation (1) can be itemized into the individual enthalpy flows, $\dot{H} = \dot{m} h$ (2), which for the flow of both fluids, after rearrangement and expression through the mass flow $\dot{m}$, kg s⁻¹, and the specific enthalpy $h$, J kg⁻¹, has the form

$$\dot{m}_1 (h'_1 - h''_1) = \dot{m}_2 (h''_2 - h'_2) + \dot{Q}_{\mathrm{loss}} \quad (3)$$

If the total heat flux is denoted $\dot{Q}$, W, a simple modification of equation (3) gives the next expression (4) for the enthalpy balance of one fluid. The heat flux is given by the enthalpy loss of the hotter fluid; it causes an increase of the enthalpy of the heated fluid and an increase of the enthalpy of the environment around the heat exchanger (heat loss to the environment):

$$\dot{Q} = \dot{m}_1 c_{p1} (T'_1 - T''_1) = \dot{m}_2 c_{p2} (T''_2 - T'_2) + \dot{Q}_{\mathrm{loss}} \quad (4)$$

where the specific heat capacity $c_p$, J kg⁻¹ K⁻¹, is considered at the mean temperature of the respective fluid (hotter or colder). For the control calculation, there are three unknown variables: the outlet temperatures $T''_1$, $T''_2$ and the heat output $\dot{Q}$, W. On the contrary, in design calculations these temperatures and the heat performance are given, and it is necessary to calculate the heat exchange surface $S$, m², from the equation for the heat transfer,

$$\dot{Q} = k S \, \Delta T_{\mathrm{mean}} \quad (5)$$

where $k$ is the heat transfer coefficient, W m⁻² K⁻¹, and the heat flux $\dot{Q}$, W, can be estimated from the enthalpy balance expressed by the previous equation (4). The temperatures of both liquids change as the fluids flow through the heat exchanger, and thereby the temperature difference between them changes as well. $\Delta T_{\mathrm{mean}}$ is the mean temperature gradient, defined by the integral

$$\Delta T_{\mathrm{mean}} = \frac{1}{S} \int_S \Delta T \, \mathrm{d}S \quad (6)$$

where the temperature difference $\Delta T = T_1 - T_2$ is the local value at a given location. The mean temperature gradient valid for the heat exchanger must be obtained by integration of equation (6) over the entire surface of the heat exchanger. The result of this integration is the equation for the mean logarithmic temperature difference (valid not only for parallel flow but also for contraflow), which is denoted LMTD and defined by

$$\Delta T_{\ln} = \frac{\Delta T' - \Delta T''}{\ln \left( \Delta T' / \Delta T'' \right)} \quad (7)$$

where $\Delta T'$ and $\Delta T''$ are the extreme temperature differences (gradients) at the inlet and at the outlet of the heat exchanger, i.e., the inlet and outlet temperature differences of the fluid streams. If the temperature gradient changes only a little along the heat transfer surface, the average temperature gradient may be calculated as the arithmetic average of the two extreme temperature differences (gradients),

$$\Delta T_{\mathrm{mean}} \approx \frac{\Delta T' + \Delta T''}{2} \quad (8)$$

The mean arithmetic gradient is always greater than the mean logarithmic temperature difference; for $\Delta T' / \Delta T'' < 2$ the deviation is less than 4%. In contrast to pure parallel flow or contraflow, which includes the case of the double pipe heat exchanger, the calculation of the mean temperature gradient in heat exchangers with cross-flow or a combined flow of both fluids is difficult. Therefore, for the cases of conventional practice, the results are presented graphically or by equations, applying a correction factor to the LMTD calculated for the contraflow arrangement,

$$\Delta T_{\mathrm{mean}} = F \, \Delta T_{\ln} \quad (9)$$

The correction factor $F$ expresses, for the given structural layout, the degree of deviation from the maximum possible LMTD value. The reciprocal value of the heat transfer coefficient, $1/k$, is the total thermal resistance of the heat transfer from the hot to the cold fluid, which is expressed by equation (10). The total thermal resistance $R_c$ is calculated for a serial alignment; it is the sum of the individual thermal resistances caused by the heat transfer on both sides of the wall, the heat resistances of the layers of the heat transfer wall, and the heat resistances of fouling on both sides. For a wall composed of different layers this gives

$$\frac{1}{k} = R_c = \frac{1}{\alpha_i} + R_{f,i} + \sum_j \frac{\delta_j}{\lambda_j} + R_{f,e} + \frac{1}{\alpha_e} \quad (10)$$

where the subscripts $i$ and $e$ represent the internal and external heat exchange surface, $\delta_j$ and $\lambda_j$ are the thickness and thermal conductivity of the $j$-th wall layer, and $R_f$ denotes the fouling resistance.
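As a quick numerical illustration of relations (7) and (8), the following sketch (with hypothetical temperature differences, not values from this study) computes both means and confirms that the arithmetic mean slightly exceeds the logarithmic one:

```python
import math

def lmtd(dt_in: float, dt_out: float) -> float:
    # Logarithmic mean temperature difference from the two extreme gradients.
    if math.isclose(dt_in, dt_out):
        return dt_in  # equal gradients: the LMTD reduces to their common value
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

dt1, dt2 = 60.0, 35.0            # inlet/outlet temperature differences, K (assumed)
log_mean = lmtd(dt1, dt2)        # about 46.4 K
arith_mean = (dt1 + dt2) / 2.0   # 47.5 K, always >= the logarithmic mean
print(log_mean, arith_mean)      # ratio 60/35 < 2, so the deviation stays below 4%
```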
The heat transfer coefficient $\alpha$, W m⁻² K⁻¹, can be determined by similarity theory in the form of the Nusselt number for the given design of the heat exchanger, the hydrodynamic conditions, and the fluid characteristics. The determination of the heat transfer coefficient is quite a large task, but there are enough literature sources for the individual assessment of the different cases. From the economic point of view, the design calculation must determine the parameters that directly affect the investment and operating costs. The investment depends on the size of the heat transfer surface and on the complexity of production (type) of the heat exchanger, which is reflected in the price per m² of heat transfer surface. Operating costs include energy, maintenance, and repair services. Increasing the velocity of the hot or cold fluid increases the heat transfer coefficient $\alpha$ as well. Besides, it also allows the investment costs to be decreased owing to the size reduction of the heat transfer surface; simultaneously, however, the operating costs rise significantly owing to the increased pressure loss. In design calculations, the flow velocity (flow rate) has to be determined so that the total costs, which can be expressed as the sum of investment and operation, are minimal. In order to size the devices for the circulation of the working fluid, it is necessary to calculate the flow resistance (pressure loss) in the heat exchanger. The total pressure drop of the fluid flow is

$$\Delta p = \left( \lambda \frac{L}{d} + \sum \xi \right) \frac{\rho v^2}{2} \quad (11)$$

where $\lambda$ is the coefficient of friction, depending on the Reynolds number and the wall roughness; $L$ is the considered length of the channel through which the fluid flows, m; $d$ is the equivalent diameter (of the cross section of the channel), m; $\xi$ is the shape coefficient of resistance, which depends primarily on the shape and less on the character of the flow (Reynolds number); $v$ is the mean velocity of the flow, m s⁻¹; and $\rho$ is the density of the flowing fluid, kg m⁻³ [8]. The total demand for electric power of the circulating pump, $P_e$, W, is then determined from the pressure loss and the volumetric flow rate $\dot{V}$, m³ s⁻¹, as $P_e = \Delta p \, \dot{V}$ (pump efficiency is not included here, as noted below). Task of the heat exchanger calculation The operating efficiency of the heat exchanger is evaluated from the electricity costs associated with the forced circulation of the heat transfer liquid and from the assumed known income from the distribution of heat. Capital costs are neglected in this case, because a heat exchanger already in operation was considered and only its operating settings could be affected. It was necessary to establish the flows of both media so that the operation of the device was as economical as possible. For the heat exchanger, cold (heated) water with a temperature of 30 °C flowed into the straight tube, and hot (heating) water with an inlet temperature of 110 °C flowed through the pipe interspace. The input parameters for the boundary conditions were the flows of the heat transfer fluids or the amended geometry of the heat exchanger. The model of flow and heat transfer in the heat exchanger geometry was simulated in the commercial CFD software Ansys Fluent 15. For the evaluation of a parameter's impact, only one parameter was changed at a time while the others remained constant. The main quantity assessed from the CFD analyses was the performance of the heat exchanger, expressed as the ratio between the income from the heat distribution and the operating costs for the circulation of the heat transfer fluid. The calculation considered an electricity cost of 150 € MWh⁻¹ [9] and an income from heat distribution of 100 € MWh⁻¹ (approximately 27.80 € GJ⁻¹) [10,11,12].
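To make the cost balance concrete, the following sketch evaluates the pressure-drop relation (11), the corresponding pump power and the income/cost indicator used in this paper. All numerical values are hypothetical placeholders, not results of the study, and the pump efficiency is neglected, as stated above:

```python
import math

def pressure_drop(lam, L, d, xi_sum, rho, v):
    # Total pressure drop, Eq. (11): channel friction plus local resistances.
    return (lam * L / d + xi_sum) * rho * v**2 / 2.0

def pump_power(dp, vol_flow):
    # Electric power demand of the circulating pump, W = Pa * m^3/s.
    return dp * vol_flow

dp = pressure_drop(lam=0.03, L=2.0, d=0.02, xi_sum=1.5, rho=983.0, v=0.8)
vol_flow = 0.8 * math.pi * 0.01**2       # mean velocity times pipe cross-section
p_e = pump_power(dp, vol_flow)

Q = 5_000.0                  # transferred heat flux, W (assumed)
income = Q * 100.0 / 1e6     # EUR per hour at 100 EUR/MWh of heat
cost = p_e * 150.0 / 1e6     # EUR per hour at 150 EUR/MWh of electricity
print(f"dp = {dp:.0f} Pa, P_e = {p_e:.2f} W, indicator = {income / cost:.0f}")
```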
Principle of optimization Double pipe heat exchangers are often used in special operating conditions such as low flow volume, large temperature differences, a desired residence time, fast temperature changes or high pressure operation, and when there is a requirement for pure parallel flow or counterflow [2]. The heat exchanger consists of a simple concentric arrangement of two pipes with different diameters. One of the fluids flows in the inner tube and the second in the annulus created between the tubes. In this model case, no phase change of the heat transfer fluid (water) was assumed. The double pipe heat exchanger is one of the simplest types and was therefore selected as the example. The optimization process was focused on the adjustment of the cost-effective flow rate of the heat transfer fluids for the given dimensions of the heat exchanger and the inlet temperatures of both streams. The goal of the paper is not to describe the settings of the model in detail, because that is a very large task; therefore, the model is described only briefly. Since the model heat exchanger was symmetric, it was appropriate to model only half of the heat exchanger in order to speed up the calculation. Insignificant volumes, such as the heat exchanger walls, could be removed from the actual geometry; only the partition (the heat exchange surface) and the volumes of the liquids were left. The heat transfer through the outer walls was included in the model, but the outer wall was not physically modelled, because the temperature field across the wall thickness was not significant. The simplified computational domain was then filled with a mesh, which was finer near the walls and in areas with expected intensive flow, whereas the mesh cells were more stretched in the directions in which no substantial changes of the flow were expected. This approach contributes to a more accurate and less time-consuming calculation, and the resulting mesh quality was acceptable. For the solution, the Realizable k-epsilon turbulence model with Enhanced Wall Treatment, which describes the flow near the walls of the heat exchanger, was implemented [13]. The boundary condition at the inlet was defined by the mass flow rate in the direction normal to the boundary. The outlet boundary condition was of the pressure-outlet type. The calculation also includes the heat loss to the environment, and for the roughness it was assumed that the height of the roughness peaks does not exceed the hydrodynamic boundary layer. For the chosen, above-mentioned boundary conditions, the input values were specified at the inlet and the calculation was subsequently run. When the tasks converged, the results were plotted as temperature, velocity and pressure profiles. The flow was drawn by velocity vectors and streamlines. Subsequently, the required data were tabulated and exported for further analysis of the costs in the spreadsheet program Excel. The electricity costs did not include the efficiency of the pumps, since other parameters were also more or less estimated. The results of this process were graphs of the various analysed parameters. In the graphical data (Figure 5, Figure 6), the maximum of the efficiency curve was found, which gives the optimum flow; a minimal sketch of this curve-fitting step is given below.
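The sketch below illustrates this curve-fitting step under stated assumptions: a few efficiency values (income/cost ratios) computed at different mass flow rates are approximated by a polynomial, and the maximum of the fitted curve is taken as the optimum flow. The data points are hypothetical and are not the values behind Figures 5 and 6:

```python
import numpy as np

flows = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # kg/s, assumed
efficiency = np.array([2.1, 3.0, 3.5, 3.6, 3.3, 2.7])    # income/cost, assumed

# Fit a low-order polynomial; accuracy matters most near the maximum.
poly = np.poly1d(np.polyfit(flows, efficiency, deg=3))

# Locate the maximum of the fitted curve inside the sampled interval.
grid = np.linspace(flows.min(), flows.max(), 1000)
optimum_flow = grid[np.argmax(poly(grid))]
print(f"optimum mass flow ~ {optimum_flow:.3f} kg/s")
```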
For the determination of the mentioned variables, the decisive data were the pressure loss (for the calculation of the consumed energy) and the heat fluxes in the heat exchanger (for the determination of the heat output). In order to determine the most economically advantageous operating parameters (flows) of the fluids, the global maximum of the reference indicator (the efficiency, i.e., the ratio of energy prices) was sought by means of regression analysis. Several values, calculated numerically in the CFD software for properly selected input parameters, were approximated by a polynomial function [14]. The emphasis in the approximation was placed on the accuracy near the maximum. Greater variation at the limit values of the reference interval was then not important, because the approximation function was decisive only for the determination of the maximum; at the limit values it mostly describes the trend line. The shape of the velocity profiles shows their tendency to form a parabolic profile, since a stable profile develops at a distance of about 8 pipe diameters from the inlet. Subsequently, when the flow rate of one fluid was specified (Figure 5), the same analysis was carried out for the heated fluid in the inner pipe. For this case, the heated fluid was set to the optimum flow from the previous calculation (Figure 6). For verification of the calculated optimum flows, the CFD simulations were run again for these values. The results in the case of the parallel flow arrangement were almost identical to those of the contraflow arrangement, although the efficiency of the contraflow arrangement should be significantly better. The reason for this discrepancy was the very small difference of the outlet temperatures between the parallel flow and contraflow arrangements in Figure 7 and Figure 8. The heat transfer can be increased by adding a suitable number of ribs (Figure 11, Figure 12). However, in this respect, the structural impacts and the costs of materials and manufacturing become more significant. A more guided flow was achieved by the addition of fins in the longitudinal direction, as illustrated by the velocity profiles (Figure 13, Figure 14). Conclusions The calculation was unable to point out a more pronounced difference between the parallel flow and contraflow arrangements. The mentioned outlet temperatures showed that the contraflow arrangement is only slightly more advantageous than the parallel flow. The optimal flow rate of the heated fluid was determined, for both flow arrangements, from the maxima of polynomial functions specified by approximation of the calculated values. For the parallel flow arrangement, the optimum flow rate was approximately 0.191 kg s⁻¹, and for the contraflow arrangement the optimal flow was 0.190 kg s⁻¹, which are almost the same values. The determined flow rate in the annulus space, in the case of the contraflow arrangement of the heat exchanger, was complemented by the optimized flow in the straight (inner) pipe (0.109 kg s⁻¹). In the above-mentioned process, the best fluid flow from the economic point of view was determined. For a comprehensive analysis, other operating parameters should also be taken into account in the calculation. Similarly, it would be appropriate to assess the addition of fins, which were proposed for the intensification of heat transfer; besides the desired height, width and number of fins, the costs of material and manufacture would then also have to be assessed. Figure 1. Model of heat exchanger with computational mesh.
Figure 2. Temperature progress in parallel flow arrangement of heat exchanger. Figure 3. Lines at given distances from the inlet along which the velocity profiles were drawn; the profiles are shown in Figure 4. Figure 5. Chart for determination of the flow rate (kg s⁻¹) of the heating (hot) fluid. Figure 6. Chart for determination of the flow rate (kg s⁻¹) of the heated (cold) fluid in the parallel flow arrangement. Figure 7. Inlet and outlet temperatures (°C) of the heating (hot) fluid in both arrangements. Figure 8. Inlet and outlet temperatures (°C) of the heated (cold) fluid in both arrangements. Figure 9. Temperature field (°C) with 24 ribs, in the symmetry plane. Figure 10. Temperature field (°C) in the cross-section, approximately in the middle of the heat exchanger.
2018-12-07T00:51:11.644Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "9476915e8b3f66bee2abb7a56733a35dbe3a8cf1", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2016/09/epjconf_efm2016_02074.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9476915e8b3f66bee2abb7a56733a35dbe3a8cf1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
216446406
pes2o/s2orc
v3-fos-license
Prevalence and Serotype Determination of Streptococcus agalactiae Isolated From Non-Pregnant Women in Tehran, Iran The consequences of Streptococcus agalactiae, Group B Streptococcus (GBS), for infant disease are well documented. Many women carry this bacterium in their vagina and may transfer it to their infant during delivery, which may result in different invasive neonatal diseases. The aim of this study was to determine the prevalence of GBS and to serotype the isolates among unselected non-pregnant women who attended two gynecology clinics in Tehran. In this cross-sectional study, a total of 560 vaginal samples were collected from non-pregnant women. Following inoculation of the specimens on blood agar, standard methodology was applied for the final identification of GBS. Detected GBS isolates were further confirmed using a specific PCR directed at the dlts gene. Capsular serotyping was done using the multiplex PCR method. The chi-square method was used for statistical analysis. Fifty (8.9%) of the 560 non-pregnant women were carriers of GBS. The most common type was III (36%), followed by type II (32%), Ia (26%), and Ib (6%). The results show that the prevalence of GBS in non-pregnant women was similar to that obtained from pregnant women. In addition, serotype III was found to be the dominant type, in agreement with other investigations in the Tehran area. Therefore, vaccine design based on type III is recommended. Introduction Streptococcus agalactiae, or group B Streptococcus (GBS), is a facultative gram-positive bacterium that has recently been recognized as an important cause of disease in both neonates and adults. Reports from different sources indicate that 15-45% of women harbor GBS in their vagina and may transfer it to their newborn during delivery (1). Meningitis, sepsis, and pneumonia are the important infectious diseases that newborn babies may acquire from colonized mothers (2). Infant diseases caused by GBS may be classified into two groups: early-onset diseases (EOD, from birth to day 6) and late-onset diseases (LOD, from day 7 to 89). EOD usually manifests in the first week after birth, whereas LOD appears between the first week and the third month of life (3,4). Although different virulence factors have been recognized in GBS, the polysaccharide capsule seems to be the most important one. Based on the antigenic properties of the capsular polysaccharide (CPS), GBS has been classified into ten serotypes: Ia, Ib, and II-IX (5,6). Different investigations in Iran have revealed that serotype III is the most common GBS serotype detected in both pregnant and non-pregnant women (7). Investigations revealed that type III is the main cause of meningitis in LOD, while GBS serotype Ia is usually common in EOD. Since the distribution of GBS serotypes differs geographically, serotype determination is necessary for vaccine design in different populations (8). The aim of this study was to determine the prevalence of GBS among non-pregnant women who attended the gynecology clinics at Javaheri and Shohada hospitals in Tehran, Iran. Following isolation of GBS from the vaginal specimens, all isolates were further processed for capsular typing using multiplex PCR. Sample collection and culture A total of 560 swab samples from the vagina of non-pregnant married women aged 15-45 years, referred to the Javaheri and Shohada gynecology departments (2015-2016), were collected and subjected to culture on sheep blood agar medium (Merck, Germany).
None of the selected cases had used any antibiotic during the 2 weeks prior to sampling. Note that sampling was done by an expert obstetrician. Following overnight incubation at 37 °C, the suspected beta-hemolytic colonies were further tested for the exact recognition of GBS, including the CAMP test and hippurate hydrolysis. All isolates were then frozen at -20 °C for later type determination. Genomic DNA extraction and molecular detection of GBS The extraction of genomic DNA from S. agalactiae was performed using a bacterial extraction kit (Gene All, South Korea). In order to confirm the isolated GBS, a PCR was programmed using primers dlts-F and dlts-R (Takapouzist, Iran) (Table 1) targeting the dlts gene. The reaction mixture was prepared in a final volume of 20 µl containing 2.5 µl of each primer (final concentration 10 pmol), 10 µl of 2X PCR master mix (Amplicon, Denmark) and 3 µl of template DNA. Amplification was carried out in an automated PCR machine as follows: the reaction mixture was first heated to 94 °C for 5 min, followed by 30 cycles of 1 min at 94 °C, 1 min of annealing at 55 °C and 1 min of elongation at 72 °C. The whole process was completed with a final elongation cycle of 5 min at 72 °C. Molecular serotyping In order to perform molecular serotyping, two sets of multiplex PCR reactions were used (Table 1). Both multiplex reactions were amplified in a final volume of 20 μl containing 2 μl of water, 10 μl of 2X PCR Master Mix (Amplicon, Denmark), 5 μl of working primers (final concentration 10 pmol), and 3 μl of template DNA. The PCR consisted of denaturation at 94 °C for 5 min, followed by 35 cycles of 1 min at 94 °C for denaturation, 1 min of annealing at 49.5 °C for the first set and 60 °C for the second set, and 1 min at 72 °C for extension. The whole process was completed with a final extension at 72 °C for 5 min. The amplicons were analyzed on a 1% agarose gel and then visualized with a gel imager (Life Technologies, USA). Statistical analysis In this survey, the chi-square test was applied to evaluate the prevalence of the isolates; P≤0.05 was considered significant, and analyses were performed with SPSS 16. Results Among the 560 recto-vaginal swab samples collected from non-pregnant women, 50 (8.9%) samples were identified as GBS positive. Figure 1 shows the specific PCR for the exact detection of GBS using the dlts gene. As Table 2 indicates, the age range of the positive cases was 25-39 years; when the data were analyzed, serotype III was found to be absent in women under 25 years of age but most frequent in those aged 30-34 years (Table 2). Discussion In the present study, 560 recto-vaginal swab specimens were collected for the detection of GBS. Overall, 50 samples were found positive for GBS, reflecting a prevalence of 8.9% for this unselected population referred to two gynecology clinics in Tehran. In general, data reported by the WHO reveal that about 15-45% of women carry GBS in their genitourinary system (1). When our findings were compared to other studies of several populations in Tehran, it was found that different studies have revealed variable rates of colonization. For example, an investigation conducted by Hadavand et al. in Tehran showed that among 210 vaginal samples only 3.3% were positive for GBS (9), whereas Fatemi et al. (10), Aali et al. (11) and Jahed et al. (12) reported 20.6%, 9.2% and 5.3%, respectively (Table 3).
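The kind of chi-square comparison used here (performed by the authors in SPSS 16) can be sketched as follows. The age split in the contingency table is hypothetical and is not the study's Table 2; only the column totals are chosen to match the reported overall serotype counts (13, 3, 16 and 18 of the 50 carriers for Ia, Ib, II and III):

```python
from scipy.stats import chi2_contingency

counts = [
    [5, 1, 6, 0],    # serotypes Ia, Ib, II, III in carriers < 30 years (assumed)
    [8, 2, 10, 18],  # serotypes Ia, Ib, II, III in carriers >= 30 years (assumed)
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p <= 0.05 would be considered significant, matching the paper's criterion.
```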
Several similar investigations concerning the prevalence and type distribution of GBS performed by our microbiology team in Yazd showed the following frequencies of GBS: Absalan et al. (13) tested 250 vaginal samples and found that 19.6% of cases were GBS positive. Later, Sadeh et al. (8) reported that among 650 swab samples 15.4% were positive, and Najarian et al. (3) indicated that among 346 specimens 16.7% were positive for GBS. Our finding (a rate of 8.9%) was consistent with the experience of Aali et al. (11) but not in accordance with the others (8,13). When we reviewed the prevalence of GBS reported in other countries, such as China (14), we found considerable variation in the reported colonization rates (Table 3). In general, there is controversy regarding the prevalence rate, and there is no single explanation for it. However, geographical location, culture technique, the experience of the technical staff, the concentration of the specimen, the transfer medium, the amount of time elapsed before inoculation, and antibiotic use by the participant before sampling may all be important factors influencing the isolation of GBS (3,7,13). In another survey carried out by Beigverdi et al. (19) in Tehran, type III was predominant with 65.8%, followed by type II (14.6%), Ib (7.3%) and V (4.9%). A similar study conducted by Nahaei et al. (20) in Tabriz revealed that serotype V was the most common. This may reflect the circulation of specific clones in different geographical areas or differences in the sources of bacterial detection. Although the percentages of the serotypes isolated in the present study almost correspond with other findings reported from Tehran (19), Hamadan (21), and our previous work in Yazd (3,8), serotype V was not detected among our isolated GBS. This may be due to the limited number of cases we investigated; another possibility is that the specimens might have been collected from a group of women in a specific geographical area. When our findings were compared with studies published from other countries, we found that in Australia (22) serotypes III, Ia, and V were the most common, in Turkey (23) serotypes Ia, Ib, II, III, and IV were common, but in Japan (24) serotypes VI and VIII were predominant in the Japanese population. In conclusion, the results obtained from this study, together with many other publications, reveal that serotype III is predominant among pregnant and non-pregnant women. Since this serotype is directly related to LOD in infants, the establishment of an adequate screening program to identify and treat infected pregnant women before delivery seems critical (8).
2020-04-09T09:06:13.998Z
2020-04-07T00:00:00.000
{ "year": 2020, "sha1": "4436249291acc0a7dea0d68aee0b1973aae62c1d", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/acta.v57i9.2641", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "723e463e78e6a54821702d0ed0cb8f20f8d26e7f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4998166
pes2o/s2orc
v3-fos-license
Acid–base status and its clinical implications in critically ill patients with cirrhosis, acute-on-chronic liver failure and without liver disease Background Acid–base disturbances are frequently observed in critically ill patients at the intensive care unit. To our knowledge, the acid–base profile of patients with acute-on-chronic liver failure (ACLF) has not been evaluated and compared to critically ill patients without acute or chronic liver disease. Results One hundred and seventy-eight critically ill patients with liver cirrhosis were compared to 178 matched controls in this post hoc analysis of prospectively collected data. Patients with and without liver cirrhosis showed hyperchloremic acidosis and coexisting hypoalbuminemic alkalosis. Cirrhotic patients, especially those with ACLF, showed a marked net metabolic acidosis owing to increased lactate and unmeasured anions. This metabolic acidosis was partly antagonized by associated respiratory alkalosis, yet with progression to ACLF resulted in acidemia, which was present in 62% of patients with ACLF grade III compared to 19% in cirrhosis patients without ACLF. Acidemia and metabolic acidosis were associated with 28-day mortality in cirrhosis. Patients with pH values < 7.1 showed a 100% mortality rate. Acidosis attributable to lactate and unmeasured anions was independently associated with mortality in liver cirrhosis. Conclusions Cirrhosis and especially ACLF are associated with metabolic acidosis and acidemia owing to lactate and unmeasured anions. Acidosis and acidemia, respectively, are associated with increased 28-day mortality in liver cirrhosis. Lactate and unmeasured anions are main contributors to metabolic imbalance in cirrhosis and ACLF. Electronic supplementary material The online version of this article (10.1186/s13613-018-0391-9) contains supplementary material, which is available to authorized users. Background Derangements in acid-base balance are frequently observed in critically ill patients at the intensive care unit (ICU) and present in various patterns [1][2][3][4]. Severe acid-base disorders, especially metabolic acidosis, have been associated with increased mortality [5,6]. As a consequence, acid-base status in critically ill patients with various disease entities has been extensively studied. Yet, only a few studies assessed the impact of underlying chronic liver disease on acid-base equilibrium in critical illness [7,8]. While a balance of offsetting acidifying and alkalinizing metabolic acid-base disorders with a resulting equilibrated acid-base status has been described in stable cirrhosis [9], severe derangements with resulting net acidosis owing to hyperchloremic, dilutional and lactic acidosis were observed when cirrhosis was accompanied by critical illness [7,8]. Acute liver failure (ALF) is characterized by a different acid-base pattern with dramatically increased lactate levels [10]. The acidifying effect of this increase in lactate was neutralized by hypoalbuminemia in non-paracetamol-induced ALF [11]. Despite advances in intensive care medicine, which have led to an improved outcome over the last decade [12], mortality in cirrhotic patients admitted to the ICU is still high [13][14][15]. Measurement and knowledge of specific acid-base patterns and their implications in critically ill patients with liver cirrhosis may help to improve patient management, especially in the ICU setting [16].
However, to our knowledge, the acid-base profile of critically ill cirrhotic patients with acute-on-chronic liver failure (ACLF) has not been compared to that of critically ill patients without acute or chronic liver disease. Most information on the acid-base status of critically ill patients with cirrhosis was obtained by comparing these patients with healthy controls [8]. Yet, part of the metabolic disturbances in critically ill patients with liver cirrhosis may be attributable to critical illness per se, rather than to the presence of chronic liver disease. The aim of this study was to assess acid-base patterns of critically ill patients with liver cirrhosis and ACLF, respectively, in comparison with critically ill patients without acute or chronic liver disease.

Patients

All patients admitted to 3 medical ICUs at the Medical University of Vienna between July 2012 and August 2014 were screened for inclusion in the study. For the present study, only patients who had arterial blood samples drawn within 4 h after ICU admission were eligible for inclusion. Patients with acute liver injury in the absence of chronic liver disease were excluded. One hundred and seventy-eight patients with liver cirrhosis were identified as eligible for inclusion. The control group of 178 critically ill patients without acute or chronic liver disease was selected by propensity score matching (PSM).

All patients were screened for the presence of acute kidney injury (AKI), defined by urine output and serum creatinine according to the Kidney Disease: Improving Global Outcomes (KDIGO) Clinical Practice Guidelines for Acute Kidney Injury [19]. The presence of liver cirrhosis was defined by a combination of characteristic clinical (ascites, caput medusae, spider angiomata, etc.), laboratory and radiological findings (typical morphological changes of the liver, signs of portal hypertension, etc., in ultrasonography or computed tomography scanning), or via histology, if available. ACLF was identified and graded according to recommendations of the chronic liver failure (CLIF) consortium of the European Association for the Study of the Liver (EASL) [20]. CLIF-SOFA score [20] and CLIF-C ACLF score [21] were calculated. Septic shock was defined according to the recommendations of the Surviving Sepsis Campaign [22]. Twenty-eight-day mortality and 1-year mortality were assessed on site or by contacting the patient or the attending physician. This study is based on a post hoc analysis of prospectively collected data [23]. The Ethics Committee of the Medical University of Vienna waived the need for informed consent owing to the observational character of this study.

Sampling and blood analysis

On admission, arterial blood samples were collected from arterial lines or the femoral artery, and parameters for the assessment of acid-base status were measured immediately. Quantitative physical-chemical analysis was performed using Stewart's biophysical methods [27], modified by Figge and colleagues [28]. The apparent strong ion difference (SIDa) was calculated as

SIDa = [Na+] + [K+] + [Ca2+] + [Mg2+] - [Cl-] - [lactate] (all in mEq/l).

The effective strong ion difference (SIDe) was calculated in order to account for the role of weak acids [29]:

SIDe = 2.46 x 10^-8 x PaCO2/10^-pH + [albumin, g/l] x (0.123 x pH - 0.631) + [phosphate, mmol/l] x (0.309 x pH - 0.469).

The effect of unmeasured charges was quantified by the strong ion gap (SIG) [30]:

SIG = SIDa - SIDe.
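For readers who want to reproduce this bookkeeping, the three quantities above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' analysis code; the function names are ours, and the SIDe coefficients are the commonly cited Figge values assumed here.

```python
# Minimal sketch of the Stewart/Figge quantities defined above.
# Inputs: electrolytes in mmol/l (charges applied below), PaCO2 in mmHg,
# albumin in g/l, phosphate in mmol/l.

def sid_apparent(na, k, ca, mg, cl, lactate):
    """Apparent strong ion difference, SIDa (mEq/l)."""
    return na + k + 2.0 * ca + 2.0 * mg - cl - lactate

def sid_effective(ph, paco2, albumin, phosphate):
    """Effective strong ion difference, SIDe (mEq/l), per Figge."""
    hco3 = 2.46e-8 * paco2 / 10.0 ** (-ph)      # bicarbonate from PaCO2 and pH
    alb = albumin * (0.123 * ph - 0.631)        # albumin charge
    phos = phosphate * (0.309 * ph - 0.469)     # phosphate charge
    return hco3 + alb + phos

def strong_ion_gap(sida, side):
    """SIG = SIDa - SIDe quantifies unmeasured charges."""
    return sida - side

# Example with hypothetical values for a critically ill patient:
sida = sid_apparent(na=138, k=4.2, ca=1.1, mg=0.5, cl=104, lactate=3.0)
side = sid_effective(ph=7.31, paco2=33, albumin=24, phosphate=1.2)
print(round(strong_ion_gap(sida, side), 1))
```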
Based on the concept that BE can be altered by plasma dilution/concentration reflected by the sodium concentration (BE_Na), changes of chloride (BE_Cl), albumin (BE_Alb), lactate (BE_Lac) and unmeasured anions (BE_UMA), the respective components contributing to BE were calculated according to Gilfix et al. [31]. The detailed formulae for the BE subcomponents are shown in the "Appendix". Thus, total BE is calculated as the sum of the BE subcomponents:

BE = BE_Na + BE_Cl + BE_Alb + BE_Lac + BE_UMA.

Reference values were obtained from a historical cohort of healthy volunteers, as published elsewhere [8]. Acidemia and alkalemia were defined by pH < 7.36 and > 7.44, respectively. HCO3- < 22 and > 26 mmol/l defined metabolic acidosis and alkalosis, respectively. Respiratory acidosis and alkalosis were identified by PaCO2 > 45 and < 35 mmHg, respectively. BE_Na < -5 and > 5 mmol/l defined dilutional acidosis and alkalosis, respectively. Hyperchloremic acidosis and hypochloremic alkalosis were defined by BE_Cl < -5 and > 5 mmol/l, respectively. BE_Alb > 5 mmol/l identified hypoalbuminemic alkalosis. Lactic acidosis was defined by BE_Lac < -1.1 mmol/l (the calculated BE_Lac for lactate at the upper limit of normal), and metabolic acidosis owing to unmeasured anions by BE_UMA < -5 mmol/l.
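Collected into a single routine, these cutoffs read as follows. The function name is ours, but the thresholds are exactly those defined in this section:

```python
# Diagnostic labels from the cutoffs defined above. Units: pH (unitless),
# HCO3 (mmol/l), PaCO2 (mmHg), BE subcomponents (mmol/l).

def classify_acid_base(ph, hco3, paco2, be_na, be_cl, be_alb, be_lac, be_uma):
    findings = []
    if ph < 7.36: findings.append("acidemia")
    elif ph > 7.44: findings.append("alkalemia")
    if hco3 < 22: findings.append("metabolic acidosis")
    elif hco3 > 26: findings.append("metabolic alkalosis")
    if paco2 > 45: findings.append("respiratory acidosis")
    elif paco2 < 35: findings.append("respiratory alkalosis")
    if be_na < -5: findings.append("dilutional acidosis")
    elif be_na > 5: findings.append("dilutional alkalosis")
    if be_cl < -5: findings.append("hyperchloremic acidosis")
    elif be_cl > 5: findings.append("hypochloremic alkalosis")
    if be_alb > 5: findings.append("hypoalbuminemic alkalosis")
    if be_lac < -1.1: findings.append("lactic acidosis")
    if be_uma < -5: findings.append("acidosis owing to unmeasured anions")
    return findings
```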
Statistical analysis

Data are presented as median and interquartile range (25-75% IQR), if not otherwise specified. PSM was used to minimize the confounding effect of severity of disease on acid-base status when comparing cirrhosis to non-cirrhosis patients. One-to-one (1:1) PSM of cirrhosis versus non-cirrhosis patients was based on the following variables: SOFA score, need for mechanical ventilation and the presence of AKI. IBM SPSS 22 (with SPSS Python essentials and the FUZZY extension command) was used for PSM. The McNemar test was used for the comparison of binary variables and Wilcoxon's signed-rank test for the comparison of metric variables between cirrhosis patients and matched controls. Nonparametric one-way ANOVA (Kruskal-Wallis test) with Dunn's post hoc analysis was performed to assess differences in acid-base parameters between matched controls, cirrhosis patients without ACLF and ACLF patients. Within each group, comparisons were made using the Chi-squared test or Mann-Whitney U test, as appropriate. Spearman's rank correlation was used to assess correlations between metric variables. A receiver operating characteristic (ROC) analysis was performed, and the area under the ROC curve (AUROC) was calculated to evaluate the prognostic value of different metric variables. The impact of acid-base disorders on mortality was assessed using Cox regression. A p value < 0.05 was considered statistically significant. Statistical analysis was conducted using IBM SPSS Statistics version 22.

Patients' characteristics

One hundred and seventy-eight patients had liver cirrhosis, and 157 of these patients (88%) were admitted with ACLF. The remaining cirrhosis patients (n = 21, 12%) were admitted to the ICU due to isolated non-kidney organ failure (n = 9), isolated cerebral failure (n = 4), bleedings (n = 4), infections (n = 3) and after surgery (n = 1), all of which did not fulfill criteria for ACLF. The control group consisted of 178 critically ill patients without acute or chronic liver disease. SAPS II score and SOFA score did not differ between patients with and without cirrhosis (Table 1). Clinical and laboratory features of critically ill patients with and without cirrhosis are shown in Table 1.

Acid-base disorders in critically ill patients with and without cirrhosis

Disturbances of acid-base balance were evident in the vast majority of our critically ill patients, irrespective of cirrhosis (Tables 2, 3). Critically ill patients (irrespective of cirrhosis) showed coexisting hyperchloremic acidosis and hypoalbuminemic alkalosis, mostly antagonizing each other in their contribution to total BE. In ACLF, we observed a marked metabolic acidosis owing to increased lactate levels, unmeasured anions and (to a lesser extent) dilutional acidosis. Both BE_UMA and SIG differed significantly between critically ill patients with ACLF and those without liver disease, although the small difference in SIG may be clinically negligible (Table 2). In cirrhosis patients without ACLF, BE_UMA was significantly higher compared to patients with ACLF. The resulting metabolic acidosis in ACLF was partly compensated by coexisting respiratory alkalosis in its contribution to pH; however, the increasing net metabolic acidosis resulted in acidemia in patients with ACLF grade III (62%, Table 3). Metabolic differences between critically ill patients with and without cirrhosis tended to increase with the severity of disease, as indicated by SOFA score (Additional file 1: Figure S1). Metabolic acid-base characteristics of critically ill patients with and without liver disease are illustrated in Fig. 1 and Additional file 1: Figure S1.

Similarly, BE showed a strong association with 28-day mortality (Additional file 3: Table S1). Analysis of the BE subgroups revealed that the impact on mortality in cirrhosis was primarily caused by lactate and unmeasured anions (Table 4). This effect remained significant after correction for demographics, ACLF grade and the presence of infection/sepsis (Table 4). AUROCs for admission lactate/BE_Lac and BE_UMA in the prediction of 28-day mortality in critically ill patients with liver cirrhosis were 0.744 (95% CI 0.671-0.816) and 0.692 (95% CI 0.613-0.770), respectively.

In our matched controls, we observed no significant effect of acidemia, alkalemia, lactic acidosis and net metabolic acidosis, respectively, on 28-day mortality. Yet, pH values differed significantly between non-cirrhosis 28-day survivors and non-survivors (Additional file 3: Table S1). Acidosis attributable to unmeasured anions was associated with 28-day mortality in our propensity score-matched controls; however, BE_UMA did not differ significantly between non-cirrhotic 28-day survivors and non-survivors (Additional file 3: Table S1). Moreover, admission arterial lactate levels differed significantly between non-cirrhosis 28-day survivors and non-survivors [1.4 (IQR 0.9-2.4) mmol/l vs. 1.7 (IQR 1-4.1) mmol/l; p < 0.05]. Yet, the association between metabolic derangement and outcome was more distinct in cirrhosis patients (Additional file 3: Table S1).

Discussion

Disturbances in acid-base equilibrium are common in critical illness [16]. In this study, we demonstrate that critically ill patients with cirrhosis and ACLF, respectively, differ considerably from patients without hepatic impairment in terms of acid-base balance. In accordance with earlier reports, we observed in our cohort a marked hyperchloremic acidosis with coexisting hypoalbuminemic alkalosis [8,9,11]. This phenomenon, however, was not limited to patients with cirrhosis and should therefore not be considered an exclusive acid-base pattern of liver disease. Instead, it seems to be a characteristic pattern of critical illness per se [3]. Yet, hypoalbuminemia and the resulting alkalosis were most pronounced in patients with ACLF. However, the main distinguishing metabolic acid-base characteristic between critically ill patients with and without cirrhosis was a marked metabolic acidosis attributable to increased lactate (and unmeasured anions).
In cirrhosis, coexisting respiratory alkalosis partly compensated for the metabolic acidosis, thereby resulting in almost normal pH values. However, respiratory alkalosis failed to compensate for the net metabolic acidosis in patients with ACLF.

Increased lactate levels in critically ill patients can result from both increased production (e.g., tissue malperfusion, impaired cellular oxygen metabolism during sepsis, hypermetabolic states) and reduced lactate clearance (e.g., loss of functioning hepatocytes in acute hepatic injury or chronic liver disease) [32-34]. The liver not only is a crucial player in the disposal of lactate, but may also become a net producer of lactate, especially during hepatic parenchymal hypoxia. Although lactic acidosis has been described in the literature in critically ill patients with cirrhosis [7,8], this is the first study investigating the association of metabolic disturbances with ACLF compared to a matched cohort of critically ill patients without liver disease. Indeed, the extent of lactic acidosis was directly associated with ACLF grade. Accordingly, lactic acidosis was present in almost 80% of all patients with ACLF grade III. Moreover, lactate levels were correlated with INR and bilirubin, thereby suggesting that lactate levels are directly related to liver function. Vasopressor support and severity of disease (as reflected by SOFA score) were also significantly associated with increased lactate levels. In sum, our data suggest that a combination of hepatic impairment and tissue hypoxia may contribute to lactic acidosis in critically ill patients with liver cirrhosis.

Fig. 1 Disequilibrium in acid-base status in critically ill patients with liver cirrhosis, acute-on-chronic liver failure (ACLF) and without chronic liver disease. Results displayed as median and 95% CI; associations of base excess and its subcomponents with ACLF stage in cirrhosis patients assessed by univariate ordinal regression: BE p < 0.001, BE_Na p = 0.074, BE_Cl p = 0.728, BE_Alb p = 0.295, BE_Lac p < 0.001, BE_UMA p < 0.05. Differences between cirrhosis and control patients are illustrated in Table 2.

Great effort has been put into revealing the nature of unmeasured anions in critical illness [2,35-38]. Still, the source and clinical implications of unmeasured anions are incompletely understood [39,40]. Recently, it was shown in a large cohort of critically ill patients that increased concentrations of unmeasured anions were independently associated with increased mortality [41]. Citrate, acetate, fumarate, α-ketoglutarate and urate have been identified as potential candidates contributing to the acidosis associated with a high SIG in hemorrhagic shock [36]. Apart from states of shock, renal failure has been linked to increased levels of unmeasured anions in several studies [8,42,43]. As compared to non-ACLF cirrhosis patients, the presence of ACLF was associated with an increase in unmeasured anions, as reflected by BE_UMA and SIG. Both variables were strongly associated with acute kidney injury. Patients with liver cirrhosis are especially susceptible to renal failure [44-47], and renal impairment constitutes a central criterion for ACLF [20].

Fig. 2 Association of bicarbonate (a) and pH (b) with 28-day mortality in critically ill patients with liver cirrhosis. Black dots: observed 28-day mortality rate; gray area: 95% confidence interval. *p values calculated by Chi-square test.
In sum, our findings indicate that impairment of renal function, rather than "hepatic failure", may be responsible for the increase in levels of unmeasured anions observed in patients with ACLF.

In the present study, metabolic acidosis and acidemia, respectively, were associated with increased 28-day mortality in liver cirrhosis. Accordingly, the 28-day mortality rate was 91% in cirrhosis patients with arterial pH values < 7.2 and 86% in those with arterial HCO3- values < 15 mmol/l. Lactic acidosis and acidosis attributable to unmeasured anions were identified as main contributors to acid-base imbalance in critically ill patients with liver cirrhosis. Earlier studies have challenged the prognostic value of unmeasured anions or lactate in critically ill patients [40]. Yet, the relationship between lactate levels, unmeasured anions and mortality and poor outcome has been described repeatedly in the literature [7,8,32,33,48], and lactate levels have recently been suggested as a parameter indicating severity of disease in patients with chronic liver disease [49]. In our critically ill cirrhosis patients, we observed a dramatic independent impact of both lactate and BE_UMA on 28-day mortality. Thus, acid-base status in critically ill patients with cirrhosis and ACLF, respectively, is an early and independent predictor of outcome (Fig. 2). By contrast, acid-base status was of poor prognostic value in our propensity score-matched controls. This may be attributable to the fact that our control patients were matched to critically ill cirrhosis patients, thereby resulting in the exclusion of less severely ill non-cirrhosis patients with better acid-base profiles and lower mortality rates.

This study has strengths and limitations. First, this is a post hoc analysis; however, our study comprises structured acid-base analyses from a large cohort of critically ill patients stratified according to the presence of liver cirrhosis. Second, this study was performed in patients admitted to the ICU; thus, our findings may not entirely reflect the acid-base status of cirrhotic patients treated on normal wards. However, our study also incorporates cirrhosis patients without ACLF and patients of all ACLF categories. Third, there are pros and cons of propensity score matching. In this study, we decided to use propensity score-matched controls in order to minimize the confounding effect of severity of disease on acid-base balance. Although we were able to achieve good comparability, inherent differences between cirrhotic and non-cirrhotic patients affecting acid-base balance cannot be entirely abolished by matching procedures. Moreover, the loss of heterogeneity (by selection of the most severely ill patients) hampers survival analyses in the control group. Fourth, residual confounding is, as always, a matter of concern and cannot be entirely excluded. Future studies should confirm these results and focus on therapeutic implications for patients with liver disease at the ICU.

Conclusions

In conclusion, we could demonstrate that hyperchloremic acidosis and hypoalbuminemic alkalosis coexist in critically ill patients, including those with liver cirrhosis. In cirrhosis, but particularly in ACLF, net metabolic acidosis was caused by lactate and unmeasured anions. Lactate was linked to liver function and vasopressor use, whereas unmeasured anions were strongly related to acute kidney injury.
Metabolic differences between cirrhosis and non-cirrhosis critically ill patients increased with the severity of disease, resulting in pronounced acidemia in cirrhosis patients with ACLF. Acidemia and metabolic acidosis, respectively, were associated with poor outcome in cirrhosis patients. Lactate and BE_UMA were identified as independent predictors of 28-day mortality in critically ill patients with liver cirrhosis and ACLF.
The effect of radiation pressure on spatial distribution of dust inside HII regions

We investigate the impact of radiation pressure on the spatial dust distribution inside H ii regions using one-dimensional radiation hydrodynamic simulations, which include absorption and re-emission of photons by dust. In order to investigate grain size effects as well, we introduce two additional fluid components describing large and small dust grains in the simulations. The relative velocity between dust and gas strongly depends on the drag force; we include both the collisional drag force and the Coulomb drag force. We find that, in a compact H ii region, a dust cavity region is formed by radiation pressure. The resulting dust cavity sizes (~0.2 pc) agree with observational estimates reasonably well. Since dust inside an H ii region is strongly charged, the relative velocity between dust and gas is mainly determined by the Coulomb drag force, whose strength is about two orders of magnitude larger than that of the collisional drag force. In addition, in a cloud of mass 10^5 M⊙, we find that radiation pressure changes the grain size distribution inside H ii regions. Since large (0.1 µm) dust grains are accelerated more efficiently than small (0.01 µm) grains, the large-to-small grain mass ratio becomes smaller by an order of magnitude compared with the initial one. The resulting dust size distributions depend on the luminosity of the radiation source. The large and small grain segregation becomes weaker when we assume a stronger radiation source, since dust grain charges become larger under stronger radiation and hence the Coulomb drag force becomes stronger.

INTRODUCTION

Radiation from young massive stars plays a crucial role in star-forming regions, and its effect on the spatial dust distribution inside H ii regions is also non-negligible. O'Dell & Hubbard (1965) first observed dust inside an H ii region, and many other observations found dust in H ii regions (O'Dell et al. 1966; Ishida & Kawajiri 1968; Harper & Low 1971). O'Dell & Hubbard (1965) observationally estimated the distribution of dust inside H ii regions, concluding that the gas-to-dust mass ratio decreases as a function of distance from the centre of the nebulae. Nakano et al. (1983) and Chini et al. (1987) observationally suggested the existence of dust cavity regions. There have been some theoretical attempts to reveal the dust distribution inside H ii regions (Mathews 1967; Gail & Sedlmayr 1979a,b). Gail & Sedlmayr (1979b) suggested that a dust cavity can be created by radiation pressure.

Radiation pressure may also produce spatial variations in the grain size distribution inside H ii regions, as suggested by recent observational data of IR bubbles. From the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE; Benjamin et al. 2003), Churchwell et al. (2006) found that about 25% of IR bubbles are associated with known H ii regions, and they claimed that the IR bubbles are primarily formed around hot young stars. Deharveng et al. (2010) then pointed out that 86% of IR bubbles are associated with ionized gas. Since Churchwell et al. (2006) missed the large (> 10 arcmin) and small (< 2 arcmin) bubbles, Simpson et al. (2012) presented a new catalogue of 5106 IR bubbles. Paladini et al. (2012) found that the peak of 250 µm continuum emission appears further from the radiation source than that of 8 µm continuum emission.
Since they assumed that 250 µm continuum emission traces the big grains (BGs) and 8 µm continuum emission traces the polycyclic aromatic hydrocarbons (PAHs), they argued that the dust size distribution depends on the distance from a radiation source.

Inoue (2002) argued for the presence of a central dust-depleted region (a dust cavity) in compact/ultra-compact H ii regions in the Galaxy by comparing the observed infrared-to-radio flux ratios with a simple spherical radiation transfer model. The dust cavity radius is estimated to be 30% of the Strömgren radius on average, which is too large to be explained by dust sublimation. The formation mechanism of the cavity is still an open question, while the radiation pressure and/or the stellar wind from the exciting stars have been suggested as responsible mechanisms. We will examine whether the radiation pressure can produce the cavity in this paper. By considering the effect of radiation pressure on dust and assuming steady H ii regions, Draine (2011) theoretically explained the dust cavity size that Inoue (2002) estimated from observational data. Akimkin et al. (2015, 2017) estimated the dust size distribution by solving the motion of dust and gas separately, and they concluded that radiation pressure preferentially removes large dust from H ii regions. Their simulations have, however, assumed a single OB star as a radiation source. As mentioned by Akimkin et al. (2015), the grain electric potential is the main factor that affects the dust size distribution. If we assume a stronger radiation source, such as a star cluster, dust would be more strongly charged and their conclusions might change.

In this paper, we investigate the effect of radiation pressure on the spatial dust distribution inside compact H ii regions and compare it with the observational estimates (Inoue 2002). In addition, we perform multi-dust-size simulations and study the effect of the luminosity of the radiation source on the dust size distribution inside H ii regions. The structure of this paper is as follows: In Section 2, we describe our simulations. In Section 3, we describe our simulation setup. In Section 4, we present simulation results. In Section 5, we discuss the results and present our conclusions.

METHODS

We place a radiation source at the centre of a spherically symmetric gas distribution. The species we include in our simulations are H i, H ii, He i, He ii, He iii, electrons, and dust. We assume the dust-to-gas mass ratio to be 6.7 x 10^-3, corresponding to half of the abundance of elements heavier than He (so-called 'metal') in the Sun (Asplund et al. 2009). We neglect gas-phase metal elements in this paper. We solve the radiation hydrodynamic equations at each timestep. The methods we use for radiation transfer, chemical reactions, radiative heating, cooling and time stepping are the same as in Ishiki & Okamoto (2017, hereafter paper I).

Dust model

We include absorption and thermal emission of photons by dust grains in our simulations. To convert the dust mass density to the grain number density, we assume graphite grains whose material density is 2.26 g cm^-3 (Draine & Salpeter 1979). We employ the cross-sections of dust in Draine & Lee (1984) and Laor & Draine (1993). The dust sizes we assume are 0.1 µm and 0.01 µm. The dust temperature is determined by radiative equilibrium, and is thus independent of the gas temperature.
We assume that the dust sublimation temperature is 1500 K; however, dust is never heated to this temperature in our simulations. We do not include photon scattering by dust grains for simplicity.

Grain electric potential

In our simulations, we solve the hydrodynamics including the Coulomb drag force, which depends on the grain electric potential. In order to determine the grain electric potential, we consider the following processes: primary photoelectric emission, Auger electron emission, secondary electron emission, and electron and ion collisions (Weingartner & Draine 2001; Weingartner et al. 2006). The effect of Auger electron emission and secondary electron emission is, however, almost negligible in our simulations, because high-energy photons (> 10^2 eV) responsible for these two processes are negligible in the radiation sources considered in this paper. Since the time scale of the dust charging processes is very small (≪ 1 yr), we integrate the equation of grain electric potential implicitly.

Dust drag force

In our simulations, we calculate the drag force F_drag on a dust grain of charge Z_d and radius a_d following Draine & Salpeter (1979), in terms of s_i ≡ m_i v^2/(2kT_g), where k is the Boltzmann constant, T_g is the temperature of the gas, n_i is the number density of the ith gas species, n_e is the number density of electrons, z_i is the charge of the ith gas species (i = H i, H ii, He i, He ii, He iii), and m_i is the mass of the ith species.

Dust and gas dynamics

In this section we describe the procedure used to solve the set of hydrodynamic equations for gas and dust. In these equations, ρ_g is the mass density of gas, ρ_d is the mass density of dust, v_g is the velocity of gas, v_d is the velocity of dust, a_gra is the gravitational acceleration, f_rad,g is the radiation pressure gradient force on gas, f_rad,d is the radiation pressure gradient force on dust, P_g is the gas pressure, e_g is the internal energy of gas, h_g is the enthalpy of gas, and K_d is the drag coefficient between gas and dust, which is defined in terms of the number density of dust grains, n_d.

In order to solve the dust drag force stably, we use an operator-split algorithm for the momentum equations, written in terms of the time step Δt, the mass density ρ_i^t and momentum p_i^t of the ith species at time t, the internal energy of gas e_g^t at time t, the advection F_X,i of the physical quantity X of the ith species, the force on dust f_d = f_rad,d, the force on gas f_g = f_rad,g − ∂P_g/∂x, and the inverse of the drag stopping time, t_d. Equation (2), which determines the relative velocity between dust and gas, is the exact solution of the corresponding coupled momentum-exchange equations.

Momentum advection and the other hydrodynamic equations are solved using AUSM+ (Liou 1996). We solve the hydrodynamics with second-order accuracy in space and time. In order to prevent the cell density from becoming zero or negative, we set a minimum number density, n_H = 10^-13 cm^-3. We have confirmed that our results are not sensitive to the choice of the threshold density as long as the threshold density is sufficiently low. In order to investigate whether our method is reliable, we perform shock tube tests in Appendix A. In Appendix B, we describe how we deal with dust grains of two sizes.
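To illustrate the stable drag update just described, the following minimal Python sketch advances the gas and dust velocities through the drag coupling exactly, by relaxing the relative velocity toward its terminal value on the stopping time. The function name and the expression for t_d are our own reading of the K_d(v_d − v_g) coupling, not the authors' code.

```python
import numpy as np

# Exact (unconditionally stable) drag update: the centre-of-mass momentum
# is advanced with the total force, while the relative velocity relaxes
# exponentially toward its terminal value on the stopping time t_d.

def drag_update(rho_g, rho_d, v_g, v_d, f_g, f_d, K_d, dt):
    inv_td = K_d * (rho_g + rho_d) / (rho_g * rho_d)   # 1/t_d (assumed form)
    rho_tot = rho_g + rho_d
    a_com = (f_g + f_d) / rho_tot                      # centre-of-mass acceleration
    v_com = (rho_g * v_g + rho_d * v_d) / rho_tot + a_com * dt
    v_term = (f_d / rho_d - f_g / rho_g) / inv_td      # terminal relative velocity
    decay = np.exp(-inv_td * dt)                       # exact even for dt < t_d
    v_rel = v_term + ((v_d - v_g) - v_term) * decay
    v_g_new = v_com - rho_d / rho_tot * v_rel
    v_d_new = v_com + rho_g / rho_tot * v_rel
    return v_g_new, v_d_new
```

Because the relaxation factor is an exponential, the update remains well behaved whether Δt is larger or smaller than t_d, which is the property exercised by the shock tube tests in Appendix A.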
SIMULATION SETUP

In the first simulation, in order to investigate whether our simulation gives a result consistent with the observational estimate for compact/ultra-compact H ii regions (Inoue 2002), we model a constant-density cloud of hydrogen number density 4 x 10^5 cm^-3 and radius 1.2 pc. As a radiation source, we place a single star (i.e. a black body) at the centre of the sphere. Since we are interested in the formation of a dust cavity, we neglect gravity, which does not affect the relative velocity between dust grains and gas (see equation (2)). We assume a single dust grain size in this simulation.

In the second set of simulations, in order to investigate the effect of radiation pressure on the dust grain size distribution inside a large gas cloud, we model a cloud as a Bonnor-Ebert sphere of mass 10^5 M⊙ and radius 17 pc. As the radiation source, we consider a single star (black body, BB) or a star cluster (a simple stellar population, SSP), and we change the luminosity of the radiation source to investigate the dependence of the dust size distribution on the luminosity of the source. We compute its luminosity and spectral-energy distribution as a function of time by using a population synthesis code, PÉGASE.2 (Fioc & Rocca-Volmerange 1997, 1999), assuming the Salpeter initial mass function (Salpeter 1955) and solar metallicity. We set the mass range of the initial mass function to be 0.1 to 120 M⊙.

Materials at radius r feel the radial gravitational acceleration

a_gra(r) = -G [ M(<r)/r^2 + M_star/(r + r_soft)^2 ],

where M(<r) represents the total mass of gas inside r and M_star is the mass of the central radiation source, which is 50 M⊙ for the single-star case and 2 x 10^3 or 2 x 10^4 M⊙ for the two star cluster cases. Since the gravity from the radiation source has a non-negligible effect on the simulation results and causes numerical instability in the case of the SSP, we need to introduce a softening length, r_soft, which we set to 0.5 pc for the SSP. Since the gravity from a single star has a negligible effect on the simulation results, we set r_soft = 0 pc for the single star.

Table 1. Initial conditions and numerical setup for the simulations. The radius of each cloud is shown as r_cloud. The number densities n_H, n_He, and n_d indicate the initial number densities of hydrogen, helium, and dust in the innermost cell, respectively. The spatial distribution of dust and gas is indicated by 'C' (constant density) or 'BE' (Bonnor-Ebert sphere) in the 7th column. The spectrum of a radiation source is shown in the 8th column, where 'SSP' and 'BB' respectively indicate a simple stellar population with solar metallicity and a black body spectrum of given temperature. Ṅ_ion represents the number of ionizing photons emitted from a radiation source per unit time. The initial temperatures of gas and dust are represented by T_g and T_d, respectively. The mass of a central radiation source is indicated by M_star. The number of dust sizes is shown in the 13th column. When 'Gravity' is on/off, we include/ignore the gravitational force in the simulations.

Following the dust size distribution of Mathis et al. (1977), the so-called MRN distribution, we assume two dust sizes in these simulations. We assume the initial number ratio of large to small dust to be n_d,Large : n_d,Small = 1 : 10^2.5, where n_d,Large and n_d,Small are the number densities of dust grains of 0.1 µm and 0.01 µm in size, respectively. The details of the initial conditions are listed in Table 1. We use linearly spaced 128 meshes in the radial direction, 128 meshes in the angular direction, and 256 meshes in the frequency direction in all simulations to solve the radiation hydrodynamics.

Dust cavity radius

We present density, gas temperature, dust-to-gas mass ratio, grain electric potential (V_d ≡ eZ_d/a_d), and relative velocity between dust and gas as functions of radius in Fig. 1.
In the top panels, the hydrogen number density is indicated by the red solid line, and the number density of H ii is indicated by the blue dash-dotted line. The initial state of the simulation is shown by the black dotted line. The average electron number density within the H ii region, n̄_e, the H ii region radius, r_HII, the dust cavity radius, r_d, and the ratio of the dust cavity radius to the Strömgren radius, y_d, obtained by our simulation (t = 0.42 Myr), together with the observational estimates, are shown in Table 2. We find that our simulation results are in broad agreement with the observational estimate; the dust cavities, hence, could be created by radiation pressure. The parameter y_d obtained by the simulation is somewhat smaller than the observational estimate. However, we could find a better agreement if we tuned the initial conditions, such as the gas density. In addition, the agreement would be better if we included the effect of stellar winds, which is neglected in this paper.

Since dust inside the H ii region is strongly charged, the relative velocity between dust and gas is determined by the Coulomb drag force, whose magnitude is about two orders of magnitude larger than that of the collisional drag force. The relative velocity thus becomes largest where the dust charge is nearly neutral. The grain electric potential gradually decreases with radius and then suddenly drops to a negative value. Near the ionization front, the number of ionizing photons decreases and hence collisional charging becomes important; this is the reason behind the sudden decrease of the grain electric potential. In the neutral region, there are no photons able to ionize the gas and hence there are few electrons that collide with dust grains. On the other hand, there are photons that photoelectrically charge dust grains. Therefore, the grain electric potential becomes positive again just outside the H ii region. Further out, the UV photons are consumed and electron collisional charging becomes dominant again in the neutral region.

Spatial distribution of large dust grains and small dust grains

We present densities, gas temperature, dust-to-gas mass ratios for large and small grains, large-dust-to-small-dust mass ratios (ρ_d,Large/ρ_d,Small), the grain electric potential, and the relative velocity between dust and gas as functions of radius in Fig. 2. In order to compare the simulation results on the dust size distribution with each other, we present the results at the time when the shock front reaches ~15 pc. In order to study the dependence of the dust size distribution on time and on the luminosity of the radiation source, we also present the simulation result of Cloud 2 at t = 1.1 Myr, the same irradiation time as Cloud 4. In the top panels, the hydrogen number density is indicated by the red solid lines and that of H ii by the blue dot-dashed lines. The initial states of the simulations are shown by black dotted lines. In the fifth row, the charges of dust grains with sizes 0.1 µm and 0.01 µm are indicated by the red solid and blue dot-dashed lines, respectively; the black dotted lines show the initial profiles (i.e. 0 V). In the bottom panels, the relative velocity between dust grains with size 0.1 µm and gas and between dust grains with size 0.01 µm and gas are indicated by the red solid and blue dot-dashed lines, respectively. Note that the radiation source becomes stronger from Cloud 2 to Cloud 4.
We find that radiation pressure affects the dust distribution within an H ii region depending on the grain size. In Fig. 2, we divide the profiles into the following four regions:

(a) From the central part, radiation pressure removes both large and small dust grains and creates a dust cavity (the yellow shaded region).

(b) Within the H ii region, ρ_d,Large/ρ_d,Small has a peak. Between region 'a' and this peak, there is a region where ρ_d,Large/ρ_d,Small takes its local minimum value (the cyan shaded region), for example, at r ~ 4 pc in Cloud 2 at t = 2.9 Myr.

(c) The region that contains the peak mentioned above is shaded magenta.

(d) ρ_d,Large/ρ_d,Small is also reduced just behind the ionization front (the gray shaded region), for example, at r ~ 6 pc in Cloud 2 at t = 2.9 Myr.

Figure 1. Density (top row), gas temperature (second row), dust-to-gas mass ratio (third row), grain electric potential (fourth row), and relative velocity between dust and gas (bottom row) profiles at t = 0.42 Myr. The black dotted lines show the initial profiles. The red solid lines represent the simulation results. The blue dashed line in the top panel shows the ionized hydrogen density profile.

Table 2. Comparison between the simulation results (t = 0.42 Myr) and the observational estimates. The number density of ionized electrons inside the H ii region is represented by n̄_e. The number of ionizing photons emitted from the radiation source per unit time is represented by Ṅ_ion. The radius of the H ii region is represented by r_HII, and the dust cavity radius by r_d. The parameter y_d is defined by y_d ≡ r_d/R_St, where R_St is the Strömgren radius. Since, observationally, the number density is derived from the column density, the electron number density in the simulation is defined by n̄_e ≡ (1/r_HII) ∫_0^{r_HII} n_e dx. In addition, in order to match the definition of r_d with Inoue (2002), the dust cavity radius is defined in terms of the initial dust-to-gas mass ratio. Values from Inoue (2002), listed in the order n̄_e, Ṅ_ion, r_HII, r_d, y_d: 1200 ± 400, 6.8 ± 3.9, 0.72 ± 0.098, 0.28 ± 0.13, 0.30 ± 0.12.

We find that the dust cavity radius becomes larger as the radiation source becomes brighter (region 'a'). The reasons are as follows: the grain electric potential of dust grains of the same size within r = 2 pc is almost the same among all simulations, while the number density of the gas becomes smaller for a stronger radiation source. Since the dust drag force strongly depends on the grain electric potential, the number density of gas, and the radiation pressure on dust, the relative velocity between dust and gas becomes larger for the brighter source.

In regions 'b' and 'd', the ratio ρ_d,Large/ρ_d,Small is decreased from the initial condition when the radiation source is a single OB star (Cloud 2). Except at the ionization front (vertical brown dashed lines), which is contained in region 'd', radiation pressure preferentially removes large dust grains from these regions. The photoelectric yield of the large dust grains is smaller than that of the small dust grains, and hence the grain electric potential of the large dust grains becomes smaller than that of the small dust grains. The Coulomb drag between large dust grains and gas therefore becomes weaker than that between small dust grains and gas. On the other hand, since Cloud 4 has the strongest radiation source, which makes the grain electric potentials the largest among the simulations, the dust size segregation in regions 'b' and 'd' is less prominent.
Even when we compare Clouds 2 and 4 at the same irradiation time, t = 1.1 Myr, the dust size distributions inside the H ii regions are different. The luminosity of the radiation source must therefore be the main cause of the dust size segregation.

The ratio ρ_d,Large/ρ_d,Small in all simulations has a peak in region 'c'. Since dust grains have a large negative charge in regions 'c' and 'd', the Coulomb drag force between dust and gas is strong and hence dust and gas are tightly coupled to each other. Large dust grains are, therefore, removed from regions 'a' and 'b' and gathered in region 'c'.

At the ionization front and the shock front (vertical green dot-dot-dashed lines), the relative velocity v_d − v_g has downward peaks. In these fronts, the gas pressure force exceeds the radiation pressure force. Since the dust drag time depends on the dust grain size, the dust-gas relative velocity also depends on the grain size. As a result, ρ_d,Large/ρ_d,Small is slightly reduced in these fronts.

DISCUSSION AND CONCLUSIONS

We have investigated radiation feedback in dusty clouds by one-dimensional multi-fluid hydrodynamic simulations. In order to study the spatial dust distribution inside H ii regions, we solve gas and dust motion self-consistently. We also investigate the dust size distribution within H ii regions by considering dust grains with two different sizes.

We find that radiation pressure creates dust cavity regions. We confirm that the size of the dust cavity region broadly agrees with the observational estimate (Inoue 2002). We also find that radiation pressure preferentially removes large dust from H ii regions in the case of a single OB star. This result is almost the same as in Akimkin et al. (2015). The dust size distribution is, however, less affected when the radiation source is a star cluster, in other words, a more luminous case. The resulting dust size distributions largely depend on the luminosity of the radiation source.

We assume the dust is graphite. There are, however, other forms of dust such as silicate. Since the photoelectric yield and the absorption coefficient depend on the dust model, the spatial distribution of dust grains may differ when we use a different dust model. For example, since silicate has a larger work function and a smaller absorption coefficient than graphite, the cavity size in the silicate case may become larger than that in the graphite case (see Akimkin et al. 2015, 2017 for details).

In our simulations, we neglect the effect of sputtering, which changes the dust grain size. We estimate this effect according to Nozawa et al. (2006) and confirm that sputtering is negligible in our simulations. However, if we consider smaller dust grains, we may have to include sputtering.

APPENDIX A: SHOCK TUBE TESTS

In order to investigate whether our method is reliable, we perform shock tube tests. If we assumed a dust-to-gas mass ratio of 6.7 x 10^-3 (the value used in our simulations), the effect of the dust would be almost negligible in the shock tube tests and we would not be able to assess whether the numerical code is reliable; we therefore assume a dust-to-gas mass ratio of 1 in the shock tube tests. The initial condition of the shock tube problem is specified in terms of the heat capacity ratio, γ. Since the analytic solutions are known for K_d = 0 and ∞, we perform test calculations for K_d = 0 and K_d = 10^10 (for which Δt_sim ≫ t_d), where Δt_sim is the time scale of the shock tube problem and t_d is the drag stopping time.
We use linearly spaced 400 meshes between x = 0 and 1. The time steps we use for these simulations are Δt = 2.5 x 10^-4 for K_d = 0 and Δt = 4.2 x 10^-4 for K_d = 10^10. The results are shown in Fig. A1. We confirm that the numerical results agree with the analytic solutions.

APPENDIX B: DUST GRAINS WITH TWO SIZES AND GAS DYNAMICS

In order to investigate the spatial variation of the grain size distribution inside H ii regions, we solve the hydrodynamic equations for gas and for dust grains with two sizes (dust-1 and dust-2). In these equations, ρ_d1 and ρ_d2 are the mass densities of dust-1 and dust-2, v_d1 and v_d2 are their velocities, f_rad,d1 and f_rad,d2 are the radiation pressure gradient forces on dust-1 and dust-2, and K_d1 and K_d2 are the drag coefficients between gas and dust-1 and between gas and dust-2, respectively.

In order to solve the dust drag force stably, we use an algorithm for the momentum equations written in terms of the forces on the two dust fluids, f_d1 = f_rad,d1 and f_d2 = f_rad,d2, and auxiliary quantities x and y built from the drag coefficients. As in Section 2.4.1, in order to determine the relative velocity between gas and dust, we use equation (B2), the exact solution of the coupled momentum-exchange equations (B3). In order to solve the momentum equations, we therefore first solve the momentum advection (B1), and then apply the exact solution of equation (B3) via equation (B2). In the case of |x|Δt ≪ 1 or |y|Δt ≪ 1, we use the Taylor expansions e^{xΔt} ≈ 1 + xΔt and e^{yΔt} ≈ 1 + yΔt to prevent the numerical error in calculating (e^{xΔt} − 1)/x from becoming too large.

APPENDIX C: THE TERMINAL VELOCITY APPROXIMATION

We here show that the terminal velocity approximation may give an unphysical result when the simulation time step Δt is shorter than the drag stopping time t_d. In order to derive the dust and gas velocities, we have used equation (2). On the other hand, Akimkin et al. (2017) used the terminal velocity approximation, under which equation (2) transforms into the approximate form (C1). The advantage of equation (2) is that it is accurate even for Δt < t_d. In contrast, equation (C1) becomes inaccurate for Δt ≲ t_d, since Δt ≫ t_d is required for the terminal velocity approximation to be valid. For example, the direction of f_d acting on the gas and that of f_g acting on the dust in equation (C1) become opposite for Δt < t_d.

We perform simulations using equation (C1) instead of equation (2) and compare the results. The simulation results do not change greatly for Clouds 2, 3, and 4. The simulation of Cloud 1, however, crashed, since the time step satisfied Δt < t_d at some steps. The relation between Δt and t_d becomes Δt < t_d when the drag stopping time t_d is larger than the chemical timestep Δt_chem or the timestep Δt_CFL defined by the Courant-Friedrichs-Lewy condition. The chemical timestep is defined in equation (7) of paper I. In Fig. C1, we present the condition t_CFL (≡ αΔx/v) > t_d in the case of the intergalactic medium (IGM), the H ii region, and the H i region, where α is a constant (we assume α = 0.1), Δx is the mesh size, and v is the velocity. The details of the numerical setup for the IGM, the H ii region, and the H i region are listed in Table C1. The green, red, and blue hatched regions represent the condition t_CFL > t_d for the IGM, the H ii region, and the H i region, respectively.
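As a concrete illustration of this timestep bookkeeping, the following sketch flags cells where the terminal-velocity approximation would be applied outside its regime of validity. The expression for t_d matches the drag-update sketch in Section 2 and is our assumption about its exact form.

```python
# Flag cells where the terminal-velocity approximation (C1) is unsafe,
# i.e. where the CFL time step would undercut the drag stopping time.

def terminal_velocity_safe(alpha, dx, v, K_d, rho_g, rho_d):
    t_cfl = alpha * dx / abs(v)                      # CFL time step
    t_d = rho_g * rho_d / (K_d * (rho_g + rho_d))    # drag stopping time (assumed)
    return t_cfl > t_d   # True: (C1) is applicable in this cell
```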
Table C1. Numerical setup for the IGM, the H ii region, and the H i region. The number densities of hydrogen and ionized hydrogen are represented by n_H and n_HII. The temperature of the gas is represented by T_g. The radius of a grain is represented by a_dust. The grain electric potential of dust grains is represented by V_d. The relative velocity between a dust grain and gas is represented by Δv. (Columns: IGM, H ii region, H i region.)

Figure C1. The green, red, and blue hatched regions represent the condition t_CFL > t_d for the IGM, the H ii region, and the H i region, respectively.

If the relation between t_CFL and t_d becomes t_CFL < t_d, the simulation may become unstable.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
Potent CYP3A4 Inhibitors Derived from Dillapiol and Sesamol

Synthesis of 50 analogues of the natural insecticide synergists, dillapiol and sesamol, is reported. These were evaluated as potential insecticide synergists based on their inhibition of human CYP3A4. The most potent inhibitors have a relatively large hydrophobic substituent at either position 5 or 6 of these molecules. For example, 5-(benzyloxy)-6-(3-(phenylsulfonyl)propyl)benzo[d][1,3]dioxole (18) and the diphenyl acetate of (6,7-dimethoxybenzo[d][1,3]dioxol-5-yl)propan-1-ol (5n) show inhibitory concentrations for 50% activity (IC50) of 0.086 and 0.2 μM, respectively. These compounds are 106 and 46 times more potent than dillapiol, whose IC50 for the inhibition of CYP3A4 is 9.2 μM. The ortho-chloro analogue (8f), whose activity is 86 times the activity of dillapiol, is the most potent of the fourteen 5-(benzyloxy)-6-(2-propenyl)benzo[d][1,3]dioxoles prepared for this study.

■ INTRODUCTION

In our study of Mesoamerican and African traditional medicines, 1,2 we have observed the use of plants, including the genus Piper, as nontargeted adjuvant agents combined with other targeted plant therapies. Extracts of these plants were found to be potent inhibitors of enzymes of metabolism including CYP3A4, 3A5, and 3A7, which suggests that they could inhibit phase I metabolism of conventional drugs in vivo as well as act as pharmacoenhancers in herbal mixtures. An active inhibitory principle of the neotropical Piper aduncum was identified as the neolignan dillapiol. Dillapiol is a potent inhibitor of cytochrome CYP3A4. 3 Pharmacoenhancement by dillapiol was demonstrated in mice acutely infected with Plasmodium berghei, where we found that dillapiol has no antimalarial activity but greatly enhanced the plasma levels and efficacy of the antimalarial compound gedunin. 4 The synergistic potential of dillapiol combined with pyrethroids against mosquitoes has also been described. 5

Dillapiol has also long been known as an insecticide synergist. A recent study reported that dillapiol effectively synergized the efficacy of pyrethrin in resistant strains of the Colorado potato beetle. 6 A pilot study of a small set of dillapiol analogues with the botanical larvicide α-terthienyl showed that modification of the molecule could lead to more effective synergists, but the quantitative structure-activity relationship (QSAR) models were not very robust because of uncertainties in the LC50 (lethal concentration) values. 7

Piperonyl butoxide (PBO) has been used as an insecticide synergist for more than 60 years and is today by far the most commonly used insecticide synergist. Typical databases indicate that it is a component in more than 1500 products where the active ingredient is either a synthetic or naturally occurring insecticide. 8 The ratio of PBO to active ingredient is variable depending on the application; it can range from 3:1 to 20:1, with the typical ratio being 5:1. Although PBO is generally viewed as nontoxic to humans and assigned to category IV by the US EPA, recent reports focusing on neurological development indicate potential problems and have generated some concerns. 8-10 The European Union has recognized the need for improved insecticide synergists and announced in 2013 an award of 1 million Euros to a consortium of companies and research organizations to "develop insecticide synergists for agricultural, household, and public health use based on the knowledge of the interaction of PBO with metabolic enzymes in the insect pests". 11
To gain further insight into dillapiol and its analogues as insecticide synergists, an expanded series of dillapiol analogues has been prepared for the present study by exploring a range of modifications of the parent molecule. Sesamol was also used as a starting material for the preparation of related derivatives missing one of the two methoxy groups present in dillapiol and its analogues. All analogues were evaluated in a CYP3A4 inhibition assay using a highly uniform and purified cloned commercial enzyme preparation. The inhibitory concentrations for 50% activity (IC50 values) obtained from this method had tight confidence intervals (CIs). As the human CYP3A4 enzyme has considerable similarity with the insect versions, 12 the results from these studies should be translatable to the application of these compounds as potential insecticide synergists. Indeed, we have shown this to be the case. 13 A number of ethers derived from sesamol which are significantly more potent CYP3A4 inhibitors than dillapiol have been evaluated as potential insecticide synergists of different insecticides by BASF scientists in various locations throughout the world. The results were sufficiently promising and have prompted a European patent application. 14

■ MATERIALS AND METHODS

CYP3A4 Inhibition Assay. Enzyme inhibition assays were conducted with cytochrome P450-BD Gentest CYP3A4 (City, State) with the CytoFluor 4000 fluorescence measurement system (Applied Biosystems, Foster City, CA) as described by Budzinski et al. 7 Briefly, assays were performed in clear-bottom, opaque-welled, 96-well microtiter plates (Corning Costar, Corning, NY) using dibenzyl fluorescein (DBF, Sigma-Aldrich, Milwaukee, WI) as a substrate. Wells were designated as either "control", "control blank", "test", or "test blank". Control wells consist of ddH2O and β-nicotinamide adenine dinucleotide phosphate (NADPH) solution; control blank wells consist of H2O and buffer solution; test wells consist of a dillapiol analogue at a particular concentration and NADPH solution; and test blank wells consist of the corresponding analogue compounds and buffer solution. Enzyme solution was added to all wells. Fluorescence was measured at 485 nm excitation with a 20 nm bandwidth filter. All measurements were carried out in triplicate.

Statistical Analysis. Percent inhibition values were calculated based on differences in fluorescence between the control/control blank wells and test/test blank wells by the formula

% inhibition = {1 − [(test(t=X) − test(t=0)) − (test blank(t=X) − test blank(t=0))] / [(control(t=X) − control(t=0)) − (control blank(t=X) − control blank(t=0))]} × 100,

where t = X and t = 0 are the reading values of the control/control blank/test/test blank wells at the end and beginning of running X min, respectively. The half-maximal inhibitory concentration (IC50) of each analogue was determined from logarithmic curves of percent inhibition plotted against concentration. The inhibition activity relative to dillapiol was obtained by the formula

relative activity = IC50 of dillapiol / IC50 of tested compound.

As the aim of the research was to find analogues possessing significantly higher CYP3A4 inhibition activity than dillapiol (positive control), analogues whose IC50 values were more than two times higher than that of dillapiol were labeled "/" and no data are shown for such compounds in Tables 1 and 2. Statistical analysis of the CIs and the range of IC50 values observed was carried out using GraphPad Prism 5; these are shown in the table of the Supporting Information.
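To make the assay arithmetic concrete, here is a minimal Python sketch of the percent-inhibition and relative-activity calculations defined above. The function names and example numbers are ours, and the interpolation step is one simple way to read off an IC50, not necessarily the curve procedure used with GraphPad Prism.

```python
import numpy as np

def percent_inhibition(test_x, test_0, tblank_x, tblank_0,
                       ctrl_x, ctrl_0, cblank_x, cblank_0):
    """Blank-corrected fluorescence change of test wells vs. control wells."""
    d_test = (test_x - test_0) - (tblank_x - tblank_0)
    d_ctrl = (ctrl_x - ctrl_0) - (cblank_x - cblank_0)
    return (1.0 - d_test / d_ctrl) * 100.0

def ic50(conc_um, pct_inh):
    """Read off the 50% crossing on a log-concentration axis.
    Assumes pct_inh increases monotonically with concentration."""
    return 10.0 ** np.interp(50.0, pct_inh, np.log10(conc_um))

def relative_activity(ic50_dillapiol_um, ic50_test_um):
    """Potency relative to dillapiol (= 1), as tabulated in Tables 1-5."""
    return ic50_dillapiol_um / ic50_test_um

# Example: sulfone 18 vs. dillapiol, using the IC50 values from the text
print(round(relative_activity(9.2, 0.086)))   # -> 107
```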
Synthesis of Dillapiol and Sesamol Analogues. Dillapiol was obtained via steam distillation of the fruit of P. aduncum collected in the Sarapiqui region of Costa Rica. A typical steam distillation of 3 kg of fruit with 3 L of water yielded between 30 and 35 g (1 to 1.2%) of essential oil whose proton NMR indicated dillapiol of more than 95% purity. This material was considered sufficiently pure for transformation to the various intermediates and final products. Dillapiol can also be isolated by steam distillation of the leaves and branches of P. aduncum; the typical yields were in the 0.3 and 0.15% range for leaves and branches, respectively. The latter materials tended to have greater amounts of impurities relative to dillapiol compared to that obtained from the fruit. It is well recorded in the literature 15 that the composition of the essential oil obtained from P. aduncum varies greatly for different geographical locations. The dillapiol content of our Costa Rican sample was as high as that obtained from one of the reported samples [sample G, collected at Road Manaus-Caracarao, km 30 (AM), 97.3%]. Our yield of oil was only 1-1.2% versus the 3.0% reported by Maia et al. 15

Sesamol was purchased from Sigma-Aldrich and used as such. Standard, well-known chemical transformations were employed to produce the various analogues. A group of 17 esters was prepared via Scheme 1. Hydroboration of dillapiol, 1, with borane-dimethyl sulfide in tetrahydrofuran gave, after reductive workup, mainly the expected primary alcohol 2 along with minor amounts of the secondary alcohol 3. These isomers were separated via flash silica gel chromatography. Both isomers were esterified with different-sized aliphatic and aromatic acids, either by reaction with the respective acid chloride or by coupling the acid and alcohol with the help of the coupling reagent DCC.

An additional 10 esters and 14 ethers were prepared starting with sesamol in order to investigate the importance of the 4-methoxy group in dillapiol. Sesamol, 6, was O-allylated by reaction with allyl bromide in the presence of potassium carbonate. The resultant allyl ether underwent a clean Claisen rearrangement upon heating to 190 °C in decalin to give the ortho-allylated phenol 7. Subsequent O-alkylation provided ethers 8. Hydroboration of 8b (R = CH3) and 8d (R = CH2Ph) yielded the primary and secondary alcohols 9 and 10, respectively. Acylation of these gave compounds with the general formulae 11 and 12, Scheme 2. All compounds were characterized by both 1H and 13C NMR and high-resolution mass spectrometry, which assured their structure assignments (see the Supporting Information).

The above compounds were tested in the CYP3A4 assay. The results are presented as inhibition activity relative to dillapiol (dillapiol activity = 1). The primary data set, including mean IC50 values and 95% CIs for the esters (17 prepared from dillapiol and 10 from sesamol), is provided in Tables 1 and 2, respectively. Although the raw IC50 values of 27 ester analogues showed a lower trend (i.e., higher inhibition) than that of dillapiol (9.2 μM), statistical analysis revealed that only 15 analogues possessed significantly greater CYP3A4 inhibition activity than dillapiol (df = 47, 96; F = 106.5; P < 0.05). These are highlighted with * in these tables. For the remaining compounds, Tables 3-5, only their inhibition activity relative to dillapiol is presented; a "/" entry indicates that the compound inhibited CYP3A4 to a lesser degree than dillapiol. To ensure consistency, dillapiol was included as the standard in each set of measurements, typically 5 to 8 compounds.
The IC50 value for dillapiol was consistently in the 8.9−9.2 μM range.

■ RESULTS AND DISCUSSION

Much of the impetus for this research came from an earlier observation by our group that the tertiary alcohol 13, obtained by condensation of benzophenone with the allyl anion generated by the reaction of dillapiol with n-BuLi in THF at −78 °C, strongly inhibited CYP3A4.16 The synthesis of 13 was repeated, and it was found to be a 5.8 times more potent CYP3A4 inhibitor than dillapiol. The hydrogenolysis product 14 and the alkene metathesis product 15 had similar potency, while the dillapiol dimer 16 was 13 times more potent than dillapiol 17 (Table 3). Both the alcohols 2 and 3 were less potent than dillapiol in inhibiting CYP3A4.

The data in Table 1 show that for the esters derived from the primary alcohol 2, an increase in the size of the R group, particularly near the ester function itself, resulted in increased CYP3A4 inhibition. For example, the acetate 4a (R = CH3) has essentially the same inhibitory effect as dillapiol. The change in R from CH3 (4a) to n-C5H11 (4d) to cyclohexyl (4e) to tert-butyl resulted in increased inhibition by factors of 2.8, 3, 4, and 4.8, respectively. The benzoyl ester (4g) is a 4.4 times more potent inhibitor than the methyl ester. Chlorine substituents on the aromatic ring further increase the potency of these compounds to statistically significant values of greater than 6 (4h, 4i, 4j). The greatest change was observed when a hydrogen in the benzyl group (4m) was replaced by a second aromatic ring to yield R = benzhydryl (4n, 5n). This increased inhibition by a factor of 5, making 4n 23 times more potent than dillapiol. The benzhydryl ester, 5n, of the secondary alcohol 5 is twice as active as 4n (46 times dillapiol). On the basis of this result, several other benzhydryl esters were prepared. The esters of the secondary alcohol were more potent than those of the isomeric primary alcohol. The benzhydryl ester 12, derived from the secondary alcohol 10 (R = CH2Ph) (Scheme 2), gave an IC50 value of 0.16 μM, making it almost 60 times more potent than dillapiol. It is among the most potent CYP3A4 inhibitors reported in this study (Scheme 3).

Comparison of the same esters derived from dillapiol and sesamol suggests that the removal of the 4-methoxy group of dillapiol does not have a significant effect on the inhibition of the CYP3A4 enzyme. For example, the benzoyl esters 4g and 9a inhibit this enzyme to essentially the same extent. The same conclusion can be drawn when one compares the data for the benzhydryl esters 4n and 9f; each of these compounds is approximately 20 times as effective as dillapiol. The replacement of the 5-methoxy group in the sesamol derivatives by benzyloxy groups (Scheme 4) (9b and 9g) lowers the inhibitory activity of these compounds by a factor of 2−3. This is the opposite of the results observed for the corresponding 6-allyl derivatives, where the benzyl ether 8c was significantly more active than the methyl ether 8b.

The in vivo bioassays reported by Liu13 showed that only the highly hindered benzhydryl esters had good synergistic activity, as anticipated from their in vitro CYP3A4 inhibition. The less hindered esters such as the methyl ester 4a and the benzoyl ester 4g showed no insecticide synergism of pyrethrum against the Colorado potato beetle, likely due to insect esterase activity, which would convert these compounds back to the inactive alcohols 2, 3, 9, and 10.
The metabolically more stable ethers, for example 8b, showed the increased in vivo synergism relative to dillapiol expected on the basis of their increased inhibition of CYP3A4. A series of benzylic ethers 8b−8o was prepared from sesamol via the allylated phenol 7 (Table 5). A number of these compounds, where R is C6H13 or CH2Ph, showed modest enhancements in potency in comparison to dillapiol, with the hexyl ether showing a 4.8-fold and the benzyl ether a 6.8-fold increase, respectively. Most of the other analogues had similar CYP3A4 inhibitory activity, with values ranging from 0.9 times the activity of dillapiol for the o-methoxy derivative 8g to 8 times for the o-methyl compound 8e. Several derivatives stood out, in particular the o-chloro analogue 8f and the 1-naphthyl ether 8m, which inhibited CYP3A4 86 and 26 times more effectively than dillapiol, respectively. These compounds have inhibitory activity comparable to the most potent benzhydryl esters 5n and 11i and the sulfones 18 and 19. The benzyl ethers 8n and 8o, which have 2-methylpropenyl and 2-chloropropenyl side chains at C6, respectively, were 15 and 12.6 times more potent inhibitors than dillapiol, suggesting that a combination of a more hindered side chain at C6 coupled with appropriate ortho-substitution in the 5-benzyloxy moiety could lead to even more potent CYP3A4 inhibitors. Replacement of the benzoyl ester function in 4g by a benzyl ether, yielding 17a and 17b, resulted in a decrease of the in vitro potency (Table 4).

A group of benzylic ethers 8 was selected by BASF for evaluation as insecticide synergists against economically important insects in combination with common insecticides, and for their potential to be registered by environmental protection agencies. The initial results are encouraging and will be reported when the evaluation is complete.18

■ CONCLUSIONS

Qualitative analysis of the inhibition data for a variety of esters indicates that larger acyl groups in the esters 4, 5, 11, and 12 give higher in vitro inhibition of CYP3A4. Additionally, it appears that the same esters derived from the secondary alcohols, for example 5n and 12, are approximately twice as potent as those obtained from the isomeric primary alcohols, for example 4n and 11i. In the case of the ethers 8b−8o, the replacement of the remaining 5-methoxy group by larger substituents such as O-hexyl, O-benzyl (in particular several ortho-substituted benzyl analogues), and O-1-naphthyl resulted in significant increases in the inhibitory potency of these compounds relative to dillapiol. The replacement of the diphenylacetic acid moiety in compounds such as 10i and 12 by a phenylsulfonyl group resulted in the most potent CYP3A4 inhibitors we have produced thus far, with 18 and 19 (Scheme 5) having IC50 values of 0.086 and 0.13 μM, respectively. Thus, 18 is 107 times and 19 is 70 times more potent than dillapiol as an inhibitor of the CYP3A4 enzyme. The sulfones, unlike the esters, are not subject to esterase metabolism in the insects and are therefore more likely to be effective in field trials. Additional structure modifications designed to enhance the potency of sesamol and dillapiol derivatives as CYP3A4 inhibitors, potentially guided by a QSAR developed from the available data,16,17,19 in particular various ether and sulfone analogues, and the testing of these compounds in vitro and in the field, will be reported in the future.
Analysis of Scour Depth Around Bridge Piers With Round Nose Shape by HEC-RAS 5.0.7 Software

Local scour at bridge piers is a main cause of failure of hydraulic structures such as abutments and bridge piers. Local scour is a complex phenomenon that depends on the discharge, the flow depth, the shape of the pier, and the sediment particle distribution. The local scour occurring in the Krueng Ineng river, Nagan Raya Regency, can lead to structural collapse by reducing the stability of the bridge structure. In this study, the Hydrologic Engineering Center River Analysis System (HEC-RAS) 5.0.7 software is used to evaluate local scour around the bridge pier; it employs the Colorado State University method as its default equation. Flow conditions were simulated with the HEC-RAS flow model for the estimated 100-year flood. Using the peak discharge (Qp100) of the Krueng Seunagan watershed, 1513 m3/s, a round-nose pier width of 4 m, and average grain sizes D50 = 0.91 mm and D95 = 4.35 mm, the analysis gives a maximum scour depth of 5.04 m. The results of this study will serve as a reference for the local government in planning appropriate countermeasures to minimize local scour in the study area.

Introduction

Bridges are critical structures that require large investments and play an important role in economic development. Major damage to bridges during floods arises from various causes; a principal one is local scour induced by piers and abutments. Local scour occurs directly at structures in the river channel. The process of local scour is usually triggered by the obstruction, by the structure, of the sediment transport carried along with the flow (Cambodia, 2018). Severe local scour endangers the bridge pier and can cause a collapse of the bridge structure. Many researchers have investigated the phenomenon of local scour around bridge piers (Ahmad, 2017), studying it experimentally or theoretically and considering the parameters that affect it. The collapse of a bridge results in costly repairs, disruption of traffic, and possible deaths of passengers traveling on the bridge at the time of collapse. There are generally three types of scour that affect the performance and safety of bridges: local scour, contraction scour, and degradation scour. Factors affecting the development of local scour are flow intensity, flow shallowness, sediment coarseness, time, and velocity distribution (Akib & Rahman, 2013).

One of the areas experiencing local scour problems is the Alue Buloh bridge at a river crossing on the Krueng Ineng river; this bridge is located in the Latong area of Seunagan District, Nagan Raya Regency, within the Krueng Seunagan watershed. This steel-frame bridge is one of the access links between the villages of Alue Buloh and Latong. Judging from the conditions at the study site, local scour has already lowered the bed around the bridge piers, albeit on a small scale; if the problem is left unaddressed, it can lead to failure of the larger bridge structure. Given these problems, it is necessary to study and assess the bridge at the study location. The analysis in this study was simulated using the HEC-RAS 5.0.7 software.
HEC-RAS 5.0.7 was created by the Hydrologic Engineering Center (HEC), a work unit of the US Army Corps of Engineers (USACE). HEC-RAS 5.0.7 can perform computations of steady water surface profiles (Steady Flow), simulation of unsteady flow (Unsteady Flow), sediment transport computations, water quality analysis, and hydraulic design calculations (FDOT, 2005). Finding the right calculation parameters to predict the amount of local scour that occurs in the Krueng Ineng river due to the construction of water structures is expected to provide a benchmark for planning the protection of the bridge structure. Research on local scour at water structures, especially bridges, is needed because local scour can reduce the safety of bridge structures; identification and analysis are therefore required to predict the local scour around the bridge and to minimize the impacts that might occur. The Colorado State University (CSU) equation is the most widely used equation in America. The CSU equation is used to predict the maximum pier scour depths for both live-bed and clear-water scour conditions (Richardson and Davis, 2010, in Osman Akan, 2006).

Experimental/Methods

This research is carried out only in the area that experiences local scour problems under the bridge in the Alue Buloh area, Nagan Raya Regency (Figure 1). The research period was 6 months, from August 2019 to January 2020, and the research uses quantitative and survey methods. The data used in this study are primary and secondary data. The primary data are obtained from direct observation in the field, while the secondary data are obtained from the relevant agencies; both are needed to support the results of the research. The primary data comprise river cross-section data, the distance between piers, the pier shape, the pier dimensions, sediment samples, the distance from the bridge to the downstream section, the bridge deck width, the bridge elevation, and the elevation of the deck under the bridge. The secondary data comprise watershed map data containing the watershed area and the length of the main river, as well as rainfall data used to determine the magnitude of the design flood discharge. The data processing steps follow the research flowchart (Figure 2):

1. Field survey;
2. Obtain field data:
a. The distance between piers is measured by determining the center point of each pier and measuring the distance between them;
b. Pier shape and dimension data are obtained by inspecting the pier shape visually and measuring the length and width of the pier;
c. The distance from the bridge to the downstream cross-section is obtained by measuring the distance from the bridge starting point to the end of the bridge section;
d. The bridge deck width is obtained by measuring the width of the bridge at the study site;
e. The elevation of the bridge is measured using a water pass.
3. Measurement of grain size analysis:
a. The sediment sample is tested by sieve analysis to obtain the percentage of sediment passing each sieve;
b. A sieve analysis chart is made, correlating sieve diameter with the percentage of sediment passing;
c. The grain sizes used are the average grain sizes D50 and D95 taken from the graph.
4. Hydrological analysis to determine the design flood discharge of the watershed:
a. Monthly maximum rainfall data;
b. Analysis of rainfall frequency and testing of the goodness of fit of the distribution to produce design rainfall with 2-, 5-, 10-, 25-, 50-, and 100-year return periods;
c. Calculation of the design flood discharge with the Nakayasu synthetic unit hydrograph for return periods of 50 and 100 years in the Krueng Seunagan watershed. The Nakayasu synthetic unit hydrograph equation is as follows (Soewarno, 1995):

Qp = (C A Re) / (3.6 (0.3 Tp + T0.3))

where Qp = design flood discharge (m3/s); C = runoff coefficient; A = watershed area (km2); Re = unit rain (mm); Tp = time lag from the beginning of the rain to the peak of the flood (hour); T0.3 = the time required for the discharge to decrease from the peak discharge to 30% of the peak discharge (hour).
5. Processing of the data using the HEC-RAS 5.0.7 software. The software calculates the scour depth at bridge piers constructed in rivers using hydraulic flow data, the shape and geometric characteristics of the bridge pier, and the material and shape of the river bed around the pier. The default model for estimating the local scour depth around bridge piers is the CSU equation, defined as follows (Richardson and Davis, 1995, in Ghaderi, 2019):

Ys = 2.0 K1 K2 K3 K4 a^0.65 Y1^0.35 Fr1^0.43

where Ys = the maximum scour depth; a = the width or diameter of the pier; Y1 = the flow depth upstream of the pier; K1 = the pier shape coefficient; K2 = the coefficient for the angle of attack of the flow; K3 = the bed condition coefficient; K4 = the coefficient for armouring of the bed by the sediment particles; Fr1 = the Froude number. The correction factor tables for K1, K2, and K3 can be seen in Tables 1 to 3 (Suma, 2018).

The bridge scour computations are performed by opening the Hydraulic Design Functions window and selecting the scour at bridges function. Once this option is selected, the program automatically goes to the output file and gets the computed output for the approach section, the section just upstream of the bridge, and the sections inside the bridge, together with the input data, a graphic, and a window for summary results. Input data tabs are available for contraction scour, pier scour, and abutment scour. Entering contraction scour data: all of the variables except K1 and D50 are obtained automatically from the HEC-RAS output file. To compute contraction scour, the user is only required to enter D50 (the mean size fraction of the bed material) and a water temperature to compute the K1 factor. Entering pier scour data: the user is only required to enter the pier nose shape (K1), the angle of attack of the flow hitting the piers, the condition of the bed (K3), and the D95 size fraction of the bed material. All other values are obtained automatically from the HEC-RAS output file. The user has the option to use the maximum velocity and depth in the main channel, or the local velocity and depth at each pier, for the calculation of the pier scour (Graduado, 2001). An alternative pier scour relation available in HEC-RAS is the Froehlich equation,

Ys = 0.32 φ (a')^0.62 Y1^0.47 Fr1^0.22 D50^(−0.09) + a

where Ys = the maximum scour depth; φ = the correction coefficient for the pier nose shape; Y1 = the depth of flow upstream of the pier; D50 = the average diameter of the bed particles; a = the width or diameter of the pier; a' = the width of the pier projected onto the flow direction.
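To make the CSU computation concrete, here is a minimal Python sketch of the equation as reconstructed above. The coefficient values noted in the comments follow the usual HEC-18 tables as we understand them, and the flow depth and Froude number below are illustrative placeholders rather than outputs of the HEC-RAS simulation:

def csu_pier_scour(a, y1, fr1, k1=1.0, k2=1.0, k3=1.1, k4=1.0):
    # CSU/HEC-18 pier scour equation:
    #   Ys = 2.0 * K1*K2*K3*K4 * a**0.65 * Y1**0.35 * Fr1**0.43
    # a: pier width (m); y1: approach flow depth (m); fr1: Froude number.
    # Typical coefficients: K1 = 1.0 for a round nose, K2 = 1.0 for a zero
    # angle of attack, K3 = 1.1 for small dunes, K4 = 1.0 when no bed
    # armouring correction applies.
    return 2.0 * k1 * k2 * k3 * k4 * a**0.65 * y1**0.35 * fr1**0.43

# Pier width 4 m as in this study; depth and Froude number are placeholders,
# since in practice they come from the HEC-RAS steady flow output.
print(csu_pier_scour(a=4.0, y1=5.0, fr1=0.4))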
Result and discussion

Measurements were carried out to determine the dimensions of the piers used in the research. From the measurements in the field, pier dimension data such as the pier width, the distance between piers, and the pier shape were obtained (shown in Table 4 and Table 5).

The grain size analysis was carried out to obtain the grain diameters required as parameters in the scour depth calculation. The variables to be obtained are the average particle diameters D50 and D95 of the sediment grains. The sediment grain size analysis is shown in Table 4 and Figure 4. The analysis of the sediment grains (Figure 4) gave average sediment grain sizes of D50 = 0.91 mm and D95 = 4.35 mm. In this study, the discharge used in the calculation of the scour depth is the peak discharge obtained with the Nakayasu synthetic unit hydrograph method. For the design rainfall, the Log Pearson Type III distribution was found to be acceptable (shown in Table 6 and Figure 5). Table 6 lists the parameters of the Krueng Seunagan river basin.

The hydraulic analysis was carried out using the HEC-RAS 5.0.7 program, with steady flow analysis as the simulation type. The first step in developing the HEC-RAS model is to create a HEC-RAS geometric file. The basic geometric data consist of establishing how the various river reaches are connected: enter the river cross-section data, including distance, elevation, Manning coefficient, the distance between cross-sections, and levees. Enter the bridge deck data, and next enter the pier data: select the pier in the pier data editor window and fill in the pier spacing in the upstream centerline station and the downstream centerline station. The discharge value entered is the value obtained from the Nakayasu calculation (100-year discharge of 1513 m3/s). Select the reach boundary conditions and set the boundary conditions for downstream and upstream as normal depth, then fill in the riverbed slope of 0.022 for downstream and 0.042 for upstream. The modeling represents the condition in which the bridge already exists and is intended to predict the local scour depth due to contraction scour and pier scour. The pier shape analyzed in this modeling is the pier with a round nose.

Almost all of the data needed to calculate contraction scour are simulated automatically by HEC-RAS based on the steady flow analysis. The data to be entered are the sediment diameter D50 for the left overbank, the channel, and the right overbank. The coefficient K1 value is calculated automatically by the HEC-RAS program. The contraction scour is calculated with the clear-water or live-bed version of the Laursen equation; in this study the equation selection is left in the Default position. The scour depth analysis at the pier was then carried out in HEC-RAS with the CSU equation. Four types of data are entered with their respective values: the pier shape (round nose), the angle of attack of the flow on the pier (0), the river bed form (K3, small dunes), and the sediment diameter D95 (4.35 mm). The next step is to compute the results based on the predetermined discharge, the 100-year peak discharge of 1513 m3/s. The modeled local scour depth can be seen in Figures 11 and 12. The scour depth analysis for the 100-year return period with the HEC-RAS 5.0.7 model gives a depth of 5.04 m.

Conclusions

In this study, the peak discharge in the Krueng Seunagan watershed obtained using the Nakayasu SUH method was Qp100 = 1513 m3/s. The analysis of the sediment grains gave average sediment grain sizes of D50 = 0.91 mm and D95 = 4.35 mm.
The analysis of the local scour depth using the HEC-RAS 5.0.7 software, with a round-nose pier shape and the 100-year peak discharge, gives a scour depth of 5.04 m. Further research can be done by examining the depth of local scour through laboratory experiments using different pier shapes.
Studies on congenital osteopetrosis in microphthalmic mice using organ cultures: impairment of bone resorption in response to physiologic stimulators.

The mechanism of congenital osteopetrosis in microphthalmic (mi) mice has been examined in bone organ cultures. Resorption was measured by the release of previously incorporated 45Ca in fetal long bones and newborn calvaria from mi mice and heterozygous or homozygous normal litter mates. Bones from mi mice showed a generalized resorption defect, with decreased spontaneous or control resorption and failure to respond to parathyroid hormone (PTH), prostaglandin E2, 1,25-dihydroxyvitamin D3, vitamin A, or osteoclast activating factor (OAF) from human peripheral leukocytes or mouse spleen cells. Bones from heterozygotes showed a smaller response to PTH than bones from homozygous normals. Mutant bones failed to show an increase in lysosomal enzyme release in response to PTH or vitamin A, agents which increased release from bones of homozygous normals. Proline incorporation into collagenase-digestible protein was similar in cultures of normal and mutant bone and was inhibited by PTH and OAF. These results indicate that congenital osteopetrosis in mi mice is due to a generalized defect in the function and hormonal response of osteoclasts, and suggest that this cell line is separate from the osteoblast cell line, which shows no impairment of hormonal response.

mi mice showed a generalized defect in bone resorption with marked impairment of the resorptive response to both human and mouse OAF as well as to other known stimulators of osteoclastic bone resorption, including parathyroid hormone (PTH) (12), prostaglandin E2 (PGE2) (13), 1,25-dihydroxyvitamin D3 [1,25(OH)2D3] (14), and vitamin A (12). Bone formation was not impaired in mi mice and could be inhibited by PTH and OAF. OAF production did not appear to be impaired, since supernatant fluid of PHA-activated spleen cells from mi mice contained as much bone-resorbing activity as supernates from normal animals.

Methods

Mice heterozygous for the mi recessive gene were obtained from Dr. D. W. Walker, The Johns Hopkins University School of Medicine, Baltimore, Md., and from The Jackson Laboratory, Bar Harbor, Maine. Separation of the various genotypes was possible in fetal as well as newborn animals because mi mutants have no eye pigment, heterozygotes have a small amount, and homozygous normals have a fully pigmented eye. Except for one experiment in which fetal age was estimated from bone size and development, the studies were carried out after a single overnight mating to permit precise dating of pregnancy.

Long Bone Cultures. The technique was similar to that used for measuring resorption in fetal rat long bones (12, 15). The pregnant mice were injected with 45Ca on the day before sacrifice. On the 16th to 19th day the fetuses were removed, the shafts of the radius and ulna dissected free of soft tissue and cartilage, and precultured for 18-24 h in a chemically defined medium, BGJ, at 37°C in an atmosphere of 5% CO2 in air. Subsequently the bones were transferred to the same medium supplemented with 4 mg/ml bovine serum albumin, to which various stimulators of bone resorption were added, and cultured for 3 days. 45Ca in the medium and bone was measured and resorption estimated from the percent of 45Ca released. Bones killed by three cycles of freezing and thawing were used to correct for any differences in the exchange of 45Ca by bones from the different genotypes.
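As a sketch of how such a resorption figure can be computed, the following Python fragment expresses released 45Ca as a percent of the total recovered activity, with an optional subtraction of the passive exchange measured in the killed bones; the form of the correction is our reading of the blank procedure, and the count values are invented for illustration:

def percent_ca45_release(medium_cpm, bone_cpm, killed_release_fraction=0.0):
    # Percent of incorporated 45Ca released into the medium over the culture
    # period, optionally corrected for the passive exchange measured in
    # freeze-thaw-killed bones (expressed as a fraction of total counts).
    total = medium_cpm + bone_cpm
    release = medium_cpm / total
    return 100.0 * (release - killed_release_fraction)

# Illustrative counts (cpm): a live bone versus an 8% killed-bone exchange blank
print(percent_ca45_release(medium_cpm=420.0, bone_cpm=1580.0,
                           killed_release_fraction=0.08))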
Experiments Combining Bones from Different Genotypes. In some experiments, 45Ca release from long bones of mi, normal, or heterozygous mice was compared when the bones were cultured alone or together with long bones from unlabeled animals. Labeled bones were co-cultured with unlabeled bones from the same or a different genotype. The labeled and unlabeled bones were placed together on a 2 mm square Millipore filter and the cultures were maintained for 6 days with a change of medium at 3 days.

Calvarial Cultures. Newborn mice were used (21 days post-mating). The bones were labeled with 45Ca by injection of the mother 3 days before birth. The flat, central portions of the frontal and parietal bones were dissected, care being taken to preserve the periosteum. Half calvaria were incubated in BGJ supplemented with 5% heat-inactivated human serum in a manner similar to the long bone cultures. The bones were cultured for one or two 3-day periods. In some of the experiments the bones were transferred to Erlenmeyer flasks at the end of the resorption culture period for studies of bone collagen synthesis.

Bone Collagen Synthesis. After 3-6 days of culture, bones were transferred to 25 ml Erlenmeyer flasks with 2 ml of growth medium containing 1 mM proline and no serum, gassed with 5% CO2 and air, stoppered, and incubated for 3 h. Labeled proline (2,3-3H-proline; 5 μCi/ml; New England Nuclear, Boston, Mass.) was added for the last 2 h. At the end of incubation the bones were washed with 5% TCA, acetone, and ether; weighed; and homogenized. 3H-proline in collagen was determined by digestion with repurified bacterial collagenase by the method of Peterkofsky and Diegelmann (17). Labeled collagenase-digestible protein (CDP) and noncollagen protein (NCP) were determined and the percent collagen synthesis calculated using a factor of 5.4 to correct for the relative abundance of proline in collagen and noncollagen protein.

Lysosomal Enzyme Measurements. To measure lysosomal enzyme activity, calvaria were homogenized in 0.1% Triton in water and aliquots of bone homogenate or medium were incubated with appropriate buffers and substrates using standard methods (18). For measurement of β-glucuronidase (β-Gl) the substrate was phenolphthalein-β-glucosiduronate, for N-β-acetylglucosaminidase (AG) the substrate was the nitrophenylglycoside, and for cathepsin D (Cath D) 3H-acetylated hemoglobin was the substrate. Lysosomal data are presented as the percent of the total enzyme present in bone and medium that was released into the medium during a 3-day culture period. The recovery of enzyme from the bone homogenate appeared to be complete, since the initial values for enzyme in homogenates of the calvaria and in the medium plus bone of calvaria killed by repeated freezing and thawing and then incubated for several days were similar. Moreover, no additional enzyme activity could be obtained by centrifugation and re-extraction of the homogenate.
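A minimal sketch of these two calculations, as we read them, is given below; the 5.4 factor divides the CDP counts to correct for the greater abundance of proline in collagen, and all numerical values are illustrative:

def percent_collagen_synthesis(cdp_dpm, ncp_dpm, factor=5.4):
    # Percent collagen synthesis from labelled proline incorporation into
    # collagenase-digestible protein (CDP) and noncollagen protein (NCP);
    # dividing CDP by 5.4 corrects for the relative abundance of proline
    # in collagen versus noncollagen protein.
    return 100.0 * (cdp_dpm / factor) / (cdp_dpm / factor + ncp_dpm)

def percent_enzyme_release(medium_activity, bone_activity):
    # Lysosomal enzyme release as a percent of the total (bone + medium).
    return 100.0 * medium_activity / (medium_activity + bone_activity)

print(percent_collagen_synthesis(cdp_dpm=5400.0, ncp_dpm=9000.0))  # ~10%
print(percent_enzyme_release(medium_activity=30.0, bone_activity=70.0))  # 30%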
Cultures of Mouse Spleen for OAF. Spleens from homozygous normal and mi newborn mice were used; the remaining heterozygous animals were returned to the mother and ultimately added to the breeding colony. The spleens were minced and the cells dispersed by repeated pipetting; the cell suspension was pelleted by brief centrifugation and resuspended in BGJ medium at 10^6 mononuclear cells/ml. 10 ml of cell suspension was incubated in a 150 mm Petri dish with 1% PHA (PHA-M; Grand Island Biological Corp., Grand Island, N. Y.). After 24 h the supernate was removed and centrifuged, and the clear supernate was frozen. This crude supernate, which contained bone-resorbing activity, is referred to as mouse OAF. In addition, a human OAF preparation partially purified by Sephadex G100 chromatography, as previously described (9), was used in some cultures.

Results

Long Bone Cultures. The radius and ulna from 16- to 19-day fetal mice provided satisfactory material for measurement of the bone resorptive response to stimulators. Bones from homozygous normal or heterozygous animals showed up to a twofold increase in 45Ca release in response to PTH and a significant response to partially purified human OAF (Table I). Control resorption rates were low, but the difference from killed bones was significant for homozygous normals and for heterozygotes. In contrast, bones from mi/mi mice showed minimal response to PTH or OAF and no control resorption. The response of bones from heterozygotes appeared less than that of homozygous normals, and in another experiment (Table II) the response to a submaximal dose of PTH was significantly smaller in bones from heterozygous animals (P < 0.05). Histologic studies showed that both heterozygous and normal bone treated with PTH contained numerous osteoclasts on the bone surface, with loss of matrix. Stable calcium content of the bone was also reduced by treatment with PTH.

To test the possibilities that normal bone cells were producing a humoral factor essential for osteoclast activation or that osteopetrotic bones contain an inhibitor of osteoclasts, paired cultures of mutant and normal or heterozygous bones were maintained in culture with the addition of PTH. There was no difference in 45Ca release when mi/mi bones were cultured alone, with paired bones from normal or heterozygous animals, or with paired bones from mi animals (Table III). Similarly, the data for bones from normals and heterozygotes, which are pooled, showed no difference in response whether cultured with unlabeled bones from the same genotypes or from mi animals. Since these small bones (less than 1 mg wet weight) were cultured in 0.5 ml of medium, the data do not rule out the existence of a humoral factor which was ineffective because of dilution.

Calvaria Cultures. Further studies were carried out on half calvaria from mi/mi, heterozygous, and normal animals, because (a) they had been used in previous studies of osteopetrosis, (b) they show relatively greater rates of control resorption, and (c) they provide larger amounts of bone for studies of lysosomal enzyme release and collagen synthesis. Both heterozygotes and homozygous normals showed substantial control resorption and responded to PTH, PGE2, and 1,25(OH)2D3 (Table IV). PTH at a supramaximal initial concentration had a greater effect on normal bone than on bone from heterozygotes. However, the responses to PGE2 and 1,25(OH)2D3 at supramaximal concentrations were not different for bones from normals and heterozygotes. Bones from mi/mi mice showed no significant control resorption and no response to any of the stimulators. Because the number of bones was limited, only mi/mi and heterozygote bone was used to test the 45Ca response to vitamin A and mouse OAF.
Vitamin A was found to stimulate resorption of bone from heterozygotes but not from mi/mi animals. When tested on bones from heterozygotes, the bone-resorbing activity in supernates of PHA-stimulated spleen cell cultures from mi/mi mice was as great as that from spleen cells of homozygous normals. However, mouse OAF did not stimulate resorption of mi/mi bone. In other studies (data not shown) mouse OAF from normal and mutant animals was assayed at several dilutions and no difference in potency or dose-response curve was observed.

An increase in lysosomal enzyme release into the medium was associated with stimulation of resorption by PTH or vitamin A in cultures of normal bone and was similar for all three enzymes tested (Table V). In contrast, bone from mi/mi animals showed no increase in enzyme release with stimulation. Control release of enzymes was similar for mutant and normal bone, and there was no difference in the total enzyme content of normal and mutant bone. In other experiments (data not shown) the medium content of β-Gl was increased with stimulation of bone resorption by PGE2 in normal but not in mutant bones.

PTH and human OAF are inhibitors of bone collagen synthesis in vitro at high concentrations (10). Mouse calvaria were cultured for 3-6 days and transferred to a growth medium to measure proline incorporation into CDP (Table VI). Bones from mi/mi mutants and normal or heterozygous mice incorporated substantial amounts of proline into CDP, and the value was somewhat higher for the mutants. Both PTH and OAF caused similar inhibition of the incorporation of labeled proline into collagen in mi/mi and normal or heterozygous mouse calvaria. NCP synthesis was not affected.

Discussion

The present studies have used techniques for the study of the pathogenesis of osteopetrosis in vitro which should make it possible to identify the specific defects in the various animal models of this disorder. The mi mouse was used initially because the defect is severe and it is easy to identify the genotypes in fetal and newborn animals. Cultured long bones and calvaria from mi mice showed decreased spontaneous or control resorption and failed to respond to a number of potent stimulators of resorption, including PTH, PGE2, 1,25(OH)2D3, vitamin A, and OAF from both human peripheral leukocyte and mouse spleen cultures. These results indicate that there is a generalized bone resorption defect in mi mice which is responsible for the development of osteopetrosis. The most likely explanation for this is an abnormality in osteoclast formation or function. We have looked for a local humoral factor which might be deficient in the microphthalmic mouse or which might inhibit bone resorption. Co-culture of mutant and normal bones provided no evidence for such a humoral factor; however, dilution or inactivation could have been responsible for the negative results. The failure to respond to widely differing stimulators of bone resorption makes it extremely unlikely that there was any defect in receptor sites on the osteoclasts or their precursors.
Spleen cells from mi mice produced as much bone-resorbing activity in PHA-stimulated cultures as did spleen cells from normal animals. These results suggest that defective OAF production is not important in the pathogenesis of osteopetrosis in this animal model. The physiologic role of OAF is not known, and it is possible that its spontaneous release may be important in initiating endosteal bone resorption. Failure of such release could be responsible for osteopetrosis in other mutants. Our results differ from those reported by Marks, who found that 45Ca loss was increased in cultured bones from ia rats, despite evidence that inhibition of bone resorption was present in vivo (19). This discrepancy could be due to the difference in species, particularly since the abnormality in resorption is transient in the ia rat. Reynolds and his associates found that control resorption was impaired in cultures of calvaria from mice with the gray-lethal mutation, which produces severe osteopetrosis (20). Reynolds et al. (20) also demonstrated that administration of a potent inhibitor of bone resorption, dichloromethylene diphosphonate, to newborn normal mice could mimic the osteopetrotic lesion of the gray-lethal mutants.

The studies with calvaria confirmed and extended the results with long bones, and provided a model in which biochemical studies were more easily carried out. Measurements of several lysosomal enzymes showed that increased enzyme release occurred from normal bones when resorption was stimulated but did not occur in mutant bone. Failure of enzyme release has been suggested on the basis of morphologic observations of acid phosphatase accumulation in osteoclasts in other animal models of osteopetrosis (4-6). We found no difference in enzyme release between normal and mutant bone in the absence of stimulators, although 45Ca release was different. Many of the cells in the calvaria are not resorbing cells. These may release substantial amounts of lysosomal enzymes due to cell damage or normal turnover, leading to high control values which could obscure differences in the behavior of bone-resorbing cells.

Measurements of bone collagen synthesis in this study have indicated that mi mutant bone shows inhibition of labeled proline incorporation into collagenase-digestible protein in response to both PTH and OAF. This is considered to be due to inhibition of collagen synthesis rather than to changes in amino acid transport or precursor pool size, because an excess of cold proline was added to the culture medium. The results indicate that the bone-forming cells in mutant animals do have receptors for PTH and OAF and do not appear to have any functional defect in collagen synthesis or its regulation. The rate of proline incorporation was actually somewhat increased in mutant bone cultures. Walker has shown that proline incorporation is higher in gray-lethal mice in vivo, and that this incorporation is inhibited by injections of parathyroid extract, although to a lesser extent than in controls (21). A marked increase in bone formation has not been observed in most congenital osteopetrotic models, however, and changes in osteoblastic activity do not appear to be sufficient to produce the defects in remodeling and growth observed. The fact that osteoblastic activity was inhibited by PTH and OAF in mi mice while osteoclastic activity was not stimulated is consistent with the hypothesis that these cells are derived from different precursors, and that the defect is in the osteoclast or its progenitor cell.
Summary

The mechanism of congenital osteopetrosis in microphthalmic (mi) mice has been examined in bone organ cultures. Resorption was measured by the release of previously incorporated 45Ca in fetal long bones and newborn calvaria from mi mice and heterozygous or homozygous normal litter mates. Bones from mi mice showed a generalized resorption defect with decreased spontaneous or control resorption and failure to respond to parathyroid hormone (PTH), prostaglandin E2, 1,25-dihydroxyvitamin D3, vitamin A, or osteoclast activating factor (OAF) from human peripheral leukocytes or mouse spleen cells. Bones from heterozygotes showed a smaller response to PTH than bones from homozygous normals. Mutant bones failed to show an increase in lysosomal enzyme release in response to PTH or vitamin A, agents which increased release from bones of homozygous normals. Proline incorporation into collagenase-digestible protein was similar in cultures of normal and mutant bone and was inhibited by PTH and OAF. These results indicate that congenital osteopetrosis in mi mice is due to a generalized defect in the function and hormonal response of osteoclasts and suggest that this cell line is separate from the osteoblast cell line, which shows no impairment of hormonal response.
Free energies of molecular clusters determined by guided mechanical disassembly

The excess free energy of a molecular cluster is a key quantity in models of the nucleation of droplets from a metastable vapour phase; it is often viewed as the free energy arising from the presence of an interface between the two phases. We show how this quantity can be extracted from simulations of the mechanical disassembly of a cluster using guide particles in molecular dynamics. We disassemble clusters ranging in size from 5 to 27 argon-like Lennard-Jones atoms, thermalised at 60 K, and obtain excess free energies, by means of the Jarzynski equality, that are consistent with previous studies. We only simulate the cluster of interest, in contrast to approaches that require a series of comparisons to be made between clusters differing in size by one molecule. We discuss the advantages and disadvantages of the scheme and how it might be applied to more complex systems.

I. INTRODUCTION

The formation of droplets from a metastable vapour phase is a commonplace event in nature, but so far it has resisted quantitative analysis, despite repeated attention [1-4]. The phenomenon plays a role in atmospheric aerosol and cloud formation [5, 6], as well as in industrial processes [7, 8]. Theoretical analysis often begins with the Becker-Döring equations [9] that describe changes in the populations n_i of clusters of i molecules brought about by the processes of gain and loss of single molecules, or monomers. They take the form

dn_i/dt = β_{i-1} n_{i-1} − (α_i + β_i) n_i + α_{i+1} n_{i+1},   (1)

where β_i and α_i are growth and evaporation rates, respectively. The rate of nucleation J of droplets from a metastable vapour phase may then be expressed as [10]

J = Z β_{i*} n_1 exp(−[φ(i*) − φ(1)]/kT),   (2)

where k is the Boltzmann constant, T is the temperature, i* is the size of the critical cluster, defined to have equal probabilities, per unit time, of molecular gain or loss, and Z is the Zeldovich factor that accounts for the nonequilibrium nature of the kinetics [11]. We shall refer to φ(i) as the thermodynamic work of formation of a cluster of i particles (or i-cluster) starting from the metastable vapour phase. A range of nomenclature is used for this quantity in the nucleation theory literature: the work of formation was denoted by ε(i) in [10], and elsewhere the same, or a very similar, quantity has been labelled as ΔF, ΔG or ΔW, for example. We note that φ in the nucleation rate expression has both a kinetic and a thermodynamic interpretation [12]. The quantity φ(i) − φ(1) can be expressed in terms of ratios of cluster growth and evaporation rates:

φ(i) − φ(1) = −kT Σ_{j=2}^{i} ln(β_{j−1}/α_j),   (3)

but φ is also related to the grand potential Ω_s(i) = F(i) − iμ_s of an i-cluster at the chemical potential μ_s of the saturated vapour [10]:

φ(i) = Ω_s(i) − i kT ln S,   (4)

where F(i) is the Helmholtz free energy of the cluster, S = p_v/p_vs is the vapour supersaturation, and p_v and p_vs are the vapour pressure and saturated vapour pressure, respectively. The role of the grand potential in this context is to specify the equilibrium population of clusters of size i in a saturated vapour, namely n_i^s = exp(−Ω_s(i)/kT). The nucleation model is completed by representing the population of monomers as n_1 = S p_vs V/kT, where V is the system volume, on the assumption that the vapour pressure is dominated by the ideal partial pressure of single molecules. In classical nucleation theory (CNT), clusters are viewed as scaled-down versions of macroscopic droplets.
According to this approach, the difference φ(i) − φ(1) is replaced by φ(i) alone, with

φ(i) = γ A(i) − i kT ln S,   (5)

where γ is the surface tension of a planar interface between vapour and condensate, and A(i) is the surface area of a cluster represented as a sphere with a density equal to that of the bulk condensed phase. The work of formation is a combination of a free energy cost of forming the interface, and a free energy return proportional to the number of molecules in the cluster (or proportional to its volume, since the condensed phase density is taken to be a constant). The neglected φ(1) term might be represented by γA(1) − kT ln S, which leads to the internally consistent classical theory [13]. The cluster size dependence of the CNT work of formation is illustrated in Figure 1. It represents a thermodynamic barrier, with a maximum at the critical size, that limits the natural tendency for small molecular clusters to grow into large droplets when exposed to a supersaturated vapour.

CNT has been modified in several ways, for example by introducing a size-dependent surface tension [14] or by introducing compatibility with nonideal vapour properties [15, 16]. More fundamentally, the ratio of kinetic coefficients β_{j−1}/α_j might be evaluated using an underlying microscopic model for all clusters up to the critical size and beyond [12], and the work of formation determined through Eq. (3). It may be shown that

−kT ln(β_{j−1}/α_j) = F(j) − F(j−1) − μ_s − kT ln S,   (6)

which shifts attention to the free energy difference F(j) − F(j−1) associated with the addition of a molecule to a (j−1)-cluster. Computing these differences is the basis of an approach that has been used extensively in calculations of cluster free energies [17-23]. But nucleation is actually controlled by the properties of clusters near the critical size, and one drawback of computing the differences F(j) − F(j−1) is that the predicted nucleation rate could be susceptible to the accumulation of errors in evaluating such a sequence.

In this paper, we describe a computational method for directly obtaining the cluster free energy without the need to perform calculations for a sequence of smaller clusters. We consider the following representation of the work of formation of a cluster minus that of a monomer:

φ(i) − φ(1) = F_s(i) − (i − 1) kT ln S.   (7)

We shall refer to F_s(i) as the cluster excess free energy, though more accurately it is a difference between the excess free energies of an i-cluster and a monomer [10]. It is 'excess' in that it represents the free energy required to carve a cluster out of a bulk condensed phase, or equivalently to assemble it out of saturated vapour. It may be associated with the thermodynamic cost of creating an interface, which is why in CNT it is modelled by a surface term, and why we have given it a suffix s. Our approach centres on disassembling a cluster into its component molecules using guided molecular dynamics in order to calculate the cluster excess free energy directly. The method employs the Jarzynski equality [24-26] and we provide details in Section II, including a comparison with the related method of thermodynamic integration. Tests of the method, where we separate a dimer according to a variety of protocols, are described in Appendix A.
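For illustration, a minimal Python sketch of the CNT work of formation, Eq. (5), is given below; the surface tension, molecular volume, supersaturation and temperature are placeholder argon-like values chosen by us, not parameters taken from this study:

import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def cnt_work_of_formation(i, gamma, v_l, S, T):
    # phi(i) = gamma*A(i) - i*kT*ln(S), with A(i) the surface area of a
    # sphere of volume i*v_l (v_l = volume per molecule in the liquid).
    area = (36.0 * np.pi) ** (1.0 / 3.0) * (i * v_l) ** (2.0 / 3.0)
    return gamma * area - i * kB * T * np.log(S)

# Placeholder argon-like inputs: surface tension (N/m), molecular volume
# (m^3), supersaturation, temperature (K)
gamma, v_l, S, T = 0.012, 4.2e-29, 3.0, 60.0
i = np.arange(1, 200)
phi = cnt_work_of_formation(i, gamma, v_l, S, T)
print("critical size i* ~", i[np.argmax(phi)])

The maximum of the array locates the critical size i*, the top of the thermodynamic barrier sketched in Figure 1.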
Figure 2. Guided disassembly process for an i-cluster. The real particles (circles) are initially weakly tethered to the guide particles (diamonds). The latter drift apart and the tethers gradually tighten, leading to i independent, tethered particles upon completion of the process.

The disassembly of argon-like Lennard-Jones clusters is presented in Section III, and we compare our results with those obtained from Monte Carlo studies by Barrett and Knight [27] and Merikanto et al. [28, 29]. These studies gave consistent excess free energies, though they were not in agreement with experiments by Iland et al. [30]. We conclude with a discussion of the advantages and disadvantages of the approach compared with other treatments in Section IV.

A. Fundamentals of the method

We study the dynamical evolution of a cluster against a background of external manipulation. The cluster particles are harmonically tethered to a set of artificial 'guide particles', which lie initially at the origin but after a period of system equilibration are programmed to move apart, driving cluster disassembly. The strength of the tether forces is initially quite weak, in order to disturb the properties of the cluster as little as possible. Later, the tethers can be strengthened in order to guide the separation process more firmly, and to prevent the atoms from interacting with each other once the final guide particle positions have been reached. The mechanical work of the disassembly can then be related to the change in Helmholtz free energy.

The masses of the guide particles are taken to be very much greater than those of the cluster particles. This essentially fixes the trajectories of the guide particles in the molecular dynamics, in accordance with the velocities assigned to each at the beginning of the disassembly process. By choosing guide particle velocities, simulation times and a time-dependent tethering force, a range of cluster disassembly protocols can be explored. A simple illustration of the process is shown in Figure 2.

We shall consider clusters of argon-like atoms interacting through Lennard-Jones potentials, and so we shall refer to the cluster particles as atoms. We equilibrate this system under the influence of the tethers for a suitable period, the duration of which will depend upon the cluster size and the desired temperature. A further molecular dynamics simulation is performed, and from this trajectory we select initial configurations for cluster disassembly. In order that the configurations should represent a bound structure, we employ a Stillinger cluster condition [31] in the selection, allowing a separation of no more than 1.5 σ_ArAr between an atom and its nearest neighbour, where σ_ArAr is the usual Lennard-Jones range parameter. Such a Stillinger condition has been used in previous Monte Carlo approaches. The cluster definition is an important ingredient of a modelling strategy [2], and deserves careful consideration, but here we shall use this simple criterion for convenience.

The simulations were performed using the DL_POLY [32] molecular dynamics package, with modifications to the source code to implement the time-dependent harmonic tether potentials. We include a physical heat bath of helium-like Lennard-Jones atoms thermalised using a Nosé-Hoover thermostat [33]. We could instead have implemented a thermostat that acts on the cluster itself, but chose not to in order to achieve as natural a thermalisation as possible during the nonequilibrium processing.
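A minimal sketch of the Stillinger selection criterion described above is given below; it implements the nearest-neighbour reading of the condition (a full Stillinger cluster would additionally require connectivity, which could be checked with a graph traversal), and the value of σ_ArAr is an assumed placeholder, since Table III is not reproduced here:

import numpy as np

def is_stillinger_cluster(positions, sigma=3.4, factor=1.5):
    # Accept a configuration if every atom lies within factor*sigma of at
    # least one other atom (nearest-neighbour separation <= 1.5 sigma_ArAr).
    # positions: (N, 3) array of atomic coordinates in angstroms; sigma is
    # an assumed Lennard-Jones range parameter for argon.
    pos = np.asarray(positions, dtype=float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    return bool(np.all(d.min(axis=1) <= factor * sigma))

# Illustrative 3-atom configuration (angstroms)
print(is_stillinger_cluster([[0, 0, 0], [3.6, 0, 0], [0, 3.8, 0]]))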
B. Work performed on a system

Given an external control parameter λ in a Hamiltonian H(λ), the work W done on a system due to the evolution of λ over a finite time period may be written

W = ∫ (∂H/∂λ) (dλ/dt) dt.   (8)

For example, consider the Hamiltonian H_1 of a single guided atom of mass m:

H_1 = p²/(2m) + (κ(t)/2) (x(t) − X(t))²,   (9)

where p is the momentum, κ(t) is the time-dependent tethering force or spring constant, x(t) is the atomic position and X(t) is the guide position. For a set of guided atoms, each controlled by a Hamiltonian H containing terms of the form given in Eq. (9) supplemented by interparticle interactions, κ(t) and X(t) play the role of λ and the work W performed on the set is

W = Σ_j ∫_0^τ [ (1/2) (dκ/dt) (x_j(t) − X_j(t))² − κ(t) (x_j(t) − X_j(t)) · V_j(t) ] dt,   (10)

where τ is the length of the molecular dynamics simulation, and V_j(t) is the velocity of the guide particle associated with the j-th atom, defined as V_j(t) = dX_j(t)/dt. The first term in Eq. (10) arises from the time dependence of the spring constant, and the second term is simply the conventional force-times-distance expression. It should be noted that all tethers within the system are characterised by the same spring constant, although more elaborate protocols could be imagined.

C. The Jarzynski equality

If we were able to perform an extremely slow, quasistatic process, then the mechanical work done would be equal to the difference in Helmholtz free energy between the initial and final equilibrium states. However, quasistatic processes are unfeasible in finite-time molecular dynamics simulations and, according to the second law [34], the average of the work done (as a result of a time-dependent change in the Hamiltonian of the system), performed over many realisations of a nonquasistatic process (indicated by angled brackets), will always be an overestimate of the free energy change, ⟨W⟩ > ΔF, allowing us only to infer an upper limit to ΔF. However, the Jarzynski equality [24, 25]

⟨exp(−W/kT)⟩ = exp(−ΔF/kT)   (11)

allows us to do better. For this identity to hold, the system must begin in thermal equilibrium, but need not remain so as the Hamiltonian changes during the simulation.
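The following minimal Python sketch shows how Eqs. (10) and (11) might be applied in post-processing; the trajectory arrays stand in for quantities logged during the molecular dynamics run, and the demonstration work values are invented:

import numpy as np

def disassembly_work(t, kappa, x, X, V):
    # Discretised Eq. (10). t: (T,) sample times; kappa: (T,) spring
    # constant; x, X: (T, N, 3) atom and guide positions; V: (T, N, 3)
    # guide velocities. Returns the total work done on the tethered atoms.
    dt = np.gradient(t)
    dkappa = np.gradient(kappa, t)
    disp = x - X                                  # tether extensions
    sq = np.sum(disp**2, axis=-1)                 # |x_j - X_j|^2, shape (T, N)
    spring_term = 0.5 * dkappa[:, None] * sq      # (1/2) dkappa/dt |x-X|^2
    drag_term = -kappa[:, None] * np.sum(disp * V, axis=-1)  # -kappa (x-X).V
    return np.sum((spring_term + drag_term).sum(axis=1) * dt)

def jarzynski_free_energy(works, kT):
    # Eq. (11) rearranged: dF = -kT ln < exp(-W/kT) > over repeated runs.
    works = np.asarray(works)
    return -kT * np.log(np.mean(np.exp(-works / kT)))

# Invented work values from four hypothetical disassembly runs
print(jarzynski_free_energy([10.0, 12.0, 9.0, 11.5], kT=0.5))

Note that the estimate returned by jarzynski_free_energy never exceeds the mean of the supplied work values, in line with the second-law inequality quoted above.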
Exploiting the work done in a nonequilibrium process is a powerful strategy for calculating cluster surface free energies, and numerous computational studies [35-41] as well as experiments [42-47] have achieved this with the help of the Jarzynski equality. Systems studied include argon-like Lennard-Jones fluids, ion-charging in water, ideal gases confined to a piston, and one-dimensional polymer chains. Nevertheless, there are distinct aspects of this strategy for analysing the controlled disassembly of a cluster that need to be explored. The Jarzynski equality ought to recover the free energy difference regardless of the nature of the evolution between the initial and final Hamiltonians, but computed results might still depend upon the rate of the process as a consequence of the limited sampling of system trajectories in finite simulations [42]. We might expect 'slow' processes that gently pull a cluster apart to generate a narrower distribution of work compared with 'fast' processes that are violent and highly dissipative. A balance must therefore be struck between the poorer convergence of fast simulations and the demand for computational resources required by slow simulations. Furthermore, a consequence of the exponential averaging in the Jarzynski equality is that occasional values of work that are well below the average, arising from unusual trajectories, can sometimes distort the extracted free energy change. This is a consequence of insufficient sampling of the system trajectories, and so we need to give careful attention to the statistical errors. We have explored the outcomes of various guiding protocols, and the robustness of the Jarzynski equality in the face of limited statistics, in a test case of the separation of a dimer, for which the free energy change is easily calculable. These studies are described in Appendix A. We have used similar protocols to study the disassembly of larger clusters, which is described in Section III.

D. Comparison with thermodynamic integration

The method bears some similarity to thermodynamic integration, where the strength of the interparticle interactions is evolved over a sequence of equilibrium calculations in order to compare the system in question with another that has a known free energy [48-51]. The basic relationship ΔF = ∫ ⟨∂H(λ)/∂λ⟩ dλ is analogous to Eq. (8). The reference system for clusters might, for example, be a set of noninteracting particles held together through the retention of the constraining cluster definition. Or indeed the cluster definition could be changed progressively along with the interactions in order to reach a more convenient final state, perhaps noninteracting particles inside a sphere. However, there are some important differences. In our approach it is the tether potentials that change with time, not the interparticle interactions, and our reference system is a set of independent harmonic oscillators, not an ideal gas. Furthermore, we conduct the evolution by nonequilibrium molecular dynamics rather than by moving through a sequence of equilibrium ensembles, and we only need to impose a cluster definition when selecting the initial configurations, not throughout the evolution. An abrupt removal of the cluster definition constraint is acceptable in a nonequilibrium evolution, when the results are processed using the Jarzynski equation, but it would not be appropriate during a sequence of equilibrium calculations.

A. Preliminaries

We have investigated the disassembly of clusters consisting of 5, 10, 15, 20 and 27 argon-like atoms in order to obtain their excess free energies. Scaling up the guided molecular dynamics simulations from the test case of dimer separation is fairly straightforward. We perform simulations in a cubic cell with edge lengths of 100 Å, so that the initial clusters and the final disassembled configurations may be easily accommodated. We employ Lennard-Jones interaction potentials for each species (see Table III) and the helium temperature is set at 60 K in order to facilitate a comparison with the Monte Carlo studies by Barrett and Knight [27] and Merikanto et al. [28, 29], as well as the experimental studies of Iland et al. [30].

Figure 3. The difference in tether energy across a cluster configuration is given in terms of the maximum and minimum separations between an atom and its guide particle. The circles depict the argon atoms, while the diamond represents the position of all of the guides at the origin of the cell.

However, converting the free energy change associated with disassembly into an excess free energy requires some careful consideration of the statistical mechanics of tethered and free molecular clusters. We require the excess free energy of a cluster that is free to move anywhere inside a system volume, but our initial state is a cluster tethered to guide particles at the origin.
The free energy change that emerges from our calculations will correspond to the disassembly of a cluster whose centre of mass explores a region around the origin, and furthermore, one that possesses energy due to the tethers in addition to that of the physical interactions between the atoms. These matters are discussed in detail in Appendix B. The energetic perturbation of the cluster by the tethers can be reduced by choosing a small force constant. We take the view that the mean variation in tethering energy of an atom, as it explores different regions of the cluster during the equilibrated trajectory, should not exceed the thermal energy kT, or

½ κᵢ (x²_max − x²_min) < kT,

where x_max and x_min are, respectively, the maximum and minimum separations between an atom and its guide particle in a configuration (see Figure 3). This criterion may also be expressed as ξ = κᵢ (x²_max − x²_min)/(2kT) < 1. From the equilibrated molecular dynamics trajectory, we select, for disassembly, a set of 'valid' cluster configurations that satisfy the Stillinger cluster definition [31], but this can be quite difficult for the smaller clusters at 60 K. Tethering the atoms keeps them closer together and therefore more likely to form valid configurations. We therefore choose a tethering strength that satisfies the condition on ξ, but also helps to produce sufficient valid cluster configurations. The initial value of the tethering force constant was taken to be κᵢ = 0.01 kJ mol⁻¹ Å⁻², which gives ξ ≈ 0.6-0.9 for the five sizes of argon cluster studied. Table I shows the duration of the equilibrated cluster trajectory, the number of valid cluster configurations identified from candidates selected at intervals of 100 ps from the equilibrated molecular dynamics trajectory, and the ratio ξ characterising the suitability of the tethering force constant. Having obtained initial cluster configurations for the five sizes of cluster, the next stage is to disassemble them by a combination of guide particle motion and tether tightening. A range of separation times t_sep is explored, with the larger and more stable clusters expected to require longer disassembly processes in order to provide accurate estimates of the free energy change. As in the dimer calculations described in Appendix A, we use a tethering force constant that strengthens in time according to Eq. (A7), with a final value of κ_f = 0.05 kJ mol⁻¹ Å⁻². The terminal positions for the guide particles are chosen from a 3 × 3 × 3 grid with a spacing of 33.33 Å. The largest cluster considered contains 27 argon atoms, so after the process of disassembly the tethered atoms move around each point on this grid. For smaller systems, the same grid of final guide positions is adopted, but employing only as many points as are necessary for the cluster in question. With initial guide positions at the origin and final positions defined in this way, it is straightforward to calculate the necessary drift velocities of the guide particles for a given separation time, as sketched below. Applying the Jarzynski procedure to the distribution of performed work then gives us the estimated free energy change ∆F associated with the disassembly of a cluster. However, as mentioned previously, this free energy difference will only correspond to the disassembly of a tethered i-cluster, rather than of a freely translating, undistorted cluster. Furthermore, by necessity we obtain free energies of systems of distinguishable atoms in molecular dynamics, and we need to make an indistinguishability correction.
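As a concrete illustration of the final-grid and drift-velocity bookkeeping just described, the following Python fragment is a sketch of our own (the centring of the grid on the origin and the ordering of its points are assumptions, not taken from the paper):

```python
import itertools
import numpy as np

def guide_drift_velocities(n_atoms, t_sep, spacing=33.33):
    """Constant drift velocities carrying the guides from the origin to the
    first n_atoms points of a 3 x 3 x 3 grid over the separation time t_sep.
    Distances in Å and times in ps are assumed, giving velocities in Å/ps."""
    offsets = np.array([-1.0, 0.0, 1.0]) * spacing
    grid = np.array(list(itertools.product(offsets, repeat=3)))
    finals = grid[:n_atoms]      # smaller clusters use only as many points
    return finals / t_sep        # V_j = (X_j(final) - 0) / t_sep

# e.g. the guides of a 5-atom cluster disassembled over t_sep = 20 ns
velocities = guide_drift_velocities(5, t_sep=20_000.0)   # 20 ns in ps
```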
An analysis of the thermodynamics is required in order to extract the excess free energy of an i-cluster from the free energy of disassembly, and the details are given in Appendix B. It turns out that we can write

F_s(i) = −∆F + f_s² + f_s³ + f_s⁴ + f_s⁵, (13)

with

f_s² = −ikT ln(ρ_vs v_HO), (14)

f_s³ = kT ln(ρ_vs v_c), (15)

f_s⁴ = −(3iκᵢ/10)(3i v_l/(4π))^{2/3}, (16)

f_s⁵ = kT ln i!. (17)

In the first term the free energy of disassembly ∆F appears with a negative sign because it refers to the process of taking a cluster apart, while F_s is the free energy of interface formation. The f_s² term arises from relating the final state in the disassembly process, namely the separated harmonically bound particles, to the appropriate reference state of a saturated vapour. It represents the difference in free energy between the tethered particles, each effectively confined to a volume v_HO = (2πkT/κ_f)^{3/2}, and particles in the saturated vapour phase with density ρ_vs and volume per particle 1/ρ_vs. The f_s³ term is the entropy penalty associated with the initial tethering: the centre of mass of the cluster is effectively confined to a volume v_c = (iκᵢ/(2πkT))^{−3/2} and needs to be referred to a situation where it is allowed, like a particle in saturated vapour, to explore a volume 1/ρ_vs. The f_s⁴ term is an approximate expression for the perturbation in the cluster energy due to the initial presence of the tethers, where v_l = 1/ρ_l is the volume per particle in the condensed phase. Finally, f_s⁵ converts calculations derived from molecular dynamics with distinguishable particles into results relevant to a system of indistinguishable particles.

B. Results and discussion

A typical example of the work W(t) performed over a disassembly trajectory of duration 20 ns for a 27-atom cluster is shown in Figure 4. The gradual rise in the work performed prior to about 5 ns represents an accumulation of tethering energy as the guide particles move away from their initial positions at the origin. After this time, atoms begin to leave the cluster, and less work is needed to move the corresponding guides. After about 7 ns, the work rate reduces significantly as the cluster disintegrates and the guide particles move towards their final positions. Visual representations of the disassembly process (see Figure 5) provide further insight into the manner in which the clusters are pulled apart. The onset of cluster disassembly is signalled by the loss of one or two atoms from the cluster, perhaps only temporarily. Soon afterwards the cluster breaks into several smaller clusters, which eventually disintegrate into fragments or single atoms. It is rare to see a complete and sudden disintegration of a cluster, where all the constituent atoms disassemble together within a short space of time. Figures 6 and 7 show distributions of the work performed in disassembling the 5-cluster and the 27-cluster, along with estimates of the free energy change, for separation times between 0.5 ns and 20 ns. As expected, the work distributions are broader for the processes that are most rapid (smallest t_sep) and hence least quasistatic in nature. Conversely, the work distributions become narrower, and lead to free energy changes that presumably provide the most accurate estimates of the true free energy change, as the rate of separation is reduced. The free energy change ∆F for the disassembly of each size of cluster at the slowest rate studied is shown in Table II, along with the other contributions to the excess free energy F_s. We refer to a molecular dynamics study by Baidakov et al.
[53] to provide values of the saturated vapour density ρ_vs and liquid density ρ_l = 1/v_l of the argon-like Lennard-Jones fluid at a temperature of 60.31 K. Figure 8 shows our excess free energies F_s(i) as a function of cluster size i. Statistical errors propagated from uncertainties in the free energy change ∆F are similar to the size of the symbols. We also include corresponding results from the Monte Carlo studies by Barrett and Knight [27] and Merikanto et al. [28,29].

Figure 5. Illustration of the disassembly of a 27-atom argon cluster, with green spheres representing the argon atoms and lighter spheres the guide particles (helium atoms are not shown). In frame 1, all the guides lie at the origin of the cell. By frame 2, the guides have drifted far enough apart for a single argon atom to escape temporarily from the cluster before rejoining it in frame 3. In frame 4, several atoms have escaped, but remain in close proximity to the reduced cluster. A threshold is reached in frame 5, where many argon atoms break free to leave a fragment of about five atoms that also soon disintegrates, as shown in frame 6. Shortly after, all of the atoms fall into motion about their partner guide particles, which continue along steady paths away from one another (frames 7 and 8). The reader is encouraged to view movies of the disassembly provided in the Supplemental Material [52].

Also shown in Figure 8 is the prediction from internally consistent classical nucleation theory (ICCT), based on the surface tension γ of the planar liquid-vapour interface, again taken from Baidakov et al. [53], and constructed such that F_s^ICCT(1) = 0. It is clear from Figure 8 that the calculations presented in this study are consistent with the previous Monte Carlo results. This is satisfactory support for the disassembly approach that we have developed. We note that all three are reasonably well represented by the ICCT model, which is somewhat surprising. Note that the construction of a traditional plot of the nucleation barrier such as Figure 1 would require us to subtract a term ikT ln S from the excess free energies in Figure 8. Inserting a supersaturation of 30 would then yield a critical size of about 20, for example.

Figure 8. Excess free energies compared with the results of Barrett and Knight [27] at 59.88 K (solid line) and Merikanto et al. [28,29] at 60.18 K (triangles). Also shown is the prediction from internally consistent classical nucleation theory for a temperature of 60.31 K (dashed line).

IV. CONCLUSIONS

We have developed a method of guided cluster disassembly in molecular dynamics, capable of extracting the excess free energy associated with the formation of a molecular cluster from the saturated vapour phase. This property is often regarded as a surface term, and it plays a central role in kinetic and thermodynamic models of the process of droplet nucleation. After exploring some aspects of the method by separating a dimer, the technique was applied to the controlled disassembly of Lennard-Jones argon clusters between 5 and 27 atoms in size. The extracted free energy of disassembly has been related to the excess free energy of the cluster through an analysis of the statistical mechanics of free and tethered clusters. Our calculations for clusters of various sizes are consistent with previous studies by Barrett and Knight [27] and Merikanto et al. [28,29], both of which require the evaluation of a sequence of free energy differences between monomer and dimer, dimer and trimer, etc.
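The barrier construction mentioned above, φ(i) = F_s(i) − ikT ln S, is easily illustrated numerically. In the sketch below (our own code, not the authors'), the ICCT-style curve with dimensionless surface energy θ = 13.8 is an assumed stand-in for F_s(i)/kT, chosen only so that the example lands near a critical size of 20 at S = 30:

```python
import numpy as np

def critical_size(fs_over_kT, lnS, i_max=200):
    """Critical cluster size maximising phi(i)/kT = Fs(i)/kT - i*lnS."""
    i = np.arange(1, i_max + 1)
    phi = fs_over_kT(i) - i * lnS
    return i[np.argmax(phi)], phi.max()

theta = 13.8   # assumed dimensionless surface energy, not from the paper
icct = lambda i: theta * (i ** (2.0 / 3.0) - 1.0)   # Fs/kT with Fs(1) = 0
print(critical_size(icct, np.log(30.0)))   # roughly (20, ...) for this theta
```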
A Lennard-Jones microscopic model of argon, within the standard kinetic and thermodynamic framework of nucleation theory, cannot account for the experimental argon nucleation data of Iland et al. [30], but we do not speculate here about this disparity. The approach should be contrasted with methods of free energy estimation based on thermodynamic integration. In those methods, the strength of the interparticle interactions is evolved over a sequence of equilibrium calculations. Our approach also involves the evolution of a Hamiltonian, but it is the tether potentials that change with time, not the interparticle interactions. Furthermore, we evolve by nonequilibrium molecular dynamics rather than studying a sequence of equilibrium ensembles, and we are only required to apply a cluster definition when selecting the initial configurations, not during the evolution. We believe that our process of mechanical disassembly offers an intuitive understanding of the meaning of the work of formation that plays such a central role in nucleation theory. We suggest that a direct evaluation of this quantity is preferable to an approach based on summing the free energy changes associated with the addition of single molecules to a cluster, on the grounds that we avoid the possible compounding of statistical errors. The computational costs of our current study of argon clusters have been higher than those of more traditional methods such as grand canonical Monte Carlo [29], for the same level of accuracy, largely because of our exploration of different protocols and our use of an explicit helium thermostat, but these can be reduced with further development. A particularly powerful variant of the disassembly scheme is to separate a cluster into two subclusters under similar mechanical guidance, in order to relate the distribution of work performed to a free energy of 'mitosis', essentially a difference in excess free energies between the initial cluster and the two final subclusters. Such comparisons would be unfeasible to perform in Monte Carlo. The calculations are not onerous, and an evaluation of the excess free energy of clusters of up to 128 water molecules is to be reported [54]. Furthermore, the explicit thermostat can be replaced by an implicit scheme. With such tools, and guided by the experience developed in the current investigation of argon, we intend to carry out studies of clusters of water, acids and organic molecules, species that are particularly relevant to the process of aerosol nucleation in the atmosphere.

ACKNOWLEDGMENTS

Hoi Yu Tang was funded by a PhD studentship provided by the UK Engineering and Physical Sciences Research Council. We thank Gabriel Lau and George Jackson for important comments.

Appendix A: Argon dimer separation

We test the feasibility of the approach using two protocols of controlled dimer separation. First, the guide particles are made to drift apart with the tether strengths held constant, and then we allow the tethers to tighten over the course of the process. We determine the manner of dimer separation that leads to an accurate estimate of the free energy change. We start by evaluating the free energy of a tethered dimer of argon-like atoms analytically. Particles are distinguishable in molecular dynamics simulations since they carry labels, so we take this into account in the analysis. The initial Hamiltonian of the dimer system is

H = p₁²/(2m) + p₂²/(2m) + ½ κ(t) (x₁ − X₁)² + ½ κ(t) (x₂ − X₂)² + Φ(|x₁ − x₂|), (A1)

where m is the argon mass, and Φ(|x₁ − x₂|) is a pairwise interaction potential.
When the guide particles both lie at the origin (X₁ = X₂ = 0), the initial partition function is

Z_dimer = (1/h⁶) ∫ exp(−H/kT) dx₁ dx₂ dp₁ dp₂, (A2)

noting that there is no correction factor of one half since the atoms are distinguishable. Substituting r = x₁ − x₂ and R = x₁ + x₂, the partition function Z_dimer becomes

Z_dimer = (1/8) λ_th⁻⁶ (4πkT/κᵢ)^{3/2} ∫₀^{r_c} exp(−[κᵢr²/4 + Φ(r)]/kT) 4πr² dr, (A3)

where λ_th = h/(2πmkT)^{1/2} is the thermal de Broglie wavelength. We have imposed an upper limit r_c on the separation between the two atoms, corresponding to a definition of what we mean by a dimer. For the final state in which the two argon atoms are tethered to respective guide particles that are far apart, the Hamiltonian is simply that in Eq. (A1) without the interaction term, and with a final tether strength κ_f. The corresponding final partition function is

Z_final = λ_th⁻⁶ (2πkT/κ_f)³. (A4)

The free energy change in separating a dimer of tethered atoms can therefore be expressed as

∆F = −kT ln(Z_final/Z_dimer), (A5)

which can be evaluated numerically. The parameter r_c is the Stillinger radius used to identify a dimer configuration in the equilibrated molecular dynamics simulation, to which we now turn. We place two argon-like particles within a periodic cell with edge length 50 Å, each tethered to a guide particle through a harmonic interaction ½ κ(t) r², where r is the separation between the argon atom and its guide, and κ(t) is the tethering force constant. The argon atoms are thermalised through interaction with a gas of 100 helium-like atoms kept at constant temperature using a Nosé-Hoover thermostat. Conventional masses of 39.85 and 4.003 amu for the argon and helium-like particles are adopted, while the guide particles are assigned a vastly greater mass of 4 × 10¹² amu. Interaction potentials are specified by the Lennard-Jones form

Φ_jk(r) = 4 ε_jk [(σ_jk/r)¹² − (σ_jk/r)⁶],

with parameters shown in Table III, though it should be noted that only the repulsive part of the interaction between argon and helium is employed in order to prevent any binding between the two. Simulations are performed at a temperature of 15 K such that dimers are long-lived and a sufficient number of configurations satisfying the separation criterion r ≤ r_c = 1.5 σ_ArAr can be obtained from the equilibrated trajectory. With a constant tethering force constant of 0.05 kJ mol⁻¹ Å⁻², we generate an equilibrated molecular dynamics trajectory of duration 100 ns and choose 10³ dimer configurations for use as starting points for the separation process.

Table III. Parameters for the Lennard-Jones potentials, where j and k are the atomic labels, ε_jk is the depth of the potential well, and σ_jk is the range parameter [55].
j    k    ε_jk / kJ mol⁻¹    σ_jk / Å
Ar   Ar   0.995581           3.405
He   He   0.084311           2.600
Ar   He   0.289721           3.000

Figure 9. Illustration of the dimer separation process. Both guide particles (diamonds) are initially at the origin, but one is made to drift towards a corner of the simulation cell.

a. Guiding at constant tether strength

One of the guide particles drifts from the origin to a corner of the cubic simulation cell over a separation time t_sep while the other remains stationary (see Figure 9). We choose t_sep to be 1, 2 or 4 ns, and the velocity of the moving guide particle (labelled 1) is given by

V₁ = X_f/t_sep, (A6)

where X_f is the position vector of the target corner of the cell. For initial and final tethering force constants of 0.05 kJ mol⁻¹ Å⁻², the expected free energy change in separating the dimer is 5.716 kT according to Eq. (A5). Distributions of the work done for each rate of dimer separation are shown in Figure 10, and the corresponding estimates of the free energy change obtained from the Jarzynski equality are compared with the expected value in the lower part of Figure 11.
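Eq. (A5) can be evaluated with a few lines of Python. The sketch below is ours (the common λ_th factors cancel in the ratio and are omitted), and with the Ar-Ar parameters of Table III at 15 K it should land close to the 5.716 kT value quoted above:

```python
import numpy as np
from scipy.integrate import quad

kB = 0.0083145                      # Boltzmann constant, kJ mol^-1 K^-1
T = 15.0; kT = kB * T
eps, sigma = 0.995581, 3.405        # Ar-Ar well depth and range (Table III)
kappa_i = kappa_f = 0.05            # tether force constants, kJ mol^-1 Å^-2
r_c = 1.5 * sigma                   # Stillinger radius defining a dimer

def phi_lj(r):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Radial configurational integral appearing in Eq. (A3)
f = lambda r: np.exp(-(0.25 * kappa_i * r**2 + phi_lj(r)) / kT) * 4*np.pi*r**2
I_r, _ = quad(f, 2.0, r_c)          # integrand is negligible below 2 Å

Z_i = 0.125 * (4*np.pi*kT / kappa_i) ** 1.5 * I_r   # Eq. (A3), sans lambda_th
Z_f = (2*np.pi*kT / kappa_f) ** 3                   # Eq. (A4), sans lambda_th
print(-np.log(Z_f / Z_i))           # Delta F / kT from Eq. (A5)
```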
A longer separation time leads to a better estimate of the free energy change, since the process is then closer to being quasistatic.

b. Guiding with tether tightening

We now elaborate the process by tightening the tethers during guide drift according to

κ(t) = κᵢ for t < tᵢ; κ(t) = κᵢ + (κ_f − κᵢ)(t − tᵢ)/(t_s − tᵢ) for tᵢ ≤ t ≤ t_s; κ(t) = κ_f for t > t_s, (A7)

where tᵢ is the time at which the force constant begins to change, and t_s is the time at which it reaches its final value. Once again starting with dimer configurations and an initial tethering force constant of 0.05 kJ mol⁻¹ Å⁻² at 15 K, three dimer separation times are investigated, during which the force constant rises by a factor of two. The times tᵢ and t_s are specified as 20% and 80% of the total separation time. The expected free energy change associated with dimer separation is 7.795 kT according to Eq. (A5). It can be seen from the upper part of Figure 11 that all three separation rates give acceptable estimates of the free energy change. Furthermore, the greater compatibility between the distributions of the work performed at different separation rates shown in Figure 12, compared with those in the simulations with constant tether strength, suggests that a protocol where the tethers tighten while the guide particles drift apart is more effective. Intuitively, the separation is then conducted more firmly, and with less dissipation.

Appendix B: Analysis of cluster free energies

Free and tethered clusters

The canonical partition function Z_F = exp(−F_F/kT) for an untethered, or 'free', cluster of i indistinguishable particles governed by a Hamiltonian H composed of kinetic energy terms and pairwise interactions is given by

Z_F = (1/(i! h^{3i})) ∫ exp(−H({x_k}, {p_k})/kT) Π_j dx_j dp_j, (B1)

where F_F is the associated free energy. For a cluster tethered to the origin, the Hamiltonian will include an additional set of harmonic potentials, such that the partition function is

Z_T = (1/(i! h^{3i})) ∫ exp(−[H({x_k}, {p_k}) + ½ κᵢ Σ_j x_j²]/kT) Π_j dx_j dp_j, (B2)

where F_T is the free energy of the tethered cluster, and κᵢ is the initial tethering force constant. We insert a factor of unity in the form 1 = ∫ δ(x_c − i⁻¹ Σ_j x_j) dx_c into Eqs. (B1) and (B2), and transform to particle coordinates with respect to the cluster centre of mass x_c, namely x'_j = x_j − x_c. The partition function for a free cluster becomes

Z_F = V Z_F^c, (B3)

where V is the system volume and Z_F^c is the partition function for a cluster whose centre of mass is fixed at the origin. It should be noted that since the Hamiltonian contains pairwise interactions, it may be rewritten as H({x_k}) = H({x'_k}) after the change of variables. Similarly, the partition function for a tethered cluster can be rewritten as

Z_T = (1/(i! h^{3i})) ∫ dx_c ∫ exp(−[H({x'_k}) + ½ κᵢ Σ_j (x'_j + x_c)²]/kT) δ(i⁻¹ Σ_j x'_j) Π_j dx'_j dp_j. (B4)

The second term in the exponent of Eq. (B4) may be simplified using the constraint Σ_{j=1}^{i} x'_j = 0, and it follows that

Z_T = (2πkT/(iκᵢ))^{3/2} Z_T^c, (B5)

where Z_T^c is the partition function of a cluster constrained to have its centre of mass at the origin as well as having its constituent particles tethered to the origin by a harmonic potential. Next, we employ the Gibbs-Bogoliubov approach [56,57], based on a free energy of the form

exp(−F_T^c/kT) ∝ ∫ exp(−[H₀(Γ) + U(Γ)]/kT) dΓ, (B6)

where Γ represents the configuration of a system, and dΓ is proportional to the phase space volume element Π_j dx'_j dp_j. In the context of the tethered cluster described by Eq. (B5), U represents the term ½ κᵢ Σ_{j=1}^{i} x'_j², while H₀ is the untethered Hamiltonian H({x'_k}) modified by the delta function constraint. F_T^c is therefore the free energy of a tethered cluster with its centre of mass further constrained to lie at the origin, and is equal to −kT ln Z_T^c. A similar relationship exists between F_F^c, the free energy of an untethered cluster with fixed centre of mass, and Z_F^c.
The free energies F_F^c and F_T^c may be related through

F_T^c = F_F^c − kT ln ⟨exp(−U/kT)⟩₀, (B7)

where angle brackets represent an average in the statistical ensemble corresponding to H₀. For small ⟨U⟩₀/kT, we can write ⟨exp(−U/kT)⟩₀ ≃ exp(−⟨U⟩₀/kT), and hence

F_T^c ≃ F_F^c + ⟨U⟩₀. (B8)

U is a sum of single-particle harmonic potentials of the form U_HO(x'_k) = ½ κᵢ x'_k², so Eq. (B8) can be written as

F_T^c ≃ F_F^c + i ⟨U_HO⟩₀. (B9)

We next introduce the spatial density profile of a single particle (labelled k without loss of generality) in a cluster constrained to have its centre of mass at the origin but not tethered, namely

ρ₀(y) = ⟨δ(y − x'_k)⟩₀, (B10)

with ∫ ρ₀(y) dy = 1. We can write

⟨U_HO⟩₀ = ½ κᵢ ∫ ρ₀(y) y² dy, (B11)

which represents the average tethering energy of a particle that is spatially distributed according to the density ρ₀(y). The condition that the tether potential makes a relatively small contribution to the mean energy of the cluster is ⟨U_HO⟩₀ = ½ κᵢ ∫ ρ₀(y) y² dy ≪ kT, in which case the approximations involved in the Gibbs-Bogoliubov approach are acceptable and the initial tethering potential is weak enough that the cluster is only slightly distorted in comparison with a free cluster. Thus we write

F_T^c ≃ F_F^c + ½ i κᵢ ∫ ρ₀(y) y² dy. (B12)

Eq. (B5) can then be written as

Z_T^c = Z_F^c exp(−i ∫ ρ₀(y) κᵢ y² dy / 2kT), (B13)

such that the relationship between the partition function of a tethered cluster and the partition function of a free cluster with a constrained centre of mass, Z_F^c, is

Z_T = (2πkT/(iκᵢ))^{3/2} Z_F^c exp(−i ∫ ρ₀(y) κᵢ y² dy / 2kT). (B14)

Combining Eqs. (B3) and (B14) then gives

Z_T = (Z_F/V) (1/ρ_c(0)) exp(−i ∫ ρ₀(y) κᵢ y² dy / 2kT), (B15)

or, in terms of free energies,

F_F − F_T = −kT ln(V ρ_c(0)) − ½ i κᵢ ∫ ρ₀(y) y² dy, (B16)

where (iκᵢ/2πkT)^{3/2} has been replaced by a function ρ_c(0), representing the probability density that the centre of mass of the tethered cluster lies at the origin. This equivalence can be demonstrated by deriving the distribution of the cluster centre of mass, through considering a single particle with mass M = im and coordinates x_c and p_c residing in a potential iκᵢx_c²/2. The positional probability density at z is

ρ_c(z) = (iκᵢ/(2πkT))^{3/2} exp(−iκᵢz²/(2kT)), (B17)

such that ρ_c(0) = (iκᵢ/(2πkT))^{3/2}. The purpose of the substitution is that the first term on the right hand side in Eq. (B16) may be interpreted as two competing contributions to the free energy difference,

−kT ln(V ρ_c(0)) = kT ln(1/ρ_c(0)) − kT ln V, (B18)

such that the first term corresponds to the removal of the entropic contribution to the free energy associated with the freedom of motion of the cluster centre of mass within a constrained volume 1/ρ_c(0), brought about by the tethers, and the second term represents the addition of entropic free energy corresponding to the freedom of motion in volume V. Finally, the second term in Eq. (B16) is an estimate of the removal of tethering potential energy when relating a tethered to a free cluster.

Excess free energy from the free energy of disassembly

We now establish the relationship between the free energy of a free cluster and the cluster work of formation, defined as φ(i) = Ω_s(i) − ikT ln S, where Ω_s(i) = F_F(i) − iµ_s is the grand potential of a free cluster of i particles in an environment at chemical potential µ_s for which the bulk condensed and vapour phases coexist. The excess free energy (difference) of the cluster is therefore

F_s(i) = F_F(i) − iµ_s + kT ln(ρ_vs V), (B19)

having used Eq. (7). Assuming the vapour is ideal, the coexistence chemical potential µ_s and the monomer Helmholtz free energy F(1) are simply µ_s = kT ln(ρ_vs Λ) and F(1) = −kT ln(V/Λ), respectively, where ρ_vs is the particle density in a saturated vapour and Λ = λ_th³ with λ_th = h/(2πmkT)^{1/2}. The excess free energy F_s(i) can now be expressed as

F_s(i) = F_F(i) − ikT ln(ρ_vs Λ) + kT ln(ρ_vs V). (B20)

Now we consider the free energy change associated with the process of cluster disassembly.
The difference in free energy between the separated constituent particles, each tethered to a guide particle, and a tethered cluster is δF = F_f − F_T, where F_f = −3ikT ln(kT/ℏω_f) is the free energy of i harmonic oscillators in three dimensions, and the angular frequency ω_f = (κ_f/m)^{1/2} of the oscillators is related to the final value of the tethering force constant κ_f. It should be recognised, however, that the quantity δF is not the free energy difference extracted from the molecular dynamics simulations of cluster disassembly. Molecular dynamics simulations always involve distinguishable particles, since they are assigned labels, and δF is a difference between the free energy of i indistinguishable particles in a cluster and i particles that are distinguishable through having been physically separated to regions around their final tether points. The free energy difference that is extracted in our procedure is actually ∆F = F_f − F_T^dist, where the superscript in F_T^dist reminds us that it is the free energy of a tethered cluster of distinguishable particles. But we can relate the partition function of such a cluster to the partition function Z_T for indistinguishable particles by the usual classical procedure, namely Z_T^dist = i! Z_T, and since F_T^dist = −kT ln Z_T^dist, it follows that

F_T^dist = F_T − kT ln i!, (B21)

such that F_T = F_f − δF = F_f − ∆F + kT ln i!. Substituting into Eq. (B20) then gives

F_s(i) = −∆F − ikT ln(ρ_vs v_HO) + kT ln(ρ_vs/ρ_c(0)) − ½ i κᵢ ∫ ρ₀(y) y² dy + kT ln i!, (B22)

where v_HO = (2πkT/κ_f)^{3/2} is a volume scale associated with the confinement of particles within the final harmonic tether potentials. It should be noted that the excess free energy F_s does not depend upon the Planck constant h, nor on the system volume V, as is to be expected. In order to complete our specification of F_s(i) in terms of ∆F and material properties, we need to estimate the final integral in Eq. (B22). We write ∫ ρ₀(y) y² dy = ∫₀^∞ ρ₀(r) 4πr⁴ dr, where r is the distance from the cluster centre of mass, and recall that ρ₀(r) is the single-particle density profile in an untethered cluster with fixed centre of mass. As an approximation, we imagine the cluster to be spherical with a constant particle density, such that ρ₀(r) ≃ ρ_l/i for 0 < r < r_max, where ρ_l is the particle density in the condensed phase, and r_max is the radius of the cluster. Since the probability density ρ₀(r) is normalised, we have ∫₀^{r_max} (ρ_l/i) 4πr² dr = 1, such that r_max = (3i/4πρ_l)^{1/3} and so

∫ ρ₀(y) y² dy = (3/5) r_max² = (3/5) (3i v_l/(4π))^{2/3},

where v_l = 1/ρ_l is the volume per particle in the condensed phase. Substituting this into Eq. (B22) gives Eqs. (13)-(17) in the main text.
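The uniform-sphere estimate just derived is easy to check numerically. The short sketch below is ours (the condensed-phase volume per particle v_l is an illustrative number, not a value taken from the paper); it also evaluates the magnitude of the resulting f_s⁴-type tether perturbation for a 27-atom cluster:

```python
import numpy as np
from scipy.integrate import quad

def mean_sq_radius_uniform(i, v_l):
    """<y^2> for a uniform sphere of i particles with volume v_l each:
    the integral of rho0(r) 4 pi r^4 dr from 0 to r_max equals (3/5) r_max^2."""
    r_max = (3.0 * i * v_l / (4.0 * np.pi)) ** (1.0 / 3.0)
    rho0 = 3.0 / (4.0 * np.pi * r_max**3)               # = rho_l / i
    val, _ = quad(lambda r: rho0 * 4.0 * np.pi * r**4, 0.0, r_max)
    assert np.isclose(val, 0.6 * r_max**2)              # closed-form check
    return val

kappa_i, v_l, i = 0.01, 42.0, 27     # kJ mol^-1 Å^-2, Å^3 (assumed), atoms
print(0.5 * i * kappa_i * mean_sq_radius_uniform(i, v_l))   # |f_s^4| in kJ/mol
```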
2015-05-22T16:19:02.000Z
2015-01-30T00:00:00.000
{ "year": 2015, "sha1": "6f8353c3bbf3e771bfb618ca6fc39e1eb01b0ea6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1501.07793", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6f8353c3bbf3e771bfb618ca6fc39e1eb01b0ea6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine", "Physics" ] }
59018638
pes2o/s2orc
v3-fos-license
Factors associated with psychological distress among members of HIV discordant couples in western Kenya: the role of adverse childhood experiences

Cite as: Ondenge K, Khalil G, Odero I, Ford DC, Thompson WW, Awuonda E, Omoro T, Gust DA. Factors associated with psychological distress among members of HIV discordant couples in western Kenya: the role of adverse childhood experiences. J Glob Health Rep 2018; 2: e2018018.

Background The relationship between measures of psychological distress and factors such as adverse childhood experiences (ACE) and HIV infection has not been well studied among members of HIV discordant couples living in Kenya.

Methods A structured questionnaire, which included the non-specific psychological distress Kessler 6 (NPD K6) scale, was administered to members of HIV discordant couples using a computer-assisted personal interview in two Kenyan communities.

Results Among the 202 participants who completed the survey (52% women and 48% men), the median NPD K6 score was equal for men and women (median = 4; maximum = 24). Participants did not report high levels of distress. For women, factors associated with a higher or more distressed NPD K6 score were a higher ACE score, religious affiliation, and perception of not being treated with respect by family members and partners. For men, factors associated with a higher NPD K6 score were HIV-positive status and higher ACE score.

Conclusions It is important to assess NPD and ACE in members of discordant couples, and if needed, assist them in identifying psychological counselling and support activities. By being better equipped to deal with the stressors associated with not only HIV, but also the discrimination and stigma associated with the disease, members of discordant couples may be more inclined to consider the importance of treatment and prevention.

The relationships among biological mechanisms, stress, and physical and mental health outcomes are still not well understood. In early work assessing the relationships of the mind and body connection, research focused on assessing the "fight or flight" response both in humans and in animal models (1-4). Previous research has demonstrated that the fight-or-flight stress response is often biologically adaptive in terms of responses to brief stressful situations. The long-term effects of stressful life events on health are often more severe if they are associated with trauma experienced during early childhood or during adolescence. This relationship has been explained by intense or chronic stress (toxic stress) (21), and physiologic adaptations to stress (allostatic load) (22). The consequences of toxic stress through over-activation of stress hormones are numerous, including damage to the cardiovascular and nervous systems (23). In particular, a weakened nervous system can compromise the functioning of areas of the brain responsible for planning, problem solving, and self-regulation of behavior and management of emotions. In cases of prolonged childhood neglect, cognitive impairments, such as attention problems, language deficits, academic difficulties, withdrawn behavior, and problems with peer interaction, can be more severe than physical trauma experiences (24).
A growing body of research demonstrating the influence that childhood sexual, physical, and emotional trauma have on subsequent health problems has emerged in recent years. One of the largest epidemiological studies conducted in the U.S., the Centers for Disease Control and Prevention (CDC) - Kaiser - Adverse Childhood Experience (ACE) Study, assessed the association of various forms of childhood abuse and household dysfunction with subsequent health-related outcomes occurring from adolescence through adulthood. Findings from this study supported a significant relationship between the ACE score (based on a series of questions) and negative health and social after-effects such as smoking (25), unintended pregnancies (26), sexually transmitted diseases (27), male involvement in teen pregnancy (28), adult alcohol problems (29,30), and attempted suicides (31). Based on the results of these studies, the U.S. CDC designed a survey, referred to as the CDC ACE questionnaire, using a subset of the items from the previous study (6). The CDC has also confirmed these survey instruments in terms of reliability, validity, and measurement characteristics. Studies have additionally shown an increased prevalence of psychiatric disorders among individuals who experienced abuse and trauma in childhood compared to same-age peers who did not have these experiences (32,33). A study among members of a primary care health maintenance organization found that childhood emotional abuse increased the risk for lifetime depressive disorders (34). Research studies in African settings are fewer; however, a large study of persons in five African countries showed a significant dose-response relationship between physical and sexual violence and risk behaviors such as smoking, alcohol abuse, unsafe sex, and suicidal thoughts (35).

To our knowledge, no studies assessing both the influence of psychological distress and childhood trauma among adult members of HIV discordant couples have been undertaken. Thus, the purpose of the present study was to 1) compare participants' NPD K6 scores and ACE scores by gender and 2) determine the factors associated with the NPD K6 score, by gender, among members of HIV discordant couples living in the Asembo and Karemo regions of Siaya County, Kenya.

Participants

A survey study occurred during 2016 in Asembo and Karemo, Kenya. A purposive convenience sampling method was used to recruit study participants. First, former participants from the HIV Prevention Trials Network (HPTN) 052 trial (a clinical trial whose primary objective was to compare the rates of HIV infection among partners of HIV-infected participants (36)) were contacted directly through the community staff who conducted follow-up and referral activities for the clinical trial. HPTN 052 participants had provided written consent to be contacted for future studies. Second, persons in a discordant relationship who had not participated in the HPTN 052 clinical trial were recruited from the U.S.
President's Emergency Plan for AIDS Relief (PEPFAR)-funded HIV care clinic support groups. After we provided details on the survey to the HIV care clinic staff, they helped facilitate recruitment by connecting our research staff with support group leaders. The support group leaders were then asked to arrange a time and place to meet the group members. At the meeting with the support group members, the research staff informed them about the study and asked those interested in participating if they would answer two screening questions (age and residence). If eligible, the research staff arranged interviews.

Inclusion and exclusion criteria

To be eligible for the study, participants had to be ≥18 years of age, in an HIV discordant relationship, a resident of Asembo or Karemo, and willing to give informed consent. It is important to note that the research staff were not always able to interview both members of a discordant couple; interviews were conducted with men and women in a discordant relationship irrespective of whether their corresponding partners agreed to be interviewed. Data on the identity of the participant's partner were not collected. Participants were stratified on gender to obtain an approximate 1:1 ratio.

Survey

A structured questionnaire was administered to participants using computer-assisted personal interviewing (CAPI) at a study-designated community location convenient for the participant (eg, health facilities, schools, churches, the participant's home, under a tree). Interviewers were trained to be attentive to maintaining privacy during interviews.

NPD K6

The NPD K6 scale was used to assess each participant's level of psychological distress (13,37). This instrument consists of six items in which participants were asked to rate how they have been feeling during the past 30 days (nervous, hopeless, restless or fidgety, so depressed that nothing could cheer you up, that everything was an effort, worthless) using the following five-point Likert scale: All of the time, Most of the time, Some of the time, A little of the time, and None of the time. The scale was assigned a numeric value ranging from 0-4, with 0 corresponding to None of the time and 4 indicating All of the time. Responses to each item were then summed to create a composite measure of NPD (highest possible score was 24). A score of ≥5 has been defined as moderate mental distress (38). In addition to this overall score, the NPD K6 was separated into two subscales measuring the individual's anxiety (nervous, restless) and depression (hopeless, depressed, effort, worthless) levels (18), with valid scores ranging from 0-8 and 0-16, respectively. Higher scores are indicative of experiencing greater levels of distress.

Perceived respect

Participants were asked whether they agreed or disagreed with statements related to perceived treatment with respect by community, family, partner, and healthcare workers (eg, discordant couples are treated with respect).

Adverse childhood experiences (ACEs)

To assess childhood exposure to adverse or traumatic experiences, participants were asked to report on exposure to the following types of ACEs: childhood abuse (emotional, physical, and sexual), neglect (emotional and physical), and family household dysfunction (witnessing domestic violence, parental marital discord, and living with substance abusing, mentally ill, or criminal household members). Responses were coded such that an affirmative response to the question was assigned a value of 1.
Affirmative responses were then summed to obtain an overall measure where higher scores indicated greater childhood adversity. Scores on this measure can range from 0 to 10. The questions in this scale have been used in other international settings (39,40).

HIV status

Previous participants in the HPTN 052 clinical trial were known to be members of an HIV discordant couple through laboratory testing during the clinical trial. Non-HPTN 052 participants were determined to be members of a discordant couple by their participation in a clinic-based support group. HIV status was determined through self-report.

Other covariates

Sociodemographic data including sex, age, income, highest educational attainment, and religion, as well as the individual's participation in the HPTN 052 clinical trial, were collected.

Ethics

All persons received information about the objectives of the study, and were informed that the information they provided would be kept private, that they could choose not to participate, and that they would not be identified when the information was reported. Verbal informed consent was obtained, and a copy of the consent was offered to all participants. Survey participants were reimbursed KSH 500, equivalent to $5, for their transportation, and given a bar of soap as a token of appreciation for their participation. The study protocol, consent forms, and data collection instruments were reviewed and approved by the Kenya Medical Research Institute (KEMRI) local and National Ethical Review Committees, and the U.S. Centers for Disease Control and Prevention.

Statistical analysis

Frequency counts and percentages for sociodemographic variables, NPD K6 levels, and ACE scores were calculated. In addition, medians (MED) and interquartile ranges (IQR) were computed for NPD K6 and ACE scores. As descriptive data were stratified by gender, statistical differences by gender were not evaluated. Correlations were computed between the NPD K6 scores and each of the other remaining measures described previously to determine whether statistically significant bivariate relationships existed among them (all data not shown). Finally, multiple regression models examined the association between NPD K6 and the variables found to be statistically significant (P < 0.05) in the bivariate analysis. Stratifying by gender, we estimated these models using the overall NPD K6 and the two subscales (depression and anxiety) separately as the outcomes of interest. Analyses were completed using SAS version 9.4 (SAS Institute Inc, Cary, North Carolina, USA).

RESULTS

Of the 202 participants, 52% were female and most had a primary school level of education (66.3%). Fifty-four percent (100/185) of persons with data on their HIV status reported being HIV-positive. Other participant characteristics can be found in Table 1. Data were missing for the following categories: HIV status - 9 females (n = 96) and 8 males (n = 89); age - 2 females; income - 1 male; perception questions - 17 individuals for the community question, 8 for the family question, 7 for the partner question, and 3 for the healthcare workers question.
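For concreteness, here is a minimal Python sketch of the K6 and ACE scoring described in the Methods (ours, not the study's analysis code; item keys and response labels are illustrative):

```python
K6_LIKERT = {"none": 0, "a little": 1, "some": 2, "most": 3, "all": 4}
ANXIETY_ITEMS = ("nervous", "restless")
DEPRESSION_ITEMS = ("hopeless", "depressed", "effort", "worthless")

def k6_scores(responses):
    """responses: dict mapping each of the six items to a Likert label.
    Returns (total 0-24, anxiety 0-8, depression 0-16); a total of >= 5
    has been defined as moderate mental distress."""
    anxiety = sum(K6_LIKERT[responses[i]] for i in ANXIETY_ITEMS)
    depression = sum(K6_LIKERT[responses[i]] for i in DEPRESSION_ITEMS)
    return anxiety + depression, anxiety, depression

def ace_score(answers):
    """answers: ten booleans, one per ACE item; affirmative responses are
    coded 1 and summed, so scores range from 0 to 10."""
    return sum(bool(a) for a in answers)
```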
ACE scores

In a descriptive analysis, the median ACE score for women (2.0, IQR = 3.0) was lower than for men (4.0, IQR = 4.0) (Table 3). The most common ACE was living with a problem drinker or alcoholic or someone who used illicit drugs (women: 44.8%; men: 58.8%), and the second most common was being physically assaulted by a parent or other adult in the household (women: 37.1%; men: 56.7%).

† HIV Prevention Trials Network 052 is a clinical trial whose primary objective was to compare the rates of HIV infection among partners of HIV-infected participants (36).

Bivariate analyses

Among women, a higher ACE score (P = 0.0007), younger age group (17-29 years compared to 50 years and older) (P = 0.025), participation in the HPTN 052 study (P = 0.005), perceived respect by community (P = 0.022), perceived respect by family (P < 0.0001), and perceived respect by partner (P = 0.015) were significantly associated with the NPD K6 score (complete data not shown).

Regression analysis

A higher ACE score was significantly associated with an increase in the overall NPD K6 and depression subscale scores (Table 4a). Among women, ACE scores were not found to be related to the anxiety subscale when adjusting for the other factors, while perceived respect by family members and by partners was significantly associated with lower NPD K6 and depression subscale scores. The model showed that women in the "Other" religious category (n = 6: Nomiya, n = 3; Roho Mowar, n = 3) had NPD K6 scores that were lower than those of Protestants (P = 0.01). Similarly, women in the "Other" religious category were found to have significantly lower anxiety and depression subscale scores compared to Protestants (Table 4). Neither the HPTN 052 participation variable nor the age group variable improved the fit of the final linear regression models, thus both were removed.

Bivariate analyses

Among men, a higher ACE score (P < 0.001), participation in the HPTN 052 study (P = 0.023), and perceived respect by family (P = 0.011) were significantly associated with overall NPD K6 scores (complete data not shown).

Regression analyses

A higher ACE score and HIV-positive status were significantly related to higher levels of NPD K6 and the anxiety and depression subscales (Table 5). The model showed that men in the "Catholic" religious category had lower anxiety scores than Protestants (P = 0.047). Neither the HPTN 052 participation variable nor the age group variable improved the fit of the final linear regression models, thus both were removed.

DISCUSSION

HIV takes a toll on both the member of the couple with the infection and their partner. While professional HIV counselling can provide solutions to challenges encountered by members of a discordant couple (eg, conception), for some, challenges and stresses remain. Among persons who were members of an HIV discordant couple from two regions in western Kenya, the median NPD K6 score was the same for women and men, though a slightly larger percentage of men (20.6%) reported no distress compared to women (18.1%).
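The gender-stratified regressions above were fitted in SAS; the following numpy sketch (ours, on synthetic data, with the ACE effect set near the ≈0.75 coefficient mentioned later in the text) shows how unstandardized coefficients (b) and standard errors (SE) of the kind reported in Tables 4 and 5 are obtained:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # one hypothetical gender stratum
ace = rng.integers(0, 11, n)              # ACE score, 0-10
hiv = rng.integers(0, 2, n)               # 1 = HIV-positive (illustrative)
k6 = 0.75 * ace + 1.5 * hiv + rng.normal(0.0, 2.0, n)   # synthetic outcome

X = np.column_stack([np.ones(n), ace, hiv])             # design matrix
b, *_ = np.linalg.lstsq(X, k6, rcond=None)              # unstandardized b
resid = k6 - X @ b
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))  # standard errors
for name, bi, si in zip(["intercept", "ACE", "HIV+"], b, se):
    print(f"{name}: b = {bi:.2f}, SE = {si:.2f}")
```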
Our results showed that men had a higher median ACE score than women. While in low-income rural areas women have more cultural disadvantages than men, including less opportunity for secondary school attendance, more household responsibilities, and more often being forced into early marriage (41), boys are frequently treated roughly by patriarchal leaders of their households and communities with the intention of teaching them responsibility. In the Luo culture, male children learn their traditional sex roles by following their fathers and other male family members (42) when they are adolescents. In a comparison of corporal punishment in nine countries, the use of, and belief in the necessity of using, corporal punishment was highest in Kenya. Kenyan fathers reported using corporal punishment less frequently with daughters than with sons (43).

For women in our study, factors associated with a higher NPD K6 score included a higher ACE score and religion. While these findings need to be interpreted cautiously, there is a growing literature showing that more traumatic experiences in childhood (ACE score) are associated with poorer mental health conditions later in life. In a study among mothers in semi-rural Kenya, perceived stress was shown to be related to emotional abuse during childhood (44). Compared to women who reported their religion as Protestant, NPD K6 scores were lower among women who reported their religion as "Other". The "Other" category, comprising two African independent churches, Nomiya and Roho, is based in Christianity but has broken away from other Christian or Protestant denominations. These churches tend to be more tolerant of cultural practices, incorporate ancestral spirits and the holy spirit, and provide support through promises of mental and physical healing (45,46). This may help explain the lower NPD K6 scores among women who practice these religions, given that social support has been shown to be more important to women than to men (47). It is of note that in our study, HIV-positive status was not a factor significantly associated with higher NPD K6 scores for women. While HIV is a devastating and potentially stigmatizing disease, it may be possible that women have developed skills for coping with psychological stress. A qualitative study carried out in Kenya found that some women reported finding support from other women living with HIV as a way to mitigate stigma surrounding HIV (48). They sought out and supported other HIV-positive women emotionally, and even physically, when they were too sick to work (48). When the NPD K6 scores were broken down into anxiety and depression subscales, results were similar. Finally, the anxiety subscale did not show that being treated with respect by family or by partner was associated with improved scores. This highlights the important role that social support can play in mediating depression (47,49), but not necessarily anxiety.
For men, factors associated with a higher NPD K6 score included HIV-positive status and higher ACE scores. Research suggests that men may react differently from women to receiving a positive HIV test result, possibly due to their cultural roles. Men's belief in the patriarchal society and hegemonic masculinity before their HIV test has been shown to negatively affect their ability to cope with an HIV diagnosis, seek help, and learn to live with HIV post-diagnosis (50). A qualitative study of men living with HIV in South Africa found that men saw an HIV diagnosis as a loss, and evidence of their failure as a man (50). Being sick challenged their ability to fulfil expectations, leading them to feel powerless, worthless, and distressed (50). In another study of men living with HIV in South Africa, some men worried about the difficulties their illness would cause their relatives (51). Religion was not significantly associated with the overall NPD K6 scores. However, when the NPD K6 scores were broken down into anxiety and depression subscales, religion was associated with anxiety in that Catholic men had lower anxiety scores compared to Protestant men. One possible explanation is that the Catholic mandatory sacrament of confession, where a person is absolved of their sins by a priest, reduces anxiety for men of this faith compared to men of the Protestant faith, where confession is made directly to God (52).

Our study had several limitations. First, our analysis was cross-sectional, so the direction of associations cannot be determined. Second, due to participant time constraints, the questionnaire needed to be relatively short, and additional variables could not be collected (eg, frequency and duration of psychological distress, family history of psychological distress, nutrition, number of children, current sexual behaviors, HIV medication adherence, oral pre-exposure prophylaxis (PrEP) use, and partner support). Third, we did not use the international version of the ACE questionnaire, which asks the same questions as the original version that we used plus additional questions on, for instance, demographics and exposure to violence. Fourth, we were not able to link participants by marriage or cohabitation. Finally, there may have been social desirability bias or stigma around mental health that could have influenced underreporting of psychological distress. Similarly, ACEs may have been underreported. A strength of our study is the compelling nature of the findings. In the original study evaluating the reliability and validity of the NPD K6, the authors (37) determined that the mean and standard deviation for the instrument were 5.93 and 4.26, respectively. This suggests that the approximate 0.75 regression coefficient that we found using the ACE score to predict the K6 score for both sexes represents about 18% of a standard deviation change in the K6 score. An approximate one-point difference in the K6 score also represents a substantial change for population-based studies carried out using the Behavioral Risk Factor Surveillance System (BRFSS) in the U.S. (14). Another strength of our study is that it reports on psychological distress and ACE scores in an area of Kenya with the highest HIV prevalence in the country.
CONCLUSIONS

This study found that among members of HIV discordant couples, the median NPD K6 score was equal for women and for men. In addition, childhood trauma was found to affect NPD K6 scores for both men and women. Finally, an individual's perceptions of respect by their partner and their family impacted the overall NPD K6 score, and the depression subscale, in women but not in men. Earlier studies have found that women have larger and more multifunctional support networks than men (53). These findings support initiatives to assess childhood trauma and psychological distress in discordant couples, and to assist them in finding appropriate psychological services and social support. Moreover, it may be prudent to include education on self-care and wellness (eg, nutrition, exercise, deep breathing, social support) and to address economic stressors. By doing so, members of discordant couples may be better equipped to deal with the stressors associated with not only the disease, but also the discrimination and stigma associated with HIV. What is more, they may be more inclined to consider the importance of treatment and prevention.

Table 3. Affirmative responses to ACE questions by gender, Asembo and Karemo, Kenya, 2016.* Example items: "Did you often feel that you didn't have enough to eat, had to wear dirty clothes, and had no one to protect you? Or your parents were too drunk or high to take care of you or take you to the doctor if you needed it?" and "Was your mother or stepmother: often pushed, grabbed, slapped, or had something thrown at her? Or sometimes or often kicked, bitten, hit with a fist, or hit with something hard? Or ever repeatedly hit over at least a few minutes or threatened with a gun or knife?"

Tables 4 and 5 report, for the overall NPD K6 score and the NPD K6 anxiety and depression subscale scores, unstandardized regression coefficients (b) and standard errors (SE). ACE - adverse childhood experiences; NPD K6 - non-specific psychological distress Kessler-6 scale. *The P value of 0.047 was rounded to 0.05.
2018-12-18T13:50:20.919Z
2018-07-15T00:00:00.000
{ "year": 2018, "sha1": "15a61572bbbe450cfbcddcc08ea0edbf22335703", "oa_license": "CCBY", "oa_url": "http://www.joghr.org/documents/volume2/joghr-02-e2018018.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "15a61572bbbe450cfbcddcc08ea0edbf22335703", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
259499374
pes2o/s2orc
v3-fos-license
Vertical Phase Regulation with 1,3,5-Tribromobenzene Leads to 18.5% Efficiency Binary Organic Solar Cells

Abstract The sequential deposition method assists the vertical phase distribution in the photoactive layer of organic solar cells, enhancing power conversion efficiencies. With this film coating approach, the morphology of both layers can be fine-tuned with high boiling solvent additives, as frequently applied in one-step casting films. However, introducing liquid additives can compromise the morphological stability of the devices due to the solvent residuals. Herein, 1,3,5-tribromobenzene (TBB), with high volatility and low cost, is used as a solid additive in the acceptor solution and combined with thermal annealing to regulate the vertical phase in organic solar cells composed of D18-Cl/L8-BO. Compared to the control cells, the devices treated with TBB and those that underwent additional thermal processing exhibit an increased exciton generation rate, charge carrier mobility, and charge carrier lifetime, and reduced bimolecular charge recombination. As a result, the TBB-treated organic solar cells achieve a champion power conversion efficiency of 18.5% (18.1% averaged), one of the highest efficiencies in binary organic solar cells with open circuit voltage exceeding 900 mV. This study ascribes the advanced device performance to the gradient-distributed donor-acceptor concentrations in the vertical direction. The findings provide guidelines for optimizing the morphology of the sequentially deposited top layer to achieve high-performance organic solar cells.

Introduction

Solution-processed organic solar cells (OSCs) attract broad research interest for their low cost, light weight, and environmental friendliness [12-16]. Recently, combined with acceptor synthesis and a quaternary strategy, OSCs with a PCE of 19.76% were reported, originating from the better compromise between charge generation and recombination regulated by the BTP-S17 and BTP-S16 mixture [17]. Therefore, managing the nanoscale bicontinuous donor-acceptor interpenetrating network in the photoactive layer to optimize the charge dynamics for high-performance OSCs remains challenging [23]. The pseudo-bilayer structure differs from the true "bilayer" processed with orthogonal solvents to avoid erosion of the layer beneath [24]. For instance, OSCs composed of D18 and L8-BO were fabricated with the LBL casting method, and by tuning the spin-coating conditions, the vertical phase separation was optimized, leading to a PCE of 19.05% [25]. The impact of the solvent additive 1-chloronaphthalene (CN) on the charge dynamics was analyzed by fabricating OSCs composed of PM6/Y6 with the LBL thin film coating process. The ultrafast spectroscopy results showed that adding 0.5 vol.% (volume percentage) CN to the Y6 solution could facilitate exciton dissociation and charge separation. In contrast, excessive (>1 vol.%) use of CN causes fast geminate and non-geminate charge recombination, leading to poor device performance [26]. By adding 0.25 vol.% 1,8-diiodooctane (DIO) and different weight ratios of BTP-S2 to the BO4Cl solution, the photoactive layer composed of PM6/(BO4Cl:BTP-S2) prepared with the LBL method could form a morphology with donor enrichment at the anode and acceptor enrichment at the cathode. The reduced charge recombination and promoted charge transport/collection properties led to an improved fill factor (FF) of 78.04%, an enhanced short circuit current density (J_SC) of 27.14 mA cm⁻², and a PCE of 18.16% [27].
It is worth noting that the high boiling point (bp.) liquid additives (e.g., DIO with a bp. of 332.5 °C and CN with a bp. of 260.3 °C) are usually difficult to remove completely [1,28,29], resulting in unstable photoactive layer morphologies and poor device reproducibility [30,31]. To overcome this drawback, volatile solid additives that can evaporate entirely from the photoactive layer have become a practical approach to optimizing the photoactive layer. It was reported that 1,3-dibromo-5-chlorobenzene [32,33] and 1,4-diiodobenzene [34-36] could help acceptor molecules form tighter molecular packing, enhanced intermolecular π-π stacking, and more ordered microstructures for the non-fullerene acceptor, resulting in high-performing OSCs. However, low-cost and efficient volatile solid additives still lag behind the development of non-fullerene OSCs.

In this contribution, we systematically investigated the impact of a novel volatile solid additive, 1,3,5-tribromobenzene (TBB), on the vertical phase segregation of LBL-processed D18-Cl/L8-BO. TBB was added to the chloroform (CF) solution of L8-BO, and it can be completely evaporated from the photoactive layer by thermal annealing (TA) at 75 °C for five minutes. The OSCs composed of D18-Cl/L8-BO with TBB and thermal treatment exhibited a high open circuit voltage (V_OC) of 910 mV, a J_SC of 26.3 mA cm⁻², an FF of 77.2%, and a PCE of 18.5%, which is higher than that of the control devices (17.2%). To our knowledge, the V_OC achieved here is one of the highest in OSCs with >18% efficiency. The improved device performance was ascribed to the enhanced exciton dissociation rate, charge mobility, and reduced charge recombination, originating from the distribution of the acceptor and donor in the vertical direction and the increased thin film crystallinity. Our results provide alternative design guidelines on volatile solid additives for high-performing non-fullerene OSCs.

Results and Discussion

The chemical structures of D18-Cl, L8-BO, and TBB are shown in Figure 1a. The highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies of D18-Cl and L8-BO were taken from reported values [14,16,35,37], as shown in Figure 1b. The energy difference between the HOMO of D18-Cl and the LUMO of L8-BO is larger than that between PM6 and Y6, the state-of-the-art photoactive layer combination, indicating that the potential V_OC could be >840 mV [4,32,36,38]. As presented in Figure 1c, two absorption peaks at 576 and 803 nm correspond to the maximum absorption of the neat D18-Cl and L8-BO films, respectively. Compared with the control film, the TBB-processed (TBB) and TBB-processed in combination with TA (TBB+TA) samples exhibit red-shifted spectra in the absorption range of L8-BO, indicating that the molecular packing of L8-BO was influenced by introducing TBB. The enlarged differences in the spectra are provided in Figure S2 (Supporting Information). Notably, TBB has been entirely evaporated from the thin films, as evidenced by the Fourier-transform infrared (FTIR) spectroscopy shown in Figure S3c (Supporting Information), in which the distinct functional group (C-Br) peak of TBB located at 655 cm⁻¹ disappears in the TBB and the TBB+TA films. The photoluminescence (PL) spectra of the neat D18-Cl and L8-BO films are plotted in Figure 1d. When L8-BO was photoexcited, comparable PL quenching efficiencies of 86.2%, 85.9%, and 87.1% for the control, TBB, and TBB+TA samples were obtained, respectively. On the other hand, when the neat D18-Cl was photoexcited, the PL quenching efficiency was close to unity in all the thin films, implying efficient exciton dissociation. As illustrated in Figure 1e-g, transfer matrix simulation was employed to estimate the exciton generation rate with a conventional device architecture of ITO/PEDOT:PSS/active layer/PDIN/Ag. The observations indicate that the exciton generation rate remains unchanged across all devices within the D18-Cl absorption range. However, in the L8-BO absorption range, the addition of TBB and TBB+TA leads to a significant increase in the exciton generation rate, implying an enhanced contribution to photocurrent conversion from L8-BO.
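The PL quenching efficiencies quoted above follow from comparing the integrated emission of a blend film against the corresponding neat film. A minimal sketch (ours; it assumes spectra measured under identical excitation and corrected for absorption at the pump wavelength):

```python
import numpy as np

def pl_quenching_efficiency(wl, pl_neat, pl_blend):
    """eta = 1 - (integrated blend PL) / (integrated neat PL), with both
    spectra sampled on the same wavelength grid wl (nm)."""
    wl = np.asarray(wl)
    trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl))
    return 1.0 - trapz(np.asarray(pl_blend)) / trapz(np.asarray(pl_neat))
```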
[4,32,36,38] As presented in Figure 1c, two absorption peaks at 576 and 803 nm correspond to the maximum absorption of neat D18-Cl and L8-BO films, respectively. Compared with the control film, the TBB-processed (TBB) and TBB-processed in combination with TA (TBB+TA) samples exhibit red-shifted spectra in the absorption range of L8-BO, indicating that the molecular packing of L8-BO was influenced by introducing TBB. The enlarged differences in the spectra are provided in Figure S2 (Supporting Information). Notably, TBB has been entirely evaporated from the thin films, as evidenced by the Fourier-transform infrared (FTIR) spectroscopy shown in Figure S3c (Supporting Information), in which the distinct functional group (C-Br) peak of TBB located at 655 cm−1 disappears in the TBB and the TBB+TA films. The photoluminescence (PL) spectra of the neat D18-Cl and L8-BO films are plotted in Figure 1d. When L8-BO was photoexcited, comparable PL quenching efficiencies of 86.2%, 85.9%, and 87.1% were obtained for the control, TBB, and TBB+TA samples, respectively. On the other hand, when the neat D18-Cl was photoexcited, the PL quenching efficiency was close to unity in all the thin films, implying efficient exciton dissociation. As illustrated in Figure 1e-g, transfer matrix simulation was employed to estimate the exciton generation rate with a conventional device architecture of ITO/PEDOT:PSS/active layer/PDIN/Ag. The observations indicate that the exciton generation rate remains unchanged across all devices within the D18-Cl absorption range. However, in the L8-BO absorption range, the addition of TBB and TBB+TA leads to a significant increase in the exciton generation rate, implying an enhanced contribution of L8-BO to photocurrent conversion. We fabricated conventional OSCs to check the performance variations of the control, TBB, and TBB+TA devices, and the OSCs were measured under AM 1.5G simulated solar light with 100 mW cm−2 intensity. The J-V characteristics are plotted in Figure 2a, and the photovoltaic parameters of all the OSCs are summarized in Table 1. The control OSCs exhibit a PCE of 17.2% with a J SC of 25.8 mA cm−2, an FF of 70.6%, and a V OC of 945 mV. When TBB was added to the CF solution of L8-BO, the resulting J SC and FF were enhanced, but the V OC decreased, resulting in a PCE of 17.4%. Remarkably, adding TBB in the device and combining it with TA (at 75 °C for five minutes) further promoted the J SC and FF, and a champion PCE of 18.5% was achieved, associated with a V OC of 910 mV, a J SC of 26.3 mA cm−2, and an FF of 77.2%. To our knowledge, the V OC (910 mV) is among the highest for OSCs achieving an efficiency of over 18%. The external quantum efficiency (EQE) spectra of the control, TBB, and TBB+TA devices are presented in Figure 2b. The EQE of the TBB devices shows improvements in the ranges of ca. 350-450, 450-650, and 650-950 nm. In particular, the increment in the L8-BO absorption range proved that a fast exciton generation rate could lead to higher photocurrent conversion. The J SC integrated from the EQE is within a 5-7% deviation compared with the J SC measured with a solar simulator. Additionally, the J-V curves in the dark are plotted in Figure S4 (Supporting Information), and we fitted the positive bias part with the diode equation (Table S2, Supporting Information). The shunt resistance R sh values, which determine current leakage, were 2.5 × 10 4, 8.4 × 10 4, and 20.5 × 10 4 Ω cm 2 for the control, TBB, and TBB+TA devices, respectively. Table 1. Parameters of the control, TBB, and TBB+TA OSCs under AM 1.5G simulated irradiance (100 mW cm−2). Average values with standard deviations (in parentheses) were obtained from 20 independent devices; the calculated J SC values listed were integrated from the EQE spectra.
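As a quick consistency check, the quoted efficiencies follow directly from PCE = V OC × J SC × FF / P in. Below is a minimal sketch of this arithmetic, using the parameter values reported above; it is an illustrative check, not part of the paper's analysis code:

def pce(voc_v, jsc_ma_cm2, ff):
    # Power conversion efficiency as a fraction; P_in = 100 mW cm^-2 (AM 1.5G).
    return voc_v * jsc_ma_cm2 * ff / 100.0

for label, voc, jsc, ff in [("control", 0.945, 25.8, 0.706), ("TBB+TA", 0.910, 26.3, 0.772)]:
    print(label, f"{100 * pce(voc, jsc, ff):.1f}%")  # prints ~17.2% and ~18.5%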
Additionally, ideality factor n values of 1.57, 1.70, and 1.55 were obtained for the control, TBB, and TBB+TA devices, respectively. These values suggest the presence of traps in the devices, with the TBB devices exhibiting a more significant degree of trap-assisted charge recombination. The impact of the TBB solid additive and TA on the charge carrier dynamics was investigated. [41] Mobility values of 3.81 × 10 −4 cm 2 V −1 s −1 for the control devices, 4.19 × 10 −4 cm 2 V −1 s −1 for the TBB devices, and 4.88 × 10 −4 cm 2 V −1 s −1 for the TBB+TA devices were obtained. The higher mobility suggests a positive impact of the addition of TBB and TA on the active layer morphology. Besides the faster charge mobility, the time-resolved charge carrier density integrated from the current transients follows the bimolecular charge recombination behavior in the OSCs. Thus, we fitted the extracted charge carrier density in Figure 2e to obtain the bimolecular charge recombination rate (the notes on the bimolecular recombination fitting are presented in Section 7, and the current transients of the photo-CELIV measurements are plotted in Figure S5, Supporting Information). The bimolecular charge recombination rates (Table S3, Supporting Information) of the control, TBB, and TBB+TA devices are 5.22 × 10 −13 cm 3 s −1, 4.69 × 10 −13 cm 3 s −1, and 3.99 × 10 −13 cm 3 s −1, respectively. A smaller recombination rate means slower bimolecular charge recombination, which aligns with the transient photovoltage measurements. So far, the device performance and charge dynamics have been analyzed quantitatively. Now we turn to discuss their morphological origins. We used film-depth-dependent light absorption spectroscopy (FLAS) and time-of-flight secondary ion mass spectrometry (TOF-SIMS) to investigate the vertical phase segregation in the thin films. Figure S6a-c (Supporting Information) shows that both D18-Cl and L8-BO absorption peaks are present throughout the etched films in the control, TBB, and TBB+TA samples, indicating the formation of a bulk heterojunction (BHJ) active layer structure through the penetration of L8-BO into the D18-Cl layer during spin-coating. The absorption peaks of L8-BO exhibit a red shift upon TBB and TA treatments, suggesting the presence of molecular packing and energy landscape heterogeneity along the film depth. Besides the film structural information, the composition of D18-Cl and L8-BO at different film depths could be extracted, as depicted in Figure 3a-c. The penetration of the L8-BO solution leads to an initial increase in the L8-BO weight ratio at a depth of ≈60 nm, followed by a reduction probably caused by the thin film drying process (Figure 3a). After the addition of TBB (Figure 3b), the donor-acceptor ratio remains stable at 55.5 wt.% (weight percentage) L8-BO and 44.5 wt.% D18-Cl up to a depth of ≈50 nm, after which the L8-BO ratio gradually increases toward the bottom of the thin film, resulting in acceptor enrichment near the anode. Upon thermal treatment, the composition of donor and acceptor is similar to that of the TBB samples; however, the L8-BO load at the bottom is equivalent to that of D18-Cl (Figure 3c), which could potentially suppress bimolecular charge recombination by reducing the L8-BO accumulation.
The fluorine (L8-BO) and chlorine (D18-Cl) ions, F − and Cl −, were traced with the TOF-SIMS measurements. Both ions' intensities in the vertical direction as a function of the sputtering time are illustrated in Figure 3d-f for the control, TBB, and TBB+TA films, respectively. The higher F − intensity compared to Cl − suggests that the acceptor concentration is higher than that of the donor, which agrees with the FLAS results. Besides, a stronger F − signal on top of the active layer indicates an L8-BO-rich phase on the top. As a comparison, the F − and Cl − intensities in a BHJ active layer are provided in Figure S7e (Supporting Information). With increasing sputtering time (etching depth), the intensities of the F − signal for the TBB+TA and the TBB films are still stronger than those of the control and the BHJ film presented in Figure S7f (Supporting Information), implying that L8-BO is more enriched in the shallow layer of the LBL samples. In contrast, the intensity of the F − signal in the TBB+TA film at the bottom of the active layer is lower than those of the other films, implying that D18-Cl is enriched at the bottom layer. Though the donor-acceptor weight ratio is not easy to estimate from the ions' intensities, these results provide fundamental proof that, with the assistance of TBB and thermal treatment, the vertical phase separation could be optimized, and a graded donor-acceptor morphology could form to reduce the charge recombination and improve charge transport. Likewise, the vertical phase compositions at the thin film surface and the out-of-plane molecular orientation structure of the highly ordered surface can be accessed with near-edge X-ray absorption fine-structure (NEXAFS) spectroscopic characterizations. NEXAFS spectroscopy is a technique that examines the X-ray absorption spectrum of a material near one of its absorption edges. We performed NEXAFS spectroscopy in the energy range of 280-320 eV to analyze the carbon K-edge at a series of incidence angles. Tuning the angles in NEXAFS spectroscopy can differentiate the degree of π-orbital overlap via the transition dipole moment (TDM), which is oriented perpendicular to the conjugated ring plane and thus provides information about the average tilt angle of the conjugated backbones. [42] Figure 3g-i shows the angle-resolved NEXAFS (30°, 45°, 55°, and 70°) spectra of the control, TBB, and TBB+TA blend films in total electron yield (TEY) mode, which has a surface sensitivity of 3 nm.
[43] The uppermost layers of the control, TBB, and TBB+TA thin films were finely angle-resolved, and the first observed peak at 285.5 eV, common to all films, indicates the 1s → π* (C═C) transitions. The intensity of the π* manifold as a function of the X-ray angle of incidence is illustrated in Figure 3j-l, and the plotted data are fitted with the standard angular-dependence expression for the π* intensity, where I is the total electron yield intensity, θ is the angle of X-ray incidence, and α is the average tilt angle of the TDM. For TEY detection, average tilt angles of the conjugated backbone of 45.27° (α = 44.73°), 47.62° (α = 42.38°), and 50.49° (α = 39.51°) are found for the control, TBB, and TBB+TA thin films, respectively. To deconvolute the contributions of D18-Cl and L8-BO, we performed NEXAFS measurements on the respective neat films. As shown in Figure S8a-d (Supporting Information), L8-BO presents a nearly perfect face-on orientation at the film surface with a dichroic ratio, DR = (A ⊥ − A ∥)/(A ⊥ + A ∥), of −0.98. In contrast, the D18-Cl donor has a slightly tilted backbone with an angle of 41.30° (DR = −0.44) from the substrate. After the addition of the TBB additive into the control film, D18-Cl slightly raises the average tilt angle through a mutual diffusion process into the L8-BO-abundant uppermost part of the thin film. Moreover, the thermal treatment of the TBB-added thin film clearly accelerates the diffusion process mentioned above, with optimized morphology. The analysis of the NEXAFS spectra provides evidence that the donor and acceptor interpenetrate much more with each other during the sequential deposition (SD) processing method. Grazing incidence wide-angle X-ray scattering (GIWAXS) measurement was employed to analyze the variation of crystallinity and molecular packing behaviors of the thin films. Figure 4a-c shows the 2D GIWAXS patterns of the control, TBB, and TBB+TA films, while Figure S9a,b (Supporting Information) presents the patterns of neat D18-Cl and L8-BO, respectively. The information on the in-plane (IP) and out-of-plane (OOP) parameters, such as peak position, d-spacing, full width at half maximum (FWHM), and crystal coherence length (CCL), extracted from the 2D GIWAXS of the five films is described in Table S4 in Section 13 (Supporting Information). The (010) diffraction peaks appear in the OOP direction and correspond to the π-π stacking, which indicates their preferred face-on packing nature. Both donor and acceptor show good crystalline order in the neat films (Figure S9a,b, Supporting Information). In the blend films, the (010) π-π stacking peaks are observed at q z = 1.677, 1.684, and 1.697 Å −1 for the control, TBB, and TBB+TA films, with corresponding d-spacing values of 3.75, 3.73, and 3.70 Å, respectively. In the blend films, conjugated D18-Cl and L8-BO are preferentially face-on oriented according to the π-π stacking peak locations, and the observed trend of increased q values indicates a tighter π-π stacking distance, facilitating charge carrier transport. The CCL is derived from the Scherrer equation, CCL = 2π × 0.9/FWHM, to analyze the molecular stacking in the blend films. The resulting CCLs for the control, TBB, and TBB+TA films are 20.39, 19.61, and 20.96 Å, respectively. Compared to the control film, the larger CCL of the TBB+TA film demonstrates that the solid additive combined with TA can effectively enhance the crystallinity and phase separation in the thin films. The TBB+TA films exhibit the highest crystallinity and form a bi-continuous donor-acceptor network, consistent with the observed experimental increase in mobility and decreased charge carrier recombination.
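The d-spacings and coherence lengths quoted above follow directly from the peak positions and widths via d = 2π/q and the Scherrer relation CCL = 2π × 0.9/FWHM. Below is a minimal sketch of these two conversions; the q values are those reported above, while the FWHM value is a hypothetical placeholder (the measured FWHMs are tabulated in Table S4):

import math

def d_spacing(q):
    # Bragg d-spacing from the scattering vector q (1/Angstrom): d = 2*pi/q.
    return 2 * math.pi / q

def ccl(fwhm, k=0.9):
    # Scherrer-type crystal coherence length: CCL = 2*pi*K/FWHM (Angstrom).
    return 2 * math.pi * k / fwhm

for label, q in [("control", 1.677), ("TBB", 1.684), ("TBB+TA", 1.697)]:
    print(label, f"d = {d_spacing(q):.2f} A")  # 3.75, 3.73, 3.70 A

print(f"CCL at FWHM = 0.27 A^-1: {ccl(0.27):.1f} A")  # ~20.9 A, for an illustrative FWHM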
In addition, we conducted atomic force microscopy (AFM) and transmission electron microscopy (TEM) measurements to visualize the bulk morphology of the photoactive layer. The root-mean-square (RMS) roughness values are 0.66, 0.98, and 1.15 nm for the control, TBB, and TBB+TA blend films (Figure S10d-f, Supporting Information), respectively. [44] These observations are in line with the TEM results presented in Figure 4f-h. The control films exhibit well-mixed domains [45] resulting from the penetration of the upper layer solution (Figure 4f). However, the addition of TBB and further TA gradually increases the molecular aggregation features (Figure 4g,h). Notably, the TBB+TA films exhibit long-range molecular packing that is uniformly distributed throughout the film, which is conducive to efficient charge transport and suppresses bimolecular charge recombination. The optimized active layer morphology achieved in the TBB+TA films demonstrates that the crystallinity of the thin film could be improved through the addition of TBB and TA, as confirmed by the GIWAXS, AFM, and TEM analyses. According to the above results and discussions, we inferred a schematic illustration of the film evolution mechanism with the sequential deposition processing method in combination with the solid additive TBB. As presented in Figure 5a, the donor layer is formed with face-on aggregations before the spin-coating of the acceptor solution. Because of the penetration of the solvent, the donor and acceptor molecules undergo a mutual diffusion process (Figure 5b) followed by gradual aggregate formation (Figure 5c). With annealing at 75 °C for five minutes, the TBB additive evaporated completely, and the post-treatment improved the homogeneous distribution of the crystalline domains and promoted the phase purity with clear vertical phase separation. As illustrated in Figure 5d, adding TBB and further thermal treatment can optimize the morphology by finely enhancing the acceptor crystallinity. At the same time, the relatively independent diffusion behavior of the upper small molecules leads to stronger crystallization in this strategy. In addition to the morphological changes, improved active layer stability is noticed. We monitored the performance fluctuation with storage time to investigate the devices' stability, and the V OC, J SC, FF, and PCE were measured. As illustrated in Figure S11 (Supporting Information), the TBB+TA devices still retain 92% of the initial PCE after 240 h.
Conclusion In summary, we systematically investigated the impact of a novel volatile solid additive, TBB, on the performance of the OSCs composed of D18-Cl and L8-BO, fabricated with a two-step deposition approach. We found that the addition of TBB, combined with further thermal treatment, could result in long-range molecular packing and a homogeneous bi-continuous donor-acceptor network in the thin film, leading to increased thin film crystallinity and a favorable donor-acceptor distribution in the vertical direction. Therefore, the optimal active layer morphology is beneficial for the charge transport properties and reduces the bimolecular charge recombination. As a result, the TBB+TA OSCs achieved a champion PCE of 18.5% (18.1% averaged), with a V OC of 910 mV, a J SC of 26.3 mA cm−2, and an FF of 77.2%, outperforming the control devices (17.2%, 16.7% averaged). The improved device performance was associated with an increased exciton generation rate, charge carrier mobility, and charge carrier lifetime, and reduced bimolecular charge recombination. Our results provide insights into the design and selection of volatile solid additives for efficient non-fullerene OSCs. Figure 2. a) J-V characteristics, b) external quantum efficiency (EQE) spectra, c) transient photocurrent, d) transient photovoltage, and e) charge carrier density versus delay time of the control, TBB, and TBB+TA devices composed of D18-Cl/L8-BO. The solid lines in (c), (d), and (e) are fits to the experimental data. Figure 2c displays the transient photocurrent decay, and charge extraction times of 0.27, 0.26, and 0.23 μs are fitted for the control, TBB, and TBB+TA devices, respectively. The resulting extraction times indicate that adding TBB and further TA treatment could effectively facilitate the extraction of charge carriers. The charge carrier lifetimes at open-circuit conditions were extracted from the transient photovoltage decay dynamics by fitting them to a mono-exponential model, as presented in Figure 2d. The TBB+TA device exhibits a carrier lifetime of 1.05 μs, longer than its control and TBB counterparts (0.44 and 0.83 μs), implying reduced charge recombination in the TBB+TA devices due to the positive morphological changes. To obtain the charge carrier mobility and the bimolecular charge recombination rate, we performed time-delayed photoinduced charge-carrier extraction by linearly increasing voltage (photo-CELIV) measurements. The charge carrier mobility (μ) of the devices can be calculated with the equation μ = 2d²/(3At_max²), where d is the active layer thickness, A is the voltage rise speed, and t_max is the time at which the extraction current transient reaches its maximum. Figure 3. The composition profiles extracted from the film-depth-dependent light absorption spectroscopy (FLAS) spectra of the a) control, b) TBB, and c) TBB+TA films. Time-of-flight secondary ion mass spectrometry (TOF-SIMS) ion yields of F − and Cl − as a function of sputtering time for the d) control, e) TBB, and f) TBB+TA films composed of D18-Cl and L8-BO. Angular dependence of near-edge X-ray absorption fine-structure (NEXAFS) spectra with the X-ray beam at 30°, 45°, 55°, and 70° in total electron yield (TEY) detection mode of the g) control, h) TBB, and i) TBB+TA films, respectively. A plot of the intensity of the π* manifold versus incidence angle, with surface molecular orientation analysis, of the j) control, k) TBB, and l) TBB+TA films. Figure 4. 2D GIWAXS patterns of the blend films: the a) control, b) TBB, and c) TBB+TA. GIWAXS intensity profiles of the blend films along the corresponding d) in-plane and e) out-of-plane line cuts. TEM images of the blend films: the f) control, g) TBB, and h) TBB+TA.
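For the photo-CELIV relation quoted in the Figure 2 caption above, the following is a minimal sketch of the mobility calculation; the thickness, ramp rate, and t_max values below are hypothetical placeholders for illustration, not the measured device parameters:

def celiv_mobility(d_cm, ramp_v_per_s, t_max_s):
    # Photo-CELIV mobility: mu = 2*d^2 / (3*A*t_max^2), with d the active-layer
    # thickness (cm), A the voltage rise speed (V/s), and t_max the time of the
    # extraction-current maximum (s). Returns mu in cm^2 V^-1 s^-1.
    return 2 * d_cm**2 / (3 * ramp_v_per_s * t_max_s**2)

# Hypothetical example: 100 nm film, 1e5 V/s ramp, t_max = 2 microseconds.
print(f"{celiv_mobility(100e-7, 1e5, 2e-6):.2e} cm^2/Vs")  # ~1.7e-4, same order as reported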
Figure 5. Schematic illustration of the distinct states from solution to the thin film. This research was supported by the Ministry of Science and ICT (MSIT) of Korea. Portions of this research were carried out at the 3C SAXS-I, 4D PES, 9A U-SAXS, and 10A2 HR-PES II beam lines of the Pohang Accelerator Laboratory, Republic of Korea. The Center for Instrumental Analysis of Guangxi University is acknowledged for providing research facilities and resources for the experiments.
2023-07-11T06:16:18.025Z
2023-07-09T00:00:00.000
{ "year": 2023, "sha1": "836033375eb1a4a79ac01c727fc0c59e2688a029", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202303150", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "d50b26e35e0213238e023a5b0151abfc876e18e2", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
260066654
pes2o/s2orc
v3-fos-license
NCUEE-NLP at WASSA 2023 Shared Task 1: Empathy and Emotion Prediction Using Sentiment-Enhanced RoBERTa Transformers This paper describes our proposed system design for the WASSA 2023 shared task 1. We propose a unified architecture of ensemble neural networks to integrate the original RoBERTa transformer with two sentiment-enhanced RoBERTa-Twitter and EmoBERTa models. For Track 1 at the speech-turn level, our best submission achieved an average Pearson correlation score of 0.7236, ranking fourth for empathy, emotion polarity and emotion intensity prediction. For Track 2 at the essay level, our best submission obtained an average Pearson correlation score of 0.4178 for predicting empathy and distress scores, ranking first among all nine submissions. Introduction Empathy is the capacity to understand or feel what another person is experiencing from his/her perspective; it is a cognitive and emotional reaction to observing the situation of others (Omitaomu et al., 2022). Computational detection and prediction of empathy have attracted considerable attention in recent years. Empathy assessment by the writer of a statement was captured and annotated to computationally distinguish between multiple forms of empathy, empathic concern and personal distress (Buechel et al., 2018). A Mixed-Level Feed Forward Network (MLFFN) was proposed to learn word ratings for empathy and distress (Sedoc et al., 2020). Logistic regression models were used to recognize distress and condolence reactions to such distress (Zhou and Jurgens, 2020). A multi-task RoBERTa-based bi-encoder model was developed to identify empathy in conversations (Sharma et al., 2020). A demographic-aware EmpathBERT architecture was presented to infuse demographic information for empathy prediction (Guda et al., 2021). The Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA) organizes shared tasks for different aspects of affect computation from texts. The WASSA 2021 and 2022 shared tasks focused on predicting empathy and emotion in reaction to news stories (Tafreshi et al., 2021; Barriere et al., 2022), where transformer-based pre-trained language models achieved promising results. The PVG team proposed a multi-input and multi-task framework based on the RoBERTa transformer for empathy score prediction (Kulkarni et al., 2021). An ensemble of the RoBERTa multi-task model and the vanilla ELECTRA model was used to predict empathy scores (Mundra et al., 2021). The IUCL system fine-tuned two RoBERTa-large models, including a regression model for empathy and distress prediction and a classification model for emotion detection (Chen et al., 2022). A multi-output regression model fine-tuned from RoBERTa with additional features, including gender, income, and age, was used to predict empathy and distress intensity (Arco et al., 2022). Task adapters for a RoBERTa model were trained to predict empathy and distress scores at the essay level (Lahnala et al., 2022). WASSA 2023 organizes a similar task with a newly added track on empathy, emotion and self-disclosure detection in conversation at the speech-turn level (Barriere et al., 2023). We participated in Track 1 for Empathy and Emotion Prediction in Conversations (CONV), aiming to predict perceived empathy, emotion polarity and emotion intensity at the speech-turn level in a conversation, and Track 2 for Empathy Prediction (EMP), aiming to predict empathic concern and personal distress at the essay level.
Both tracks are regression tasks evaluated based on the average of the Pearson correlations. Following the successes of RoBERTa-based models in the previous WASSA shared tasks, we explore the use of sentiment-enhanced RoBERTa models to address the challenges of both tracks in shared task 1. This paper describes the NCUEE-NLP (National Central University, Dept. of Electrical Engineering, Natural Language Processing Lab) system for the WASSA 2023 shared task 1. A unified framework is used to integrate the original RoBERTa transformer (Liu et al., 2019) with different sentiment-enhanced versions, including RoBERTa-Twitter (Barbieri et al., 2020) and EmoBERTa (Kim and Vossen, 2021), for both tracks. For Track 1, our best submission achieved an average Pearson correlation of 0.7236 and ranked fourth among all participating teams. For Track 2, our best result had an average Pearson correlation of 0.4178, ranking first among all nine submissions. The rest of this paper is organized as follows. Section 2 describes the NCUEE-NLP system for Tracks 1 and 2 in the WASSA 2023 shared task 1. Section 3 presents the results and performance comparisons. Conclusions are finally drawn in Section 4. The NCUEE-NLP System We propose a unified architecture of ensemble neural networks to solve Tracks 1 and 2 of the WASSA 2023 shared task 1. Figure 1 shows our system architecture for empathy and emotion prediction, which mainly depends on ensemble sentiment-enhanced transformers. We select the following RoBERTa-based transformers to tackle both tracks in this task. (1) RoBERTa (Liu et al., 2019) is a robustly optimized BERT pre-training approach, which improves on BERT by removing the next sentence prediction objective, dynamically changing the masking pattern applied to the training data, and training with large batches. (2) RoBERTa-Twitter (Barbieri et al., 2020) is a RoBERTa model further trained on tweets and fine-tuned for sentiment analysis. (3) EmoBERTa (Kim and Vossen, 2021) is a RoBERTa model trained to solve emotion recognition in conversation tasks. EmoBERTa can learn speaker-aware states and contexts to predict the emotion of a current speaker by simply prepending speaker names to utterances and inserting separation tokens between the utterances in a dialogue. For both tracks in this shared task, we fine-tune these pre-trained RoBERTa-based transformers using the datasets provided by the task organizers. For Track 1 on empathy and emotion prediction in conversations, we separately fine-tuned these transformers for empathy, emotion polarity and emotion intensity prediction. For Track 2 on empathy prediction at the essay level, we respectively trained the transformers for empathy and distress score prediction. Finally, we use an average ensemble mechanism to combine these individual sentiment-enhanced RoBERTa transformers to produce the desired score output for both tracks. Datasets The experimental datasets were provided by the task organizers (Barriere et al., 2023). During the system development phase, the training and validation sets respectively consisted of 8,776 and 2,400 conversations for Track 1. In addition, the training and validation sets for Track 2 respectively featured 792 and 208 essays. During the evaluation period, the test sets contained 1,425 conversations for Track 1 and 100 essays for Track 2. Settings The pre-trained RoBERTa transformer models were downloaded from HuggingFace (e.g., https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment). The hyperparameter values for our model implementation were as follows: 25 epochs, batch size 8, learning rate 1e-5, and maximum sequence length 256. To confirm the average ensemble performance, we also compared the individual transformers. The evaluation metric is the Pearson correlation for both tracks.
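The average ensemble step itself is straightforward. Below is a minimal sketch, with placeholder prediction arrays rather than actual model outputs, of combining the three models' score predictions and computing the Pearson correlation used for evaluation:

import numpy as np
from scipy.stats import pearsonr

# Placeholder per-model predictions for one target (e.g., empathy scores);
# in the actual system these would come from the three fine-tuned RoBERTa regressors.
preds_roberta = np.array([3.1, 4.2, 2.0, 5.5])
preds_twitter = np.array([3.4, 4.0, 2.2, 5.1])
preds_emoberta = np.array([3.0, 4.5, 1.8, 5.6])
gold = np.array([3.2, 4.3, 2.1, 5.4])

ensemble = np.mean([preds_roberta, preds_twitter, preds_emoberta], axis=0)
r, _ = pearsonr(ensemble, gold)
print(f"Pearson r = {r:.4f}")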
For Track 1, we obtained Pearson correlation coefficients for empathy, emotion polarity and emotion intensity at the speech-turn level. For Track 2, we obtained Pearson correlation coefficients for empathy and distress at the essay level. The official ranking of each participating team was based on the average of the obtained Pearson correlation coefficients. Table 1 shows the results on the validation set. For Track 1 at the speech-turn level, RoBERTa-Twitter outperformed the other standalone transformer models on all evaluation metrics, but relatively underperformed for Track 2 at the essay level. The ensemble transformers clearly achieved the best performance for both tracks on the validation set. This confirms that the ensemble averaging mechanism works well in integrating multiple models to obtain performance improvements. Table 2 shows the results on the test set for both tracks. For CONV Track 1, RoBERTa-Twitter outperformed the others in the emotion intensity evaluation at the speech-turn level. Our ensemble sentiment-enhanced RoBERTa model achieved the best average Pearson correlation coefficient of 0.7236. For EMP Track 2, EmoBERTa obtained the best distress and average correlation coefficients, while our ensemble transformer model achieved the second-best correlation coefficient of 0.4178. Rankings According to the official rankings released by the task organizers (Barriere et al., 2023), our final submission from ensemble neural networks of sentiment-enhanced RoBERTa transformers ranked fourth for Track 1 among all participating teams and first for Track 2 among all nine submissions. Conclusions This study describes the model design, system implementation and performance of the NCUEE-NLP system in the WASSA 2023 shared task 1 for empathy and emotion prediction. Our unified architecture used an average ensemble mechanism of three sentiment-enhanced RoBERTa transformers to predict empathy, emotion polarity and emotion intensity for Track 1 at the speech-turn level and empathy and distress scores for Track 2 at the essay level. Our final submission based on sentiment-enhanced RoBERTa transformers ranked fourth for Track 1 and first for Track 2 among all nine submissions.
2023-07-23T13:18:14.168Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "096440c5d58f320f2896c01dbd6970d20b0e9ba1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "096440c5d58f320f2896c01dbd6970d20b0e9ba1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
258170075
pes2o/s2orc
v3-fos-license
Subsampling-Based Modified Bayesian Information Criterion for Large-Scale Stochastic Block Models Identifying the number of communities is a fundamental problem in community detection, which has received increasing attention recently. However, rapid advances in technology have led to the emergence of large-scale networks in various disciplines, thereby making existing methods computationally infeasible. To address this challenge, we propose a novel subsampling-based modified Bayesian information criterion (SM-BIC) for identifying the number of communities in a network generated via the stochastic block model and degree-corrected stochastic block model. We first propose a node-pair subsampling method to extract an informative subnetwork from the entire network, and then we derive a purely data-driven criterion to identify the number of communities for the subnetwork. In this way, the SM-BIC can identify the number of communities based on the subsampled network instead of the entire dataset. This leads to important computational advantages over existing methods. We theoretically investigate the computational complexity and identification consistency of the SM-BIC. Furthermore, the advantages of the SM-BIC are demonstrated by extensive numerical studies. Introduction Network community detection is one of the most widely studied topics in network analysis [25,48,23]. Intuitively, for networks with assortative communities, community detection aims to distribute the network nodes into several clusters, so that nodes in the same cluster have denser connectivity. Network community structure is beneficial for understanding the characteristics of each cluster [25,7]. Specifically, in social network platforms (e.g., Facebook, Twitter, and Sina Weibo), communities can be formed by users with similar interests or preferences, which enables online platforms to recommend suitable products and services to targeted groups [9,5,56]. In the past few decades, numerous assortative community detection methods have been proposed, including but not limited to modularity maximization [47,28], spectral clustering [49,61,54], belief propagation [30,69], and pseudolikelihood methods [3,65]. Theoretically, the stochastic block model (SBM) has been widely assumed to analyze the consistency properties of network community methods [32,58,50]. It should be noted that most community detection methods require the number of communities K 0 to be known in advance. Then, the theoretical properties can be carefully established. However, K 0 is typically unknown in real-world networks. Therefore, how to choose K 0 is important. A variety of methods have been proposed to determine the number of communities K 0 , such as the eigenvalue-based methods [37,10,8], semi-definite programming-based methods [42,67], network cross-validation methods [14,41], and likelihood-based methods [18,66,33,44]. Specifically, the eigenvalue-based methods estimate the number of communities based on the eigenvalue properties of non-backtracking, Bethe Hessian, or normalized Laplacian matrices [37,10,8,17]. Additionally, the semi-definite programming approach identifies K 0 by solving a semi-definite optimization problem [42,67]. Moreover, the network cross-validation method extends the cross-validation method to network data via a network sampling strategy [14,41]. Lastly, the likelihood-based approaches aim to make full use of the observed samples; they have been widely studied and include Bayesian information criterion and likelihood ratio methods.
Specifically, the Bayesian information criterion consists of a conditional log-likelihood of the entire observations and a penalty term that depends on the prior distribution of the latent variable [18,55,33]. For the likelihood ratio approaches, [44] proposed to estimate K 0 by comparing the goodness-of-fit of two models estimated with candidate numbers of communities K and K+1. Moreover, [66] discussed the asymptotic properties of the log-likelihood ratio statistics. It is remarkable that, to evaluate each candidate K via the aforementioned criteria, such as the network cross-validation methods and the likelihood-based approaches, we need to first estimate the parameters of the SBM using the entire observed network. In this case, spectral clustering is considered a simple and easy-to-implement approach with well-founded theoretical guarantees [54,12,73,39]. However, recent advances in science and technology have brought about large-scale network data, leading to unprecedented computational challenges for community detection. For example, as reported by Statista (www.statista.com), in January 2022, the online social networks Facebook, Twitter, and Sina Weibo had approximately 2,910 million, 436 million, and 573 million active users, respectively. Researchers can also access the relationships of millions of network nodes using open-source datasets, such as the Stanford Large Network Dataset 1 , which has collected different networks with more than 10 million nodes each. Consequently, directly applying traditional methods to estimate K 0 for these large-scale network data is impractical. For example, for a network with N nodes, the time complexity of spectral clustering-based methods is no lower than O(N 3 ) for estimating K 0 [68,40,16]. Even if the algorithm could be accelerated, the computational complexity is still of the order O(N 2 ) [29,22,45]. To deal with the computational challenge brought by large-scale networks, subsampling is a valuable tool [52]. Its main advantage is that we can obtain a computationally efficient and consistent estimator based on a small subsample [64,63,62,70]. Although subsampling pays a price in statistical convergence, it makes traditional methods feasible in large-scale data analysis. In the literature, various sampling designs have been proposed to derive representative samples of a given network, including node sampling methods [57,6,46] and edge sampling methods [26,27,41]. The node sampling methods select landmark nodes from the entire network, and the subnetwork is induced by these selected nodes. Uniform node sampling is considered to be the simplest method and has been widely used [57,6,43,46]. Another widely studied node sampling method is snowball sampling [34,51,15]. Based on the snowball sampling approach, [59] and [2] recently developed bootstrap methods to reduce estimation bias for large networks. The edge sampling methods randomly collect edge samples from the entire network, and they have also received considerable attention [21,20,41]. For example, [21] and [26] adopted edge sampling procedures to estimate the average degree of a network. Recently, edge sampling approaches have been investigated to approximately count the number of subgraphs [27,20,4]. Moreover, [41] applied uniform edge sampling in random graph model selection. Note that existing studies focus on subsampling many times to provide stable statistical inference for network models.
However, we aim to conduct subsampling only once to allow model selection for large-scale networks with limited computational resources. This work proposes a novel subsampling-based modified Bayesian information criterion (SM-BIC) for identifying the number of communities in large-scale SBMs. Specifically, in the context of large-scale networks, we first develop a node-pair subsampling method to extract a subnetwork from the entire network. The node-pair subsampling method combines the ideas of uniform node sampling and edge sampling. More precisely, we first uniformly and randomly select a subset of nodes from the entire network and then collect all edges related to these nodes to construct a subnetwork. In this way, the subnetwork fully retains the connection information between the selected nodes and the entire network. Note that the node-pair subsampling method requires subsampling only once, for computational efficiency. Then, based on the selected subnetwork, we derive a purely data-driven criterion without tuning any parameters. Since the criterion is based only on the subsampled data, it makes the subsequent parameter estimation applicable even for large-scale networks with affordable computational resources. In particular, we use spectral clustering on the subsampled subnetwork to obtain the community assignments. In this way, the computational complexity of the SM-BIC can be as low as O(N n), where n is the subsample size satisfying n << N . Furthermore, we extend the SM-BIC to the degree-corrected stochastic block model (DCSBM) [35]. We theoretically investigate the computational advantage of the SM-BIC. Most importantly, for both the SBM and DCSBM, we establish the consistency of the SM-BIC by studying the penalized log-likelihood function under misspecification cases (e.g., under-fitting and over-fitting). To summarize, the proposed method has the following advantages. First, compared with the eigenvalue-based methods [10,37,8,17], the SM-BIC fully exploits the connectivity information in the selected subnetwork, while the eigenvalue-based methods use only the eigenvalue information of network matrices. Second, compared with the method based on semi-definite programming [42,67], the proposed SM-BIC method applies the spectral clustering algorithm to identify community labels for the network nodes, which is more computationally efficient. Third, compared with the network cross-validation methods [14,41], the SM-BIC only requires subsampling once, while the network cross-validation method uses a network resampling technique, which requires tuning the number of folds. Finally, compared with the aforementioned BIC-based approaches [18,55,33] and likelihood ratio methods [66,44], the SM-BIC can identify K 0 using only a small subnetwork; further, it is a completely data-driven method without any predefined tuning parameters. Consequently, the SM-BIC can feasibly be applied to identify the number of communities for large-scale networks with affordable computational resources. Specifically, its computational complexity can be as low as O{N (log N ) 2 }, as demonstrated in Propositions 1 and 2. The remainder of this paper is organized as follows. In Section 2, we introduce the subsampling-based modified Bayesian information criterion. In Section 3, we discuss the theoretical properties of the SM-BIC and establish the consistency of the estimator of the number of communities. In Section 4, we demonstrate the effectiveness of our method through extensive numerical studies.
Further discussions are provided in Section 5. Proofs are presented in the Appendices and the supplementary materials. Subsampling-based modified Bayesian information criterion for stochastic block model In this section, we first introduce the stochastic block model and the challenges of existing model selection methods. Then, we develop the SM-BIC for large-scale SBMs and extend the criterion to DCSBMs. Lastly, we discuss the parameter estimation procedure for this method. Preliminaries Consider a large-scale undirected graph generated from an SBM with N nodes and K 0 communities. The observed random graph is often represented by a symmetric adjacency matrix A ∈ R N ×N with zero diagonal entries. Specifically, for any node pair (i, j), if there is a connection, then A ij = 1; otherwise, A ij = 0. For each node i, denote its community label as g * N,i ∈ [K 0 ], where [K 0 ] = {1, . . . , K 0 }, and collect the labels in the vector g * N = (g * N,1 , . . . , g * N,N ) ∈ [K 0 ] N . Here, B * ∈ (0, 1) K 0 ×K 0 is a symmetric matrix describing the connectivity probabilities within and between communities. Namely, each element B * kl ∈ (0, 1) represents the connectivity probability between the k-th and l-th communities (1 ≤ k, l ≤ K 0 ). In this way, the connectivity probability between any node pair (i, j) depends only on their community labels. For simplicity, let SBM K0 (g * N , B * ) represent a stochastic block model with K 0 blocks parameterized by g * N and B * . Throughout this paper, we let g * N and B * denote the true parameters of the observed adjacency matrix A. Furthermore, K 0 is considered to be a fixed constant. For any 1 ≤ l ≠ k ≤ K 0 , we assume B * kl < B * kk , which means that the within-community connectivity probability is higher than the between-community one. Under any candidate K, denote g N ∈ [K] N as the community assignment of the K-block model, and the corresponding connectivity matrix is represented by a symmetric matrix B ∈ B K = (0, 1) K×K . Additionally, when we refer to model selection, we mean the selection of K 0 for SBM K0 (g * N , B * ). For the likelihood-based methods, to determine the number of communities, it is necessary to estimate the community assignment g N for each candidate K. For very large N , even if accelerated algorithms are adopted, the computational cost is still high. For example, the randomized spectral clustering algorithm [71] has a computational cost of the order O(N 2 ). This motivates us to develop a network subsampling-based model selection criterion that reduces the cost by investigating small subsamples. Subsampling-based modified Bayesian information criterion In the context of large-scale networks, we first introduce the network subsampling method. Note that, unlike independent data, network data are correlated with each other by connections. To characterize the community membership of network nodes, we use a node-pair subsampling method to collect a subnetwork from the entire network. Specifically, we first uniformly sample n nodes from [N ]; that is, the probability of each node being selected is equal to n/N , where the subsample size n << N . We further denote the set of selected nodes as S = {j ∈ [N ] : node j is selected}. Then, we sample all node pairs related to these selected nodes. That is, if node j is selected, then for every i ∈ [N ] the node pair (i, j) is also collected. The subsampling method is illustrated in Figure 1. We refer to this method as node-pair subsampling. For convenience, let s j (s j ∈ [n]) denote the index of the selected node j in the node set S.
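Below is a minimal sketch of this node-pair subsampling step in Python, constructing the matrix of connections incident to the selected nodes (denoted A S in the sequel) from a full adjacency matrix; a dense NumPy array is used here for illustration, while a sparse matrix would be used at scale:

import numpy as np

def node_pair_subsample(A, n, seed=None):
    # Uniformly sample n nodes without replacement and keep all node pairs
    # incident to them: the (i, s_j) entry of the returned matrix is A[i, j], j in S.
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    S = np.sort(rng.choice(N, size=n, replace=False))
    return A[:, S], S  # shape (N, n), plus the selected node set S

# Toy usage: a symmetric 0/1 adjacency matrix with zero diagonal.
rng = np.random.default_rng(0)
A = np.triu(rng.binomial(1, 0.05, size=(200, 200)), 1)
A = A + A.T
A_S, S = node_pair_subsample(A, n=30, seed=1)
print(A_S.shape)  # (200, 30)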
Define an N × n matrix A S to represent these selected connections, where the entries are A S i,s j = A ij , for i ∈ [N ], j ∈ S. Then, we focus on the observation A S , rather than the entire network connections, to identify the number of communities. For model selection, we introduce the proposed modified Bayesian information criterion based on A S . The criterion is derived from the maximization of the log-posterior likelihood function of g N . We first provide the prior distribution of g N under SBM K . Based on the selected sample A S , we demonstrate that the community partition of the entire network is determined by the community assignment of the selected nodes. Specifically, consider the community assignment of the selected nodes to be g n (g n ∈ [K] n ), where g n,s j is the community label of the selected node j. Then, for any unselected node i ∉ S, we have different ways to obtain its community label based on the labels of the selected nodes. For example, we could assign this node to the community with the most connections to it. Namely, the community label of the unselected node i is given by g N,i = argmax k Σ j∈S A ij I(g n,s j = k), where I(·) is an indicator function. We could alternatively obtain the label assignment for unselected nodes by spectral clustering, which is illustrated in detail in the next subsection. In this way, based on A S , the set of all possible community assignments for the entire network is given by C(A S , K) = ∪ g n ∈[K] n { g N ∈ [K] N : g N,i = argmax k Σ j∈S A ij I(g n,s j = k) for all i ∉ S, and g N,j = g n,s j for all j ∈ S }. Therefore, the number of possible community assignments is |C(A S , K)| = K n . Similar to [13], we assign the prior probability to g N as φ(g N ) = K −n , for g N ∈ C(A S , K). (2.1) Next, we analyze the posterior probability of g N . We start by studying the probability of A S under SBM K . We denote the set of node pairs corresponding to the independent edge variables in A S as E = E in ∪ E out , where E in = {(i, j) : i, j ∈ S, i < j} and E out = {(i, j) : i ∈ [N ] − S, j ∈ S} represent the set of node pairs within selected nodes and that between selected and unselected nodes, respectively. Moreover, since |E in | = n(n − 1)/2 and |E out | = (N − n)n, we have |E| = N n − n(n + 1)/2. Let o kl,g N = Σ (i,j)∈E A ij I(g N,i = k, g N,j = l) and n kl,g N = Σ (i,j)∈E I(g N,i = k, g N,j = l) denote the number of observed connections and the number of maximum possible connections between the (k, l) clusters, respectively. Additionally, define a vector θ ∈ Θ K = (0, 1) K(K+1)/2 to represent the upper triangular elements of B. Then, given (g N , θ), the log-likelihood function of A S is log f (A S | g N , θ) = Σ 1≤k≤l≤K { o kl,g N log B kl + (n kl,g N − o kl,g N ) log(1 − B kl ) }. Then, we give an approximation of the log-likelihood function log f (A S | g N ) in the following lemma. Lemma 1 (Log-likelihood function approximation). Suppose the adjacency matrix A is generated from SBM K and the subset of nodes S is collected by simple random sampling of n nodes from the entire network. Then, the log-likelihood function log f (A S | g N ) can be approximated by log f (A S | g N ) = max θ∈Θ K log f (A S | g N , θ) − {K(K + 1)/4} log M + O(1), (2.2) where M denotes the number of independent edge variables in A S , i.e., M = |E| = N n − n(n + 1)/2. The proof of Lemma 1 can be found in Appendix B.1. As a result, under SBM K , according to (2.1) and (2.2), the log-posterior probability of g N is log f (g N | A S ) = log f (A S | g N ) − n log K + C, (2.3) where the constant C = − log f (A S ) does not depend on g N . We now establish the SM-BIC. According to Bayesian inference, the community assignment that maximizes the posterior probability is estimated; that is, ĝ N = argmax g N ∈C(A S ,K) log f (g N | A S ).
To this end, based on (2.2) and (2.3), the SM-BIC is proposed as follows: ℓ(K) = max g N ∈C(A S ,K) max θ∈Θ K log f (A S | g N , θ) − {K(K + 1)/4} log M − n log K. (2.4) The form of the criterion (2.4) seems similar to the corrected BIC criterion proposed by [33]. However, there are two key differences from the corrected BIC, which are also the key contributions of our criterion. First, the SM-BIC is a purely data-driven method without any predefined tuning parameters, whereas the corrected BIC requires choosing one parameter to control the model selection results. This is because we assume a simple uniform prior for SBM K and the latent label vector g N ; this prior setting follows the work of [13]. Second, based on (2.4), we estimate the community assignment from A S , which has a lower dimension than A for n << N . Hence, criterion (2.4) could save computational costs. We demonstrate the important computational advantages of the SM-BIC in Subsection 2.4. Extension to degree-corrected stochastic block model The DCSBM [35] is generalized from the SBM by introducing node-specific parameters to allow for degree heterogeneity within communities. Specifically, given the parameters g N and B, the probability of an edge between (i, j) is represented by P (A ij = 1) = ψ i B g N,i g N,j ψ j , where the parameter ψ i characterizes the individual activeness of node i. In this way, a DCSBM is parameterized by a triplet (g N , B, ψ), where ψ = (ψ 1 , · · · , ψ N ). For consistency, we assume that the underlying model is DCSBM K0 (g * N , B * , ψ * ). For identifiability of this model, the constraint Σ i ψ * i I(g * N,i = k) = N k,g * N , where N k,g * N denotes the size of community k, is imposed on each community 1 ≤ k ≤ K 0 . Then, we extend the SM-BIC to the DCSBM. We start with the log-likelihood function of the subsampled adjacency matrix A S . Similar to [35] and [73], we replace the Bernoulli likelihood with a Poisson likelihood and assume A ij ∼ Poisson(ψ i B g N,i g N,j ψ j ) to simplify the derivation. Furthermore, let n kl,g N (ψ) = Σ (i,j)∈E ψ i ψ j I(g N,i = k, g N,j = l). In this way, under DCSBM K (g N , B, ψ), the log-likelihood function of the subsampled adjacency matrix A S is given by log f (A S | g N , θ, ψ) = Σ (i,j)∈E A ij log(ψ i ψ j ) + Σ 1≤k≤l≤K { o kl,g N log B kl − n kl,g N (ψ)B kl } + C, where C does not depend on (g N , θ, ψ). Then, we consider ψ in two cases. First, if ψ is known, according to (2.4), the SM-BIC of the DCSBM is proposed as follows: ℓ ψ (K) = max g N ∈C(A S ,K) max θ∈Θ K log f (A S | g N , θ, ψ) − {K(K + 1)/4} log M − n log K. (2.5) Second, if ψ is unknown, we plug an estimator ψ̂ into criterion (2.5) to replace ψ. In this case, an estimator of ψ is provided in the following subsection. Parameter estimation based on subsampled adjacency matrix Here, we first introduce how to apply the SM-BIC to determine the number of communities for large-scale SBMs. Specifically, based on node-pair subsampling, we evaluate each candidate K through the following three steps: label assignment, parameter estimation, and SM-BIC calculation. Thereafter, we further present the estimation method for the degree heterogeneity case. Label assignment. We first perform the label assignment step on the N × n subsampled adjacency matrix. For a candidate K and subsampled adjacency matrix A S , the extended spectral clustering algorithm can be accomplished as follows. (1) Perform SVD on A S and extract the left singular vectors corresponding to the K largest singular values, denoted as V 1 , · · · , V K , and define an N × K matrix V = (V 1 , · · · , V K ) to represent the embedding matrix. (2) Apply K-means clustering to the rows of V to estimate the node assignments, and denote the clustering result by ĝ N . Parameter estimation. Based on the estimated label vector ĝ N , we construct the plug-in estimator for the connectivity matrix B. Specifically, for all 1 ≤ k ≤ l ≤ K, we set B̂ kl = o kl,ĝ N / n kl,ĝ N , (2.6) and taking B̂ lk = B̂ kl , we obtain the estimated connectivity matrix B̂. SM-BIC calculation. Given (ĝ N , B̂), we evaluate the estimated SBM K (ĝ N , B̂) by ℓ(K) = Σ 1≤k≤l≤K { o kl,ĝ N log B̂ kl + (n kl,ĝ N − o kl,ĝ N ) log(1 − B̂ kl ) } − {K(K + 1)/4} log M − n log K. (2.7) Therefore, we choose the K that maximizes the SM-BIC (2.7) as the number of communities. Algorithm 1 Model Selection Algorithm for SBM Input: adjacency matrix A S , a maximum candidate K max . 1. For each candidate 1 ≤ K ≤ K max , 1.1 (Label Assignment) compute the community assignment estimator ĝ N using spectral clustering on A S ; 1.2 (Parameter Estimation) compute the plug-in estimator B̂ based on ĝ N ; 1.3 (SM-BIC Calculation) compute ℓ(K) via (2.7). 2. Calculate K̂ = argmax 1≤K≤K max ℓ(K). Output: the optimal choice of the number of communities, K̂.
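Below is a minimal sketch of Algorithm 1 in Python, using truncated SVD plus K-means for the label assignment and the plug-in estimator for B; the penalty constants follow the criterion as reconstructed in (2.7) above, so treat them as indicative rather than definitive:

import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def sm_bic(A_S, S, K):
    # SM-BIC for one candidate K (requires 1 <= K < min(N, n)).
    N, n = A_S.shape
    # Step 1.1 (label assignment): top-K left singular vectors of A_S, then K-means.
    U, _, _ = svds(A_S.astype(float), k=K)
    g = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(U)
    # Step 1.2 (parameter estimation): o_kl and n_kl over the observed pairs E.
    in_S = np.zeros(N, dtype=bool)
    in_S[S] = True
    o = np.zeros((K, K))
    m = np.zeros((K, K))
    for jj, j in enumerate(S):
        for i in range(N):
            if in_S[i] and i >= j:  # count each within-S pair once; skip i == j
                continue
            a, b = sorted((g[i], g[j]))
            o[a, b] += A_S[i, jj]
            m[a, b] += 1
    # Step 1.3 (criterion): profile Bernoulli log-likelihood minus penalty terms.
    iu = np.triu_indices(K)
    ok, mk = o[iu], m[iu]
    B = np.clip(np.where(mk > 0, ok / np.maximum(mk, 1), 0.5), 1e-10, 1 - 1e-10)
    loglik = np.sum(ok * np.log(B) + (mk - ok) * np.log(1 - B))
    M = N * n - n * (n + 1) // 2
    return loglik - (K * (K + 1) / 4.0) * np.log(M) - n * np.log(K)

# Step 2: evaluate all candidates and keep the maximizer, e.g.:
# K_hat = max(range(1, K_max + 1), key=lambda K: sm_bic(A_S, S, K))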
In the framework of the DCSBM, we need to modify the parameter estimation methods. First, under the candidate DCSBM K , to obtain ĝ N , we use the spherical spectral clustering method proposed by [39]. Specifically, let v i be the i-th row of V , i.e., V = (v 1 , · · · , v N ). Furthermore, let V' be the row-normalized version of V ; namely, the i-th row of V' is v i /||v i ||, where || · || denotes the Euclidean norm of a vector. Then, we estimate the node assignments by the following steps: (1) form the matrix V' by normalizing each row of V to unit norm; and (2) perform K-means clustering on the rows of V' to obtain ĝ N . Second, based on the embedding matrix V , the plug-in estimator ψ̂ of ψ is constructed; then, for all 1 ≤ k ≤ l ≤ K, we set B̂ kl = o kl,ĝ N / n kl,ĝ N (ψ̂) and take B̂ lk = B̂ kl . To this end, we obtain the SM-BIC for DCSBM K by plugging (ĝ N , B̂, ψ̂) into (2.5). For convenience, we provide the model selection procedures for the SBM and DCSBM in Algorithms 1 and 2, respectively. To illustrate the model selection algorithm, we show the procedure of identifying K 0 for the SBM in Figure 2. Moreover, based on the works of [39,19], we demonstrate the consistency of spectral clustering for the sub-adjacency matrix A S in the supplementary materials. To show the effectiveness of the SM-BIC, we discuss its computational complexity in Proposition 1. Proposition 1 (Computational complexity). Suppose that the subset of nodes S is collected by simple random sampling of n nodes from [N ]. Then, for both the SBM and DCSBM, the computational complexity of identifying K 0 by the SM-BIC is O(N n). The proof of Proposition 1 is provided in Appendix B.2. Note that for each candidate K, in the spectral clustering algorithm, we perform a truncated SVD on the sub-adjacency matrix, where the truncated SVD only computes the K largest singular values and the corresponding singular vectors, with computational complexity O(N n) for a constant K [22,45]. Proposition 1 shows the computational advantage of the SM-BIC for large-scale networks. In the next section, we demonstrate that the required subsample size n could be as small as c(log N ) 2 , where c > 0 is a constant. In this case, the computational cost for identifying K 0 based on the SM-BIC could be O{N (log N ) 2 }. Theoretical properties In this section, we discuss the theoretical properties of the SM-BIC. We first introduce some necessary conditions and subsequently discuss the required subsample size to ensure the effectiveness of the selected sample. Then, we demonstrate the consistency of the SM-BIC under the SBM and DCSBM. Namely, the criterion chooses the right K 0 with probability tending to one as N goes to infinity. Basic assumptions and required subsampling size To discuss the theoretical properties of the SM-BIC, the following assumptions are considered.
Assumption (A1) allows for sparse networks, where the network density ρ N → 0 at the same rate as in the studies of [66], [33], and [41]. Assumption (A2) requires the size of each community to be relatively balanced. This is a mild and common condition. For example, if the community assignment g * N is generated from a multinomial distribution with parameters π = (π 1 , · · · , π K0 ) such that min 1≤k≤K0 π k ≥ c/K 0 , then Assumption (A2) is satisfied almost surely. This restriction is also used in [38] and [14]. It is noteworthy that a small subsample leads to higher computational efficiency. However, if the subsample size is too small, it is difficult to guarantee the statistical validity of the proposed method. Therefore, we provide two necessary conditions to establish the lower bound of the subsample size n. First, we require that the subsampled nodes cover all blocks with high probability. Specifically, define M K0 = { S ⊆ [N ] : for each 1 ≤ k ≤ K 0 , there exists i ∈ S such that g * N,i = k }, where g * N,i is the ground-truth label of node i. This implies that the elements in M K0 completely cover the K 0 blocks. Second, we require that the average degree of the subnetwork should increase with N . Specifically, let d i = Σ j∈S A ij denote the degree of node i in the subnetwork based on A S , for i = 1, · · · , N . Furthermore, let d̄ = Σ N i=1 d i /N denote the average degree of the subnetwork. Then, we assume that the expected average degree E(d̄) = Ω(log N ). Based on these two conditions, we provide the lower bound of the subsample size n in the following proposition. Proposition 2 (Subsample size). Under Assumptions (A1)-(A2), suppose S is collected by simple random sampling of n nodes from the entire network. If the subsample size is n = Ω(log N/ρ N ), then we have S ∈ M K0 and E(d̄) = Ω(log N ) with high probability. The proof is provided in Appendix B.3. Note that n = Ω(log N/ρ N ) means that there are positive constants c and N 0 such that n ≥ c log N/ρ N for all N > N 0 [36]. According to Proposition 2, the lower bound of the subsample size goes to infinity at a slower rate than N . In particular, if ρ N = (log N ) −1 , then the subsample size is n = Ω{(log N ) 2 }. Based on this proposition, we then demonstrate the consistency of the criterion. Consistency of SM-BIC We first establish the consistency of the SM-BIC under SBMs. Given a subsampled adjacency matrix A S , denote the underlying SM-BIC of SBM K0 (g * N , B * ) by ℓ * (K 0 ). Intuitively, fitting the observed network with the correct number of communities yields the largest value of the SM-BIC. Then, for any candidate SBM K , we compare its SM-BIC ℓ(K) with the underlying SM-BIC ℓ * (K 0 ) under three different cases, namely, under-fitting (K < K 0 ), correct fitting (K = K 0 ), and over-fitting (K > K 0 ). We analyze the divergence between ℓ(K) and ℓ * (K 0 ), which is ℓ(K) − ℓ * (K 0 ) = L K,K0 + R K,K0 . (3.1) It is noteworthy that L K,K0 is a log-likelihood ratio, which measures the goodness-of-fit of the estimated model compared with the underlying model, while R K,K0 collects the difference of the remaining penalty terms. Since R K,K0 is fixed for a given K and n, we focus on analyzing L K,K0 in the three cases mentioned above. Case 1: Under-fitting. In this case, we prove the upper bound for the log-likelihood ratio L K,K0 in the following theorem. Theorem 1 (Upper bound of the log-likelihood ratio under under-fitting). Suppose A is generated from SBM K0 (g * N , B * ). Furthermore, suppose Assumptions (A1)-(A2) hold and n satisfies the condition in Proposition 2. If K < K 0 , then L K,K0 = −Ω P (ρ N M ).
Combining the conclusion in Theorem 1 with (3.1), we have $\ell(K) - \ell^*(K_0) = -\Omega_P(\rho_N M)$. Moreover, note that the bound on the ratio $L_{K,K_0}$ is negatively related to $\rho_N$ and $M$, and goes to negative infinity as $N \to \infty$. This indicates that, under the proposed conditions, the SM-BIC avoids the under-fitting case with high probability.

Case 2: Correct fitting. We then analyze the log-likelihood ratio $L_{K,K_0}$ for the correct number of communities, i.e., $K = K_0$.

Theorem 2 (Convergence of the log-likelihood ratio under the SBM). Make the same assumptions as in Theorem 1; then the log-likelihood ratio $L_{K_0,K_0}$ converges in probability.

Moreover, in Case 2, Theorem 2 implies that the log-likelihood ratio $L_{K_0,K_0}$ converges faster in sparse networks.

Case 3: Over-fitting. Similar to Case 1, we present an upper bound of the log-likelihood ratio $L_{K,K_0}$ in the following theorem.

Theorem 3 (Upper bound of the log-likelihood ratio under over-fitting). Make the same assumptions as in Theorem 1. Then, for any candidate $K > K_0$, $L_{K,K_0} = O_P(\log N)$.

The proof of this theorem can be found in Appendix C.3. For $K > K_0$, by the definition of $R_{K,K_0}$, we have $R_{K,K_0} = \Omega(n + \log M)$. Then, together with the conclusion in Theorem 3, we have $\ell(K) - \ell^*(K_0) = -\Omega_P(n + \log M)$. Note that this bound is negatively related to the subsample size $n$, which indicates that the SM-BIC avoids over-fitting with increasing probability as $n$ grows. Therefore, Theorem 3 ensures that the subsample size $n = \Omega(\log N/\rho_N)$ is large enough to prevent this misspecification. To summarize, we establish the consistency of the SM-BIC under the SBM in the following corollary.

Corollary 1 (Consistency under the SBM). Suppose $A$ is generated from $\mathrm{SBM}_{K_0}(g^*_N, B^*)$ and Assumptions (A1) and (A2) hold. If the subsample size $n$ satisfies the condition in Proposition 2, then $P(\hat{K} = K_0) \to 1$ as $N \to \infty$.

Corollary 1 demonstrates that, for the SBM, the correct number of communities can be identified by the SM-BIC with high probability.

Now, we investigate the consistency of the SM-BIC under the DCSBM. We assume that the degree heterogeneity parameter $\psi$ is known, as is also done in the theoretical studies of [38] and [24]. In this case, the SM-BIC of $\mathrm{DCSBM}_K$ follows from criterion (2.5). We first investigate the convergence of the log-likelihood ratio under the correct specification.

Theorem 4 (Convergence of the log-likelihood ratio under the DCSBM). Suppose that $A$ is generated from $\mathrm{DCSBM}_{K_0}(g^*_N, B^*, \psi^*)$. Under Assumptions (A1) and (A2), if $n$ satisfies the condition in Proposition 2, then the log-likelihood ratio $L_{K_0,K_0}$ converges in probability.

The proof of Theorem 4 is provided in Appendix C.4. According to Theorem 4, under the DCSBM, the convergence of $L_{K,K_0}$ is also guaranteed when $K$ is correctly specified. Based on Theorem 4 and similar arguments, one can show that the conclusions of Theorems 1 and 3 hold under the DCSBM as well. Hence, the theoretical results for the DCSBM follow analogously.

Simulation models and performance measurements

We start with the generation mechanism of the networks. For a given $K_0$, we assume that the underlying node labels are generated independently as $g^*_{N,i} \sim \mathrm{Multinomial}(\pi)$ for all $i = 1, \cdots, N$, where $\pi = (1/K_0, \cdots, 1/K_0)^\top$. Second, we define the connectivity matrix as $B^* = \rho_N\{\beta 1_{K_0} 1_{K_0}^\top + (1-\beta) I_{K_0}\}$, where $1_{K_0} \in \mathbb{R}^{K_0}$ is filled with ones, $I_{K_0} \in \mathbb{R}^{K_0 \times K_0}$ is the identity matrix, and the out-in-ratio parameter $\beta \in (0,1)$ measures the divergence of connectivity within and between communities. We then evaluate the performance of the SM-BIC through the following three different examples under the SBM framework.
Example 1 (Consistency of the approximated SM-BIC). Let the number of communities $K_0$ vary from 2 to 5. For each $K_0$, let $N$ increase from 500 to 5,000. Furthermore, set the out-in-ratio parameter $\beta = 0.15$ and let the network density be $\rho_N = N^{-1/2}$. Then, according to Proposition 2, take the subsample size as $n = \lceil \zeta \log N/\rho_N \rceil$, where $\lceil x \rceil$ represents the smallest integer no less than $x$, and $\zeta$ is set to 1.0, 1.5, and 2.0, respectively.

Example 2 (The effect of network density). Let the number of communities $K_0$ vary from 2 to 5 and the entire network size $N$ increase from 1,000 to 5,000. Additionally, take the out-in-ratio parameter $\beta = 0.15$ and let the network density $\rho_N$ increase from $0.5N^{-1/2}$ to $1.5N^{-1/2}$. For each network setting, we take the subsample size as $n = \lceil 1.5 N^{1/2}\log N \rceil$.

To further evaluate the performance of the SM-BIC, we compare it with four existing approaches, namely, the method based on the Bethe Hessian matrix with moment correction (BHMC) proposed by [37], the network cross-validation (NCV) method proposed by [14], the network cross-validation method by edge sampling (ECV) proposed by [41], and the corrected Bayesian information criterion (CBIC) proposed by [33].

Example 4 (Comparison under the SBM). We generate the network from $\mathrm{SBM}_{K_0}(g^*_N, B^*)$ with $\beta = 0.2$ and $\rho_N = N^{-1/2}$. Furthermore, we let the network size $N$ increase from 3,000 to 5,000 and $K_0$ vary from 2 to 6, accordingly.

Example 5 (Comparison under the DCSBM). We follow the scenario proposed in [73]. The parameters $\psi_i$ are independently generated from a distribution with expectation 1; specifically, $\psi_i = \eta_i$ with probability $\alpha$, and $\psi_i = 1/3$ or $\psi_i = 5/3$ each with probability $(1-\alpha)/2$, where $\eta_i$ is uniformly distributed on the interval $[3/5, 7/5]$. The variance of $\psi_i$ is equal to $4\alpha/75 + 4(1-\alpha)/9$, which is a decreasing function of $\alpha$. We vary $\alpha$ from 0.4 to 0.8. The other parameters are set as in Example 4.

Throughout this simulation study, we set the maximum candidate to $K_{\max} = 10$. The random experiments are repeated $T = 100$ times to ensure a reliable evaluation. Additionally, for each repetition, we denote the selected number of communities by $\hat{K}_t$, for $t = 1, \cdots, T$. Then, to gauge the performance of the SM-BIC, we consider two measurements. First, the probability of correct identification is defined as $\mathrm{Prob} = \sum_{t=1}^T I(\hat{K}_t = K_0)/T$, where a larger Prob corresponds to more accurate model selection. Second, the average of the selected number of communities is defined as $\mathrm{Mean} = \sum_{t=1}^T \hat{K}_t/T$. All simulations are conducted on a Linux server with a 3.60 GHz Intel Core i7-9700K CPU and 16 GB RAM.

Simulation results

All simulation results are shown in Tables 1–5 and Figure 3. We draw the following conclusions from the different examples.

Example 1. The simulation results are presented in Table 1.

Table 1. Simulation results of the SM-BIC in Example 1. The network density is $\rho_N = N^{-1/2}$ and the subsample size is $n = \lceil \zeta \log N/\rho_N \rceil$. The measurements are provided and the average CPU time is also reported.

We make the following comments. First, as $n$ grows from $\lceil \log N/\rho_N \rceil$ to $\lceil 2\log N/\rho_N \rceil$, the probability of correct identification increases from 0.84 to 1.00 under the setting $K_0 = 5$ and $N = 500$. Second, as $N$ increases from 500 to 5,000, the probability of correct identification increases from 0.84 to 1.00 under the setting $K_0 = 5$ and $n = \lceil \log N/\rho_N \rceil$. Third, as the network size $N$ increases from 500 to 5,000, the average CPU time of each experiment does not exceed 10.70 seconds.
Hence, the SM-BIC is an efficient and consistent method for large-scale networks, and these results are consistent with our theoretical findings in Proposition 2 and Corollary 1.

Example 2. The simulation results are provided in Table 2. We obtain the following findings. First, as the network density $\rho_N$ increases from $0.5N^{-1/2}$ to $1.5N^{-1/2}$, the probability of correct identification increases to 1.00 for all $K_0 = 2, \cdots, 5$. Second, even in the sparsest case $\rho_N = 0.5N^{-1/2}$, as $N$ grows from 1,000 to 5,000, the probability of correct identification increases from 0.75 to 1.00. Hence, for large-scale networks, the proposed method allows for a higher level of sparsity.

Table 3. Simulation results of Example 3. In this study, the network density is $\rho_N = N^{-1/2}$ and the subsample size is $n = \lceil 1.5\log N/\rho_N \rceil$. Furthermore, for each network with $N$ nodes, the number of outlier nodes $m$ increases from 20 to 100. The measurements are provided and the average CPU time is also reported.

Example 3. The simulation results are provided in Table 3. We draw the following conclusions. First, as the number of outliers decreases from 100 to 20, the accuracy of recovering $K_0$ increases from 0.82 to 1.00 under the setting $N = 2{,}000$ and $K_0 = 5$. Second, as $N$ varies from 2,000 to 5,000, the probability of correct identification grows from 0.82 to 1.00 in the case of $K_0 = 5$. Therefore, for large-scale networks with arbitrary outliers, the SM-BIC method can accurately identify the number of communities with high probability.

Table 4. Simulation results of Example 4. The network density is $\rho_N = N^{-1/2}$ and the subsample size is $n = \lceil 1.5\log N/\rho_N \rceil$. The measurements of each method are provided and the average CPU time is also reported.

Table 5. Simulation results of Example 5. The network density is $\rho_N = N^{-1/2}$ and the subsample size is $n = \lceil 1.5\log N/\rho_N \rceil$. Moreover, the heterogeneity parameter $\alpha$ varies from 0.4 to 0.8. The measurements of each method are provided and the average CPU time is also reported.

Example 4. The comparison results are shown in Table 4 and Figure 3. We draw the following conclusions. First, the SM-BIC is more accurate than the ECV method in this study. Specifically, for the setting $N = 5{,}000$, when $K_0 = 4$ and $K_0 = 6$, the Prob of the ECV method is only 0.83 and 0.84, respectively, while the Prob of the SM-BIC is 1.00 in these cases. Second, the average computational time of the SM-BIC is much smaller than that of the BHMC, NCV, and CBIC, especially when $N$ is large. As shown in Figure 3, the average CPU time of these methods is further compared across diverse network sizes. We observe that the average CPU time of the SM-BIC is the smallest, while the ECV method is much more computationally expensive than the other algorithms, because in each iteration ECV performs matrix completion and estimates community labels from an $N \times N$ low-rank matrix.

Example 5. The comparison results are reported in Table 5. We draw the following conclusions. First, as $\alpha$ decreases from 0.8 to 0.4, the accuracy of the ECV method decreases from 1.00 to 0.00 under the settings $K_0 = 4, 5, 6$, whereas the SM-BIC method correctly identifies $K_0$ in these cases. Second, when $\alpha = 0.4$ and $K_0 = 2$, the average CPU times of the BHMC, NCV, ECV, and CBIC methods are 41.88s, 86.19s, 482.43s, and 68.75s, respectively, while that of the SM-BIC method is only 11.83s.
Thus, under the DCSBM, the SM-BIC is more robust than the ECV method with respect to degree heterogeneity, and more computationally efficient than all of the competing methods.

Real data analysis

Political blog dataset. The political blog dataset was collected and analyzed in [1]. The dataset consists of over one thousand blogs discussing US politics, with edges representing web links. The nodes are labeled as either "conservative" or "liberal", which can be treated as two well-defined communities. We only consider the largest connected component of this network, which consists of 1,222 nodes with community sizes of 586 and 636, and a network density of $\rho_N = 2.24\%$. The degree-corrected stochastic block model is believed to fit this network better than the stochastic block model [35,72]. Under the DCSBM framework, we therefore take the subsample size of the SM-BIC as $n = 1.5\log N/\rho_N = 475$, and compare the SM-BIC method with the other algorithms. Specifically, the NCV, CBIC, and SM-BIC estimate the number of communities as 2, with computation times of 5.57s, 3.82s, and 1.60s, respectively, while the BHMC and ECV estimate $\hat{K} = 7$ and $\hat{K} = 6$, respectively. The NCV, CBIC, and SM-BIC methods all give correct estimates of the number of communities, and the SM-BIC further outperforms the other two algorithms in terms of computational efficiency.

A house price dataset. This dataset is publicly available on the platform Kaggle (https://Kaggle.com) and contains housing transaction information in Beijing from 2011 to 2017. Here, we collect 6,000 samples traded in 2016, distributed over the "Feng Tai", "Chang Ping", and "Hai Dian" districts of Beijing. The nodes are the collected samples, and a network is obtained by randomly connecting node pairs in the same district with probability 0.1; that is, if nodes $i$ and $j$ are in the same district, we add an edge to the node pair $(i,j)$ with probability 0.1. As a result, this network has three well-defined communities with sizes 1,661, 2,365, and 1,974, respectively, and a network density of $\rho_N = 3.40\%$. We then apply the SM-BIC and the aforementioned methods to identify the number of communities for this network under the SBM and DCSBM frameworks, respectively. For the SM-BIC method, the subsample size is set to $n = 2\log N/\rho_N = 511$. The results are provided in Table 6. As shown in Table 6, the SM-BIC method correctly identifies the number of communities under both the SBM and DCSBM frameworks. Moreover, the SM-BIC takes only 8.75s for the SBM, which is only 11.0% of the time required by BHMC, 12.4% of NCV, and 1.4% of ECV. For the DCSBM, the SM-BIC takes 12.58s, which is only 16.3% of the time required by BHMC, 11.3% of NCV, 1.8% of ECV, and 9.5% of CBIC.

Concluding remarks

This work proposes a subsampling-based modified Bayesian information criterion (SM-BIC) to identify the number of communities in large-scale SBMs. We also extend this criterion to DCSBMs. Specifically, the technical conditions on the subsampling size are derived, and the consistency properties of the SM-BIC are established. In the context of large-scale networks, the proposed SM-BIC has substantial computational advantages over existing model selection methods; in particular, the computational complexity of the SM-BIC for both the SBM and DCSBM can be as low as $O\{N(\log N)^2\}$. Consequently, the SM-BIC method can be run even on a personal computer. Numerical studies further demonstrate these computational improvements.
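To make the end-to-end workflow concrete, the following minimal Python sketch runs the subsampled pipeline for the SBM: a truncated SVD plus K-means on the $N \times n$ sub-adjacency matrix, plug-in estimation of a symmetric connectivity matrix, and selection of $K$ by a BIC-type score. The helper names are ours, and the score uses a generic log-likelihood-minus-penalty form as a stand-in for the exact criterion (2.7); it is an illustration of the workflow under these assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def spectral_labels(A_S, K):
    # Truncated SVD of the dense N x n sub-adjacency matrix; O(Nn) for fixed K.
    U, _, _ = svds(A_S.astype(float), k=K)
    return KMeans(n_clusters=K, n_init=10).fit_predict(U)

def sm_bic_score(A_S, S, K):
    """BIC-type score for candidate K; the penalty form is illustrative only.
    A_S is the 0/1 N x n sub-adjacency matrix, S the sampled node indices."""
    N, n = A_S.shape
    g = spectral_labels(A_S, K)
    # Plug-in estimate of the connectivity matrix from block-wise edge counts.
    B = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            rows, cols = g == k, g[S] == l
            pairs = rows.sum() * cols.sum()
            B[k, l] = A_S[np.ix_(rows, cols)].sum() / max(pairs, 1)
    B = np.clip((B + B.T) / 2, 1e-10, 1 - 1e-10)   # enforce B_lk = B_kl
    P = B[g][:, g[S]]                              # entrywise edge probabilities
    loglik = np.sum(A_S * np.log(P) + (1 - A_S) * np.log(1 - P))
    M = N * n - n * (n + 1) // 2                   # independent entries in A_S
    return loglik - 0.5 * (K * (K + 1) / 2) * np.log(M) - N * np.log(K)

def select_K(A_S, S, K_max=10):
    return max(range(1, K_max + 1), key=lambda K: sm_bic_score(A_S, S, K))
```

A call such as `select_K(A_S, S)` then returns $\hat{K}$; with $n$ of order $\log N/\rho_N$, each candidate evaluation stays at the $O(Nn)$ cost discussed above.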
To conclude this work, we consider several interesting topics for future research. First, in this study we focus on reducing computational costs by subsampling the network only once; this idea can be extended to a resampling approach, which is currently under investigation. Second, informative subsamples are important for extracting useful information from the entire network. Subsampling strategies for independent big data have been extensively studied; see [53], [62], and [70] for further discussion. Based on these studies, it would be interesting to investigate subnetwork extraction methods with meaningful statistical interpretations for large-scale networks. Third, in this work, following [44], we assume that $K_0$ is fixed. However, allowing for a diverging $K_0$ is an interesting and challenging question, and we will work in this direction in future research.

Appendix A: Necessary notations and lemmas

In Appendix A, we introduce some necessary notation in Appendix A.1. Then, we give three useful lemmas for the subsequent theoretical proofs in Appendix A.2.

A.1. Notations

Given a label vector $g_N$, we define some necessary count statistics. Define the $K \times K$ count matrices $n_{g_N} = (n_{kl,g_N})_{1 \le k,l \le K}$ and $o_{g_N} = (o_{kl,g_N})_{1 \le k,l \le K}$. Let $p = (N_{1,g^*_N}, \cdots, N_{K,g^*_N})^\top/N$ denote the underlying block proportions, where $N_{k,g^*_N} = \sum_{i=1}^N I(g^*_{N,i} = k)$ represents the number of nodes belonging to the $k$th cluster. The corresponding quantities for two sets of labels $g_N$ and $g'_N$ are defined analogously. In addition, define $\tau$ as a permutation on $[K]$ and denote by $\|\cdot\|_\infty$ the maximum norm of a matrix.

For simplicity, we adopt the notation of [66] to characterize the log-likelihood function. Let $H_{g_N}$ be a $K \times K_0$ confusion matrix whose $(k,l)$-entry records the normalized overlap between block $k$ of $g_N$ and block $l$ of $g^*_N$. Then, for a fixed label vector $g_N$, the corresponding log-likelihood can be expressed as a function $F(o_{g_N}/M, n_{g_N}/M)$ of the scaled count statistics, and we further define its expectation as $G(H_{g_N}, B^*)$.

A.2. Useful lemmas

Here, we provide some useful lemmas, namely Lemmas 2–4, for the proof of the consistency of the SM-BIC. In statistics, Hoeffding's inequality provides an upper bound for the sum of bounded random variables, and was proved by [31].

Lemma 2 (Hoeffding's inequality). Let $x_i$, $i = 1, \cdots, N$, be mutually independent random variables such that $a_i \le x_i \le b_i$ almost surely, and consider the sum $S_N = \sum_{i=1}^N x_i$. Then, for all $s > 0$,
$$P\{|S_N - E(S_N)| \ge s\} \le 2\exp\left\{-\frac{2s^2}{\sum_{i=1}^N (b_i - a_i)^2}\right\}.$$

In the under-fitting case, without loss of generality, we start with $K = K_0 - 1$, and the following Lemma 3 shows that $G(H_{g_N}, B^*)$ is maximized by combining two existing communities in $g^*_N$.

Lemma 3 (Expectation of the log-likelihood function under under-fitting). Given the true labels $g^*_N$, suppose $g_N \in C(A_S, K_0 - 1)$; then maximizing the function $G(H_{g_N}, B^*)$ over $H_{g_N}$ achieves its maximum at a label vector $\bar{g}_N$ that merges two blocks $k$ and $l$ of $g^*_N$. Furthermore, suppose $\bar{g}_N$ gives the unique maximum (up to a permutation $\tau$); then there exists a positive constant $c_1 > 0$ such that the maximum is well-separated over the feasible set $\{H_{g_N} : H_{g_N} \ge 0,\ H_{g_N}1 = p\}$.

For the subsampled adjacency matrix $A_S$, consider $\|A_S\|_\infty = \max_{1 \le i \le N} \sum_{j \in S} |A_{ij}|$. The following Lemma 4 provides a concentration inequality to bound the variation in the adjacency matrix $A_S$, as proposed by [66].

Lemma 4 (Concentration inequality). Assume $g_N \in C(A_S, K)$ and define the centered count process $W_{g_N}$ accordingly, where $c_1(B^*)$ is a constant depending on $B^*$ and $M = Nn - n(n+1)/2$. Let $\omega_n = (\rho_N N \log n/M)^{1/2}$; then
$$P\Big\{\max_{g_N \in C(A_S,K)} \|W_{g_N}\|_\infty > \omega_n\Big\} \to 0 \quad \text{as } n, N \to \infty.$$
Furthermore, let $g_N \in C(A_S, K)$ be a fixed set of labels; then, for deviations of size at most $3mN$, a corresponding exponential tail bound holds, where $m$ is an integer and $c_2(B^*)$ is a constant depending on $B^*$.

Appendix B: Demonstrations of SM-BIC

In Appendix B, we use the BIC approximation to prove Lemma 1 in Appendix B.1. Furthermore, we provide the proofs of Propositions 1 and 2 in Appendices B.2 and B.3, respectively.

B.1. Proof of Lemma 1

The proof of the log-likelihood function approximation proceeds in two steps. First, we apply a Taylor approximation to the likelihood function $f(A_S|g_N)$. Then, we investigate its Hessian matrix.

Step 1. Assume that the likelihood function $f(A_S|g_N, \theta)$ attains its maximum at $\hat{\theta}$, so that $\partial f(A_S|g_N, \theta)/\partial\theta|_{\theta=\hat{\theta}} = 0$, and expand $\log f(A_S|g_N, \theta)$ around $\hat{\theta}$ by a Taylor expansion. Since $f(A_S|g_N, \theta)$ attains its maximum at $\hat{\theta}$, the Hessian matrix $D$ is negative definite. Let $\bar{D} = -D$; we then approximate $f(A_S|g_N)$ by integrating over $\theta$. Since $p(\theta)$ is a uniform prior on $\theta$, the integral reduces to a Gaussian-type integral up to a constant $c_0$. Considering that the matrix $\bar{D}$ is symmetric, we can perform the eigenvalue decomposition $\bar{D} = S^\top \Lambda S$ and denote the $k'$th diagonal element of $\Lambda$ by $\lambda_{k'}$, for $k' = 1, \cdots, K(K+1)/2$. Furthermore, we use the substitution $(\theta - \hat{\theta}) = S^\top\eta$. Then, the Jacobian matrix is $J(\eta) = \partial\theta/\partial\eta = S^\top$; thus $\det(J(\eta)) = 1$, where $\det(\cdot)$ denotes the determinant of a matrix. Evaluating the resulting Gaussian integral yields the approximation of $f(A_S|g_N)$.

Step 2. To complete the approximation of the likelihood function, we further study the determinant of $\bar{D}$. Recall that the number of independent observations in $A_S$ is $M$, and let $\{y_r\}_{r=1}^M$ denote these independent observations. As $M$ grows large, we apply the weak law of large numbers to the random variables $x_r = M\log f(y_r|g_N, \theta)$, $r = 1, \cdots, M$, and obtain convergence of their average with high probability. Therefore, every element of the observed Fisher information matrix is of order $O_P(M)$. This accomplishes the proof.

B.2. Proof of Proposition 1

To demonstrate the effectiveness of the SM-BIC, we first prove the statement regarding its computational complexity in Proposition 1. Since the DCSBM is a generalization of the SBM, we discuss the computational complexity of the SM-BIC for the DCSBM. According to the SM-BIC, there are two main procedures for determining the number of communities: node-pair subsampling and the model selection algorithm. We analyze the computational complexity of each procedure in detail.

First, the node-pair subsampling procedure includes two steps: the time complexity of collecting the node set $S$ is $O(N)$ according to [60], and that of forming the $N \times n$ subsampled adjacency matrix is no more than $O(Nn)$. Thus, the computational complexity of the network subsampling procedure is $O(Nn)$.

Second, we perform the model selection algorithm to identify $K_0$ for the DCSBM. For each candidate $K$, the SM-BIC evaluates $\ell(K)$ by the following steps: (1) perform spectral clustering on the subsampled adjacency matrix $A_S$ using a truncated SVD, which takes $O(Nn)$ time [22,45], followed by the parameter estimation and criterion evaluation steps. After repeating steps (1)–(4) $K_{\max}$ times, we obtain the optimal choice of the number of communities. Since $K_{\max}$ is a constant, the time complexity of the SM-BIC is $O(Nn)$. This proves Proposition 1.

B.3. Proof of Proposition 2

In this section, we prove Proposition 2 in two steps. Under the assumptions of Proposition 2, we first prove that the selected node set $S$ covers all $K_0$ blocks with high probability.
Then, we demonstrate that the expected average degree of the subnetwork satisfies $E(\bar{d}) = \Omega(\log N)$ with high probability.

Step 1. We first represent the event $S \in \mathcal{M}_{K_0}$ in terms of simpler events. Specifically, we describe the event $e = \{S : \forall k \in [K_0],\ \exists i \in S,\ g^*_{N,i} = k\}$ using several simple events in order to calculate its probability. Denote $e_k = \{S : \sum_{i \in S} I(g^*_{N,i} = k) > 0\}$, for $k = 1, \cdots, K_0$. Then, we have $e = \cap_{k=1}^{K_0} e_k$. Considering simple random sampling with replacement, the probability of choosing a node from the $k$th block is $N_{k,g^*_N}/N$ in each draw. Then, $P(e_k^c) = (1 - N_{k,g^*_N}/N)^n$, for $k = 1, \cdots, K_0$. As a result, by the union bound, $P(e^c) \le K_0(1 - N_{\min,g^*_N}/N)^n$. Consider a subsample size $n$ such that $\varepsilon \ge K_0(1 - N_{\min,g^*_N}/N)^n$; then $n \ge \log(K_0/\varepsilon)/\log\{(1 - N_{\min,g^*_N}/N)^{-1}\}$. Under Assumption (A2), we can find a constant $c_0$ such that $N_{\min,g^*_N}/N > c_0/K_0$. As a result, the subsample size satisfies $n \ge \log(K_0/\varepsilon)/\log\{K_0/(K_0 - c_0)\}$. If $K_0$ can go to infinity with $N$, we have $n = \Omega[\log(K_0/\varepsilon)/\log\{K_0/(K_0 - c_0)\}]$. This condition can be simplified by taking $\varepsilon = 1/N$ and $K_0 = O(1)$; thus, $n = \Omega(\log N)$ in this case. Therefore, under the assumptions of Proposition 2, we have $S \in \mathcal{M}_{K_0}$ with high probability.

Step 2. With network density $\rho_N$ and under Assumptions (A1)–(A2), the expected average degree satisfies $E(\bar{d}) = \Omega(\rho_N n)$. Furthermore, since $n = \Omega(\log N/\rho_N)$, we have $E(\bar{d}) = \Omega(\log N)$. Hence, we have proved Proposition 2.

Appendix C: Theoretical proofs for the SM-BIC

Here, we first establish the consistency of the SM-BIC under the SBM. Specifically, we demonstrate the claim of Theorem 1 in Appendix C.1, and give the proofs of Theorems 2 and 3 in Appendices C.2 and C.3, respectively. Then, we discuss the theoretical property of the SM-BIC under the DCSBM, i.e., Theorem 4, in Appendix C.4.

C.1. Proof of Theorem 1

Without loss of generality, we start with $K = K_0 - 1$. To prove Theorem 1, we focus on analyzing the log-likelihood ratio $L_{K_0-1,K_0}$. Specifically, we accomplish the proof in three steps: we first analyze the node assignments obtained with $K_0 - 1$ blocks in detail, then discuss the likelihood functions entering $L_{K_0-1,K_0}$, and finally establish the upper bound for $L_{K_0-1,K_0}$.

Step 1. We discuss the community assignments based on $\mathrm{SBM}_K$. In the under-fitting case $K = K_0 - 1$, we define a merge mechanism. First, we give the merged label vector set: define $e_{K_0-1} = \{g_N \in C(A_S, K_0-1) : g_N = U_{k,l}(g^*_N),\ 1 \le k \ne l \le K_0\}$. The assignments in $e_{K_0-1}$ therefore merge two blocks of $g^*_N$ into a single block. By Lemma 3, without loss of generality, assume that the maximum of $G(H_{g_N}, B^*)$ is achieved at $\bar{g}_N = U_{K_0-1,K_0}(g^*_N)$. Then, we establish the corresponding merged connectivity matrix $\bar{B} \in \mathcal{B}_{K_0-1}$. Define $U_{k,l}(g^*_N, B^*)$ to represent merging blocks $k$ and $l$ in $B^*$ by taking weighted averages of the corresponding entries, where $1 \le u(k) \le K_0 - 1$ and $1 \le u(l) \le K_0 - 1$ are the new block labels of communities $k$ and $l$, respectively.

Step 2. We now study the log-likelihood ratio $L_{K_0-1,K_0}$ and demonstrate the critical equation (C.1) in this step. The proof of (C.1) proceeds in two sub-steps. We first consider labels $g_N$ far away from $g^*_N$ and close to $\bar{g}_N$ (up to a permutation $\tau$); specifically, these form the set $J^-_{\delta_n}$, where $\delta_n \to 0$ slowly. We then apply the lemmas provided earlier to treat the remaining case.

Step 2.1. For $g_N \in J^-_{\delta_n}$, we prove the equality (C.1). By Lemma 4, there exists a constant $c_1$ such that the bound (C.2) holds, where the inequality follows because $\gamma(\cdot)$ is Lipschitz on any interval bounded away from 0 and 1; recall that $\omega_n = (\rho_N N\log n/M)^{1/2}$.
Then, for any $g_N \in J^-_{\delta_n}$, we bound the supremum of the log-likelihood and hence obtain (C.3), which is derived from (C.2); if $\delta_n \to 0$ slowly enough that $\delta_n/\omega_n \to \infty$, we obtain (C.4). Since the maximum is unique up to $\tau$, we observe the following. Note that $F(\cdot,\cdot)$ has a continuous derivative in a neighborhood of $(o_{g_N}/M, n_{g_N}/M)$, and by Lemma 3 this applies to any $(Q, q)$ in that neighborhood. Hence,
$$F(o_{\bar{g}_N}/M, n_{\bar{g}_N}/M) - F(o_{g_N}/M, n_{g_N}/M) \le -c_1\rho_N m/N.$$
Furthermore, bounding the corresponding supremum, we conclude as follows: (C.6) is obtained from (C.5); the equality (C.7) holds because the number of all community assignments in $\tau(g_N)$ is $(K_0-1)^{K_0-1}$; and the equality (C.8) results from $M/N = \Omega(n) = \Omega(\log N/\rho_N)$. Therefore, by (C.4) and (C.8), we have accomplished the proof of (C.1).

Step 3. We then use the conclusion in (C.1) to derive the lower bound of $L_{K_0-1,K_0}$. We start by analyzing the bias of the maximum likelihood estimator of the connectivity matrix elements, which gives the bounds (C.9), (C.10), and (C.11), derived from Hoeffding's inequality [31] presented in Lemma 2. Hence, we obtain a decomposition in which $I$ denotes the set of indices affected by the merge. For convenience, we split this decomposition into the two terms (C.12) and (C.13), where $X_1$ represents the bias within the un-merged communities (i.e., $1 \le k, l \le K_0 - 2$) and $X_2$ measures the bias within the merged communities (i.e., $(k,l) \in I$). First, by a Taylor expansion we obtain (C.14) and (C.15), where $\Delta_{kl} = \hat{B}_{kl} - B^*_{kl}$ in equality (C.14), and (C.15) results from (C.9). Hence, the upper bound of (C.12) is $O_P(\rho_N)$. Then, we focus on (C.13), which we bound by another Taylor expansion. Therefore, we have accomplished the proof of Theorem 1.

C.2. Proof of Theorem 2

Based on the proof of Theorem 1, we prove the convergence of the penalized log-likelihood function $\ell(K_0)$ via the following two steps.

C.3. Proof of Theorem 3

Based on the proof of Theorem 2, we define the log-likelihood ratio $L_{K,K_0}$ for $K > K_0$. To provide its upper bound, we proceed in three steps. First, we introduce a set of community assignments formed by splitting the underlying node assignments into $K$ blocks. Second, we study the corresponding likelihood functions entering $L_{K,K_0}$. Third, based on these results, we use the preceding lemmas to complete the proof.

Step 1. We first define the community assignment set obtained by splitting the underlying $g^*_N$ into $K$ blocks. Intuitively, embedding a $K_0$-block model in a larger model can be achieved by appropriately splitting the labels $g^*_N$. Specifically, we define the subset $e_K = \{g_N \in C(A_S, K) : \text{each row of } H_{g_N} \text{ has at most one nonzero entry}\}$. Then, any $g_N \in e_K$ satisfies the following: every block in $g_N$ is a subset of an existing block in $g^*_N$. Accordingly, we define a surjective function $h : [K] \to [K_0]$ describing the assignments in $H_{g_N}$; in other words, for any $k \in [K]$, $h(k) \in [K_0]$, and for all $a \in [K_0]$, $h^{-1}(a)$ is nonempty.

Step 2. We then discuss the log-likelihood ratio $L_{K,K_0}$. Note that, in this case, $G(H_{g_N}, B^*)$ is maximized at any $g_N \in e_K$ with value $\sum_{1 \le k \le l \le K_0} p_k p_l \gamma(B^*_{kl})$. Denote the optimal value by $G^* = \sum_{1 \le k \le l \le K_0} p_k p_l \gamma(B^*_{kl})$. Let $J^+_{\delta_n} = \{g_N \in C(A_S, K) : G(H_{g_N}, B^*) - G^* < -\delta_n\}$, for $\delta_n \to 0$ slowly enough. Then, to analyze the log-likelihood ratio $L_{K,K_0}$, we consider the likelihood $\sup_{B \in \mathcal{B}_K} \log f(A_S|g_N, B)$ in two cases, namely, $g_N \in J^+_{\delta_n}$ and $g_N \notin J^+_{\delta_n}$.

Step 2.1. We analyze $\sup_{B \in \mathcal{B}_K} \log f(A_S|g_N, B)$ by considering $g_N \in J^+_{\delta_n}$.
By Lemma 4, we obtain the corresponding bound; therefore, for any $g_N \in e_K$, we obtain (C.21).

Step 2.2. We investigate the likelihood function $\sup_{B \in \mathcal{B}_K} \log f(A_S|g_N, B)$ for $g_N \notin J^+_{\delta_n}$. Treating $H_{g_N}$ as a vector, $\{H_{g_N} : g_N \in e_K\}$ is a subset of the union of some of the $(K - K_0)$-faces of the polyhedron $P_{H_{g_N}}$. For every $g_N \notin e_K$ with $g_N \notin J^+_{\delta_n}$, let $g_\perp$ be such that $H_{g_\perp} := \operatorname{argmin}_{H_{g'_N}: g'_N \in e_K} \|H_{g_N} - H_{g'_N}\|_2$. Then, $H_{g_N} - H_{g_\perp}$ is perpendicular to the corresponding $(K - K_0)$-face. This orthogonality implies that the directional derivative of $G(\cdot, B^*)$ along the direction of $H_{g_N} - H_{g_\perp}$ is bounded away from 0.

Step 3. Based on the assertion $L_{K,K_0} \le \mu_N \log N$, we now bound the divergence of $L_{K,K_0}$. According to (C.19) in the proof of Theorem 2, we obtain (C.23), where the last inequality follows from (C.19). Hence, $L_{K,K_0} = O_P(\mu_N \log N)$, where $\mu_N = O_P(1)$ as $n, N \to \infty$. Therefore, we have accomplished the proof of Theorem 3.

C.4. Proof of Theorem 4

Now, we prove the convergence of the log-likelihood ratio for the DCSBM by the following two steps.
Delimiting Maximal Kissing Configurations in Four Dimensions

How many unit n-dimensional spheres can simultaneously touch, or kiss, a central n-dimensional unit sphere? Beyond mathematics, this question has implications for fields such as cryptography and the structure of biological and chemical macromolecules. The kissing number is only known for dimensions 1-4, 8 and 24 (2, 6, 12, 24, 240, and 196560, respectively), and is only particularly obvious for dimensions one and two. Indeed, in four dimensions it is not even known whether the Platonic polytope unique to that dimension, known as the 24-cell, is the unique kissing configuration. We have not been able to prove that the 24-cell is unique, but, using a physical approach based on the Hopf map from four to three dimensions, we for the first time delimit the possible other configurations which could be kissing in four dimensions.

In dimensions 8 and 24, Levenstein [13] and Odlyzko and Sloane [14] were able to prove in 1979 that the kissing numbers are 240 and 196560, respectively. Despite the existence of a Platonic polytope (known as the 24-cell) unique to four dimensions, which is a kissing configuration of 24 hyperspheres, proving that 24 is indeed the kissing number in four dimensions was a more stubborn problem than for 8 or 24 dimensions. Indeed, the Delsarte method as typically applied was shown [15] to bound the kissing number only to 25 or less. A few years ago, using ingenious and extensive applications of Delsarte's method, Musin [16], and subsequently others using semidefinite programming [17,18], were able to prove that the kissing number in four dimensions is in fact 24. But it is still not known whether the 24-cell is the only configuration in 4 dimensions with kissing number 24. The 24-cell is shown in Figure 1.

FIG. 1. 24-cell in a Schlegel-like representation.

Our study of kissing configurations in 4 dimensions is aided by what is known as the Hopf map from four to three dimensions. The Hopf map takes points $(w, z)$ in four dimensions, with the four coordinates given by the components of the complex numbers $w$ and $z$ and $|w|^2 + |z|^2 = 1$ on the surface of a four-dimensional sphere, to
$$(w, z) \mapsto (2w\bar{z},\ |w|^2 - |z|^2). \quad (1)$$
We check that $|2w\bar{z}|^2 + (|w|^2 - |z|^2)^2 = (|w|^2 + |z|^2)^2 = 1$, so this does map $S^3$ to the ordinary sphere $S^2$. If one fixes a point of the ordinary sphere, say $(a, t)$, where $a$ is complex, $t$ is real and $|a|^2 + t^2 = 1$, then what is known as its fiber, i.e., the set of all points which map to it, is a circle $(w, z) = e^{i\theta}(w_0, z_0)$, $\theta \in [0, 2\pi)$, where $(w_0, z_0)$ is any preimage of $(a, t)$. Further details, discussion and a proof of the Hopf map from $S^3$ to $S^2$ are given in [19].

Intuitively or physically, one can think of the coordinates of points on the surface of a unit sphere in four dimensions ($S^3$) as two spherical polar coordinates on a 2-sphere ($S^2$, the surface of a 3-dimensional sphere) together with a third coordinate, an azimuthal angle around a circle at the point on the 2-sphere. Since a point on $S^2$ lifts to a circle on $S^3$, henceforth points on $S^2$ will be called circles; e.g., when we say a circle on the north pole, we mean a point on the north pole of $S^2$ that after Hopf fibration becomes a circle on $S^3$. We call kissing points those on $S^3$ separated by a distance larger than or equal to 1. A representation of the 24-cell as six circles, each with four points on it, is shown in Figure 2. This "Hopf perspective" of the 3-sphere gives a simple appreciation of why the 24-cell is a kissing configuration [20]. We derive a relation to obtain distances on the 3-sphere ($d_3$) from polar coordinates on the 2-sphere and the azimuthal angle on a circle ($\theta_i$).
If we have two circles on $S^2$ separated by $d_2$ and lift them to $S^3$ using the Hopf map, the distance on $S^3$ ($d_3$) is given by Eq. (4), where $\Phi_{ij}$ is an angle that depends on the original coordinates on $S^2$, and where $\phi_i$ and $\alpha_i$ are the polar coordinates of point $i$ on $S^2$. We observe that when one circle is on the north pole, then $\Phi_{ij} = 0$ no matter where we place the other circle on $S^2$.

We can define a minimum separation angle $\theta_{\min}$ for kissing points on $S^3$. This angle can be obtained by imposing $d_3 \ge 1$ in Eq. (4). The resulting expression must be taken with care: when $d_2 > \sqrt{3}$, the argument of $\cos^{-1}$ is larger than one and makes no formal sense, but it tells us that there is no minimum angle between points on different circles separated by that large a distance. Thus, any points on $S^3$ coming from different circles separated by a distance $d_2 > \sqrt{3}$ are always kissing points. From the above we easily deduce that if we have a kissing configuration and we add the same constant $c$ to all angles, the resulting configuration remains kissing.

After a rigid-body rotation in four dimensions, points on the same circle generally move to different circles. Points only remain on the same circle after a rotation if they are antipodal or, in other words, if their angles are separated by $\pi$ radians. If we have two points on $S^3$ separated by $d_3$, then their circles on $S^2$ can be separated by at most a maximum distance $d_{2,\max}$. A $6 \times 4$ kissing configuration (the 24-cell) becomes, after a rotation, a $12 \times 2$ configuration, since each circle has two pairs of antipodal points. We call a configuration $N \times n$ irreducible if $N$ is the maximum integer that can be obtained after any rotation of $S^3$. Any configuration with only one point, or two antipodal points, per circle is irreducible. A $3 \times 6$ kissing configuration is reducible to $9 \times 2$ after a rotation of $S^3$; see Fig. 3.

Anstreicher [21] showed that the 24-cell is the unique antipodal ($12 \times 2$) kissing configuration. Can there exist a 24-point kissing configuration of the form $11 \times 2 + 2 \times 1$? The only known kissing configuration with 11 antipodal pairs ($11 \times 2$) that is not simply a subset of the 24-cell was recently found by Cohn and Woo [22]; it is shown in Figure 4. Coordinates of this configuration can be obtained from the Hopf map using Eq. (3) with
a = 0, t = 0, θ = (n − 1/2)π/3 (n = 1, 2, 3, 4, 5, 6);
a = ±i√3/2, t = −1/2, θ = π/2, 3π/2,
which corresponds to a $1 \times 6 + 8 \times 2$ configuration that is reducible to $11 \times 2$ after a rotation in 4D.

We now prove that kissing configurations of the form $11 \times 2 + 2 \times 1$ are not possible. For suppose there were such a configuration; then, by removing each of the singletons, as we show below, one would get a kissing configuration of the form $12 \times 2$, yielding two different kissing configurations with twelve antipodal pairs and thus contradicting Anstreicher [21]. As we mentioned, each point in $S^3$ can be represented by a point on $S^2$ and an angle. Let us say that, for $11 \times 2 + 2 \times 1$, the angles are named $\alpha_{i,k}$, where $i = 1, \ldots, 13$; $k = 1, 2$ for $i \le 11$; and $k = 1$ for $i = 12, 13$. Antipodal points are on the same circle and satisfy $\alpha_{i,1} = \alpha_{i,2} + \pi$. So we must check that the point with $\alpha_{12,2} = \alpha_{12,1} + \pi$ is kissing, in order to prove that no irreducible $11 \times 2 + 2 \times 1$ kissing configuration exists. The distance on $S^3$ between two points is given by Eq. (4), which we can rewrite in terms of $d_{i,j}$, the distance on $S^2$ between circles $i$ and $j$, and $\Phi_{i,j}$, an angle that depends on the relative positions of circles $i$ and $j$, as previously stated. We remove a point to obtain $11 \times 2 + 1 \times 1$; since this is a kissing configuration, the kissing inequality holds for each $j = 1, \ldots, 11$.
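As a concrete check of the claim that the 24-cell is a kissing configuration, the following short Python sketch (ours, not part of the original analysis) verifies numerically that the 24 unit quaternions forming the 24-cell's vertices are pairwise separated by a chord distance of at least 1 on $S^3$:

```python
import itertools
import numpy as np

# Vertices of the 24-cell as unit quaternions: 8 of type (+-1, 0, 0, 0)
# and 16 of type (+-1/2, +-1/2, +-1/2, +-1/2).
verts = [s * np.eye(4)[i] for i in range(4) for s in (1, -1)]
verts += [0.5 * np.array(signs) for signs in itertools.product((1, -1), repeat=4)]
verts = np.array(verts)

# Kissing condition on S^3: every pairwise chord distance is >= 1
# (equivalently, angular separation >= 60 degrees).
d = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
off_diag = d[~np.eye(len(verts), dtype=bool)]
print(len(verts), off_diag.min())   # prints: 24 1.0
```

The minimum off-diagonal distance is exactly 1, attained for example between (1, 0, 0, 0) and (1/2, 1/2, 1/2, 1/2), so all 24 spheres kiss.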
Using this proof, we also show that there can be no kissing configurations of the form $10 \times 2 + 3 \times 1$ (and thus certainly no configurations of the form $10 \times 2 + 4 \times 1$) or of the form $9 \times 2 + 6 \times 1$. Indeed, if there were a kissing configuration of the form $10 \times 2 + 3 \times 1$ and an antipodal point were added to one of the singletons, one would have a configuration of the form $11 \times 2 + 2 \times 1$, which we just showed is not possible. As above, if one adds an antipodal point to a $10 \times 2 + 3 \times 1$ configuration, the antipodal point is kissing with the $10 \times 2$ antipodal points and of course kissing with its antipodal partner. The only thing new to be shown is that it is kissing with the other two singletons.

Let us say that a unit sphere $p_1$ is in the cover set of another unit sphere $p_2$ ($p_1 \in \mathrm{cov}(p_2)$) if it is not possible to place a third unit sphere at the antipode of $p_1$ and obtain a kissing configuration. This relation is clearly symmetric, $p_1 \in \mathrm{cov}(p_2) \Rightarrow p_2 \in \mathrm{cov}(p_1)$, and antitransitive: $p_1 \in \mathrm{cov}(p_2)$ and $p_2 \in \mathrm{cov}(p_3)$ imply $p_1 \notin \mathrm{cov}(p_3)$. This can be represented graphically using graphs where triangles are not allowed: each point in the graph is a unit sphere, and a bond between two spheres implies covering. In Figure 5 we show the covering possibilities for 3 spheres, which show that a $10 \times 2 + 3 \times 1$ kissing configuration would imply the existence of a forbidden kissing configuration. In these graphs, the existence of a sphere without bonds would imply that we can obtain an $(N+1) \times 2 + (n-1) \times 1$ kissing configuration from any $N \times 2 + n \times 1$ kissing arrangement. If the graph contains a sphere with just one bond, we would be able to obtain an $(N+1) \times 2 + (n-2) \times 1$ configuration from $N \times 2 + n \times 1$. On the other hand, if we have a sphere $p_1$ with $n/2$ bonds, we can remove all spheres not in $\mathrm{cov}(p_1)$, and then, after adding antipodal points to the spheres in $\mathrm{cov}(p_1)$, we obtain an $(N + n/2) \times 2$ kissing configuration from $N \times 2 + n \times 1$.

Next, let us show that kissing configurations of the form $9 \times 2 + 6 \times 1$ cannot exist. If any of the six singleton spheres has a number of elements in its cover set less than or greater than 2, then we would get a forbidden kissing configuration, as demonstrated above. Then, the only graph to analyze is shown in Fig. 6, which would imply the existence of two different $12 \times 2$ kissing configurations.

We have not been able to continue this line of potential proof of the uniqueness of the 24-cell to configurations of the form $8 \times 2 + 8 \times 1$. But we are able to delimit the possible maximal kissing configurations in four dimensions to those deriving from at least sixteen circles on $S^2$. In trying to find a $16 \times 1$ configuration, it is easy to construct one analytically starting from $3 \times 5 + 1 \times 1$, where the first 3 circles are equispaced on the equator and the last one is on the north pole. We place, for example, θ = 0, 61, 122, 185, 250 degrees on the circles at the equator and θ = 300 degrees for the circle at the pole; this configuration is kissing and, after a rotation in 4D, becomes a $16 \times 1$ configuration, since no point is antipodal to any other. If we instead put θ = nπ/3, n = 0, 1, 2, 3, 4, the configuration is also kissing, but after a rotation it changes to an irreducible $6 \times 2 + 4 \times 1$. The configuration of the form $n \times 1$ ($n$ the number of circles on $S^2$) with the largest $n$ of which we are aware has $N = 22$ [23]. While we have not been able to prove that the 24-cell is the unique kissing configuration in four dimensions, for the first time we have been able to delimit the space of other configurations that could possibly be kissing.
We hope that our findings and approach may be helpful in learning more about kissing configurations in four and higher dimensions.
A multidisciplinary approach to tackling invasive species: barcoding, morphology, and metataxonomy of the leafhopper Arboridia adanae

The leafhopper genus Arboridia includes several species that feed on Vitis vinifera and cause leaf chlorosis. We report the first alien Arboridia infestation in Italy in 2021 in an Apulian vineyard. To confirm the taxonomic status of the species responsible for the crop damage, and to reconstruct its demographic history, we barcoded individuals from Apulia together with Arboridia spp. from Crete (Greece), A. adanae from Central Turkey and other specimens of the presumed sister species, A. dalmatina from Dalmatia (Croatia). Molecular phylogenies and barcoding gap analysis identified clades not associated with sampling locations. This result is incongruent with classical specimen assignment and is further supported by morphological analyses, which did not reveal significant differences among the populations. Therefore, we propose A. dalmatina as a junior synonym of A. adanae, which would become the only grapevine-related Arboridia species in the eastern Mediterranean. To further characterise A. adanae evolution, we performed a molecular clock analysis that suggested a radiation during the Pleistocene glaciations. Finally, to assess whether the Apulian individuals carried microorganisms of agricultural relevance, we sequenced their bacterial microbiota using 16S rRNA amplicon sequencing, identifying three phytopathogens not generally associated with Arboridia activities, as well as Wolbachia in one Apulian haplogroup. We discuss the agricultural implications of this infestation.

Phylogenetic, barcoding gap and molecular clock analyses

In order to resolve taxonomic questions from the molecular perspective and test phylogenetic hypotheses, a total of 23 individuals were sequenced (five each from Apulia, Crete, and Dalmatia, and eight from Turkey). Total DNA was purified from lyophilised and homogenised individuals with the NucleoSpin Tissue kit (MACHEREY-NAGEL GmbH & Co. KG), according to the manufacturer's instructions. The purified DNA was eluted in 30 μl of Buffer BE.

The sequenced COI regions were visualised using Chromas software (Technelysium Pty Ltd). Forward and reverse reads were assembled using a Biopython script 16; the assembled sequences were then uploaded to NCBI GenBank (accession numbers provided in Table S1). A total of 16 Arboridia COI sequences were downloaded from the NCBI nucleotide database, nine belonging to A. kakogawana and seven to A. maculifrons Vilbaste, 1968 (Table S1). All sequences were aligned using MAFFT version 7 17,18 and trimmed manually at the 3' and 5' ends in order to remove gaps and ensure the reliability of the phylogenetic result. The final dataset comprised 637 aligned base pairs (bp) from 39 samples. We used this dataset to infer phylogenies under a Maximum Likelihood (ML) framework with the freeware RAxML 19, using a GTR + Gamma replacement model with branch robustness assessed with 1,000 bootstrap replicates. FigTree (version 1.4.4) was used for topology visualisation and figure preparation.
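A compact Python sketch of the alignment-and-tree steps just described is given below. File names, random seeds, and the trimming rule are placeholder assumptions; in particular, the manual 3'/5' trimming is approximated here by keeping the span between the first and last gap-free columns.

```python
import subprocess
from Bio import AlignIO

# Align raw COI sequences with MAFFT (input/output names are placeholders).
subprocess.run("mafft --auto coi_raw.fasta > coi_aln.fasta",
               shell=True, check=True)

# Trim the alignment ends: keep the span between the first and last
# columns that contain no gaps, a simple stand-in for the manual step.
aln = AlignIO.read("coi_aln.fasta", "fasta")
cols = [i for i in range(aln.get_alignment_length())
        if "-" not in aln[:, i]]
start, end = cols[0], cols[-1] + 1
AlignIO.write(aln[:, start:end], "coi_trimmed.fasta", "fasta")

# ML inference with RAxML: GTR+Gamma, rapid bootstrapping with 1,000
# replicates plus a best-tree search (-f a); seeds are arbitrary.
subprocess.run(["raxmlHPC", "-f", "a", "-m", "GTRGAMMA",
                "-p", "12345", "-x", "12345", "-N", "1000",
                "-s", "coi_trimmed.fasta", "-n", "arboridia"],
               check=True)
```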
To analyse the COI barcoding gap, we used our Arboridia sequences together with all the Erythroneurini COI sequences available in the NCBI nucleotide database in November 2022. Sequences were aligned using MAFFT version 7 17,18, and a custom Python script was used to trim the head and tail of the alignment to avoid gaps. The resulting alignment comprised 2,875 sequences and 442 bp. This alignment was used to calculate the pairwise distance matrix using the DistanceCalculator class of the TreeConstruction module of Biopython 16 with the 'identity' model. To visualise the distribution of genetic distances and obtain the barcoding gap plot, the intraspecific and interspecific genetic distances were plotted in a histogram using the matplotlib library, excluding those involving the species A. dalmatina and A. adanae. Outlier sequences were detected using the first quartile ($q_{1/4}$), the third quartile ($q_{3/4}$), and the interquartile range (IQR) of the pairwise distance distributions, according to
$$q_{1/4} - 1.5 \times \mathrm{IQR} \quad (1)$$
$$q_{3/4} + 1.5 \times \mathrm{IQR} \quad (2)$$
The same definitions of intraspecific and interspecific distributions were applied to the pairwise distances. Finally, the genetic distances between the sequenced Arboridia specimens were processed, plotted on the histogram, and highlighted using the same Python script.

We also employed the dataset used for the ML analysis in a Bayesian framework to estimate divergence times between species with BEAST2 20. After model selection, we employed a birth and death model as the tree prior and a relaxed log-normal clock, and set the tree topology to the ML topology. To calibrate the tree, we added eleven COI sequences of a species belonging to Dikraneurini (Dikrella cruentata Gillette, 1898), the closest tribe for which COI sequences and a fossil calibration were available, as well as fourteen COI sequences belonging to Mileewa Distant, 1908, a genus of the tribe Mileewini and subfamily Mileewinae, the closest Cicadellidae subfamily for which COI sequences and a fossil calibration were also available (Table S1). We calibrated the tree root using the fossil of a Dikraneurini gen. sp. 21, according to Yan et al. (2022) 12, to set the Dikraneurini–Erythroneurini split at 17.5–90 MYA using a normal distribution (mean 53.6 MYA, standard deviation 18.4), and the fossil of Youngeawea bicolorata (Mileewinae: Mileewini) to set the minimum divergence time between Mileewa and Typhlocybinae at 44 MYA 22. The analysis was run for 100 million Markov chain Monte Carlo (MCMC) iterations, or until it reached convergence, sampling every 10,000 steps after a 10% initial burn-in. We used Tracer 1.7.1 23 to assess convergence, which was considered reached when all variables had an Effective Sample Size (ESS) > 200 and a bell-shaped posterior distribution. Substitution saturation was checked using DAMBE 24,25, considering Xia's observed index of saturation 26.
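Assuming Eqs. (1)–(2) take the standard Tukey-fence form shown above, the outlier screening and the bounds of the barcoding gap can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def tukey_fences(d):
    # Eqs. (1)-(2): 1.5 x IQR fences used to screen outlier distances.
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def barcoding_gap(intra, inter):
    """Bounds of the barcoding gap after discarding fence outliers."""
    intra, inter = np.asarray(intra), np.asarray(inter)
    lo_a, hi_a = tukey_fences(intra)
    lo_e, hi_e = tukey_fences(inter)
    intra = intra[(intra >= lo_a) & (intra <= hi_a)]
    inter = inter[(inter >= lo_e) & (inter <= hi_e)]
    # A pairwise distance d with intra.max() < d < inter.min() lies
    # inside the barcoding gap, as for the Clade A vs Clade B distances.
    return intra.max(), inter.min()
```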
Morphological analysis

Forty-two Arboridia individuals were collected from the same sampling locations as those used for the molecular analyses: 21 from Turkey, 10 from Apulia, and 11 from Dalmatia. Abdomens were removed from the specimens, soaked in KOH solution (10%), heated to boiling for a few seconds to dissolve soft tissues, washed in distilled water, and transferred to glycerin for further dissection and standard microscopy. Digital micrographs were taken using a LEICA S9i stereomicroscope with integrated HD camera (LEICA Inc., Wetzlar, Germany). Morphological characters associated with the species were identified following previously published keys 13,14. As for macroscopic features, the colour patterns of the face, vertex, pronotum, scutellum and forewings were noted. Differences in body size were estimated between populations, and between individuals showing diffuse red chromatism, by comparing the mean head capsule width, hind tibia length and the distance between the vertex and the tip of the scutellum. Measurements were taken using the LAS X Life Science Microscope software (LEICA Inc., Wetzlar, Germany). Photographs and drawings were edited with GIMP 2.10.12 software (GNU General Public License). Before morphological features were compared between populations, the assumptions of normality and homoscedasticity were tested 27,28. Where these assumptions were met, a parametric one-way ANOVA was performed; otherwise, a non-parametric Kruskal–Wallis test 29 was used. Data were analysed and visualised using GraphPad Prism software (GraphPad Software, Inc., La Jolla, CA, USA).

Metataxonomics

We characterised the whole microbiota of 14 Arboridia individuals for which COI was available, collected from three European locations (five each from Apulia and Dalmatia; four from Crete). On the Animal, Environmental and Antique DNA Platform at the Fondazione E. Mach, the 16S rRNA gene V3–V4 region was amplified from the whole body in 25 μl reactions containing 1X KAPA HiFi HS ReadyMix Buffer (Roche) and the primers 341F_ILL (5'-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG-3') and 805R-2_ILL (5'-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACNVGGGTWTCTAATCC-3') 30,31, anchored with the Illumina forward and reverse overhang adapters (https://support.illumina.com/documents/documentation/), at a final concentration of 0.3 μM each, with 100 ng of DNA (50 ng/μl). PCR amplification controls (reactions with no DNA template) were included in each amplification process. The PCR conditions were 3 min at 95 °C, 35 × (30 s at 95 °C, 30 s at 55 °C, 90 s at 72 °C), and 7 min at 72 °C, using a Veriti™ 96-Well Fast Thermal Cycler (Applied Biosystems, USA). Quality checks for amplification success and efficiency were performed by capillary electrophoresis using the QIAxcel Advanced System (QIAGEN). Bacterial amplicons were sequenced using Illumina MiSeq 2 × 300 bp with a minimum depth of 100,000 reads per sample, performed on the Sequencing and Genotyping Platform, Fondazione E. Mach.
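The read processing described next begins with primer removal; a minimal command sketch is shown here (sample file names are placeholders, and the primer strings are the biological parts of the 341F/805R-2 primers given above, without the Illumina overhangs):

```python
import subprocess

# Strip the 16S V3-V4 primers from paired-end reads before denoising.
fwd = "CCTACGGGNGGCWGCAG"        # 341F, biological part
rev = "GACTACNVGGGTWTCTAATCC"    # 805R-2, biological part
subprocess.run(
    ["cutadapt", "-g", fwd, "-G", rev,
     "--discard-untrimmed",
     "-o", "trimmed_R1.fastq.gz", "-p", "trimmed_R2.fastq.gz",
     "sample_R1.fastq.gz", "sample_R2.fastq.gz"],
    check=True,
)
```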
CutAdapt 32 was employed to remove adapters from the 16S V3–V4 reads. Subsequent analytical steps were performed in R version 4.1.2 33. The DADA2 package 34 was used to filter the reads by quality, remove errors, merge the forward and reverse reads, remove chimaeras, and assign taxonomy to the resulting ASVs using Silva v138 as the reference database 35,36. Decontam 37 was used to remove contaminant sequences defined by the negative controls. The phyloseq package 38 was used to compute abundance and richness plots and statistics.

Origin of the Apulian invasion

The COI phylogenetic tree (Fig. 1) suggested three clusters with high node support within Clade A: the 'blue' group, composed of two Apulian samples (4 and 5) and two Turkish samples (4 and 5); the 'green' group, composed of three Cretan samples (3, 4 and 5) and four Dalmatian samples (1, 2, 3 and 5); and the 'orange' group, composed of three Apulian samples (1, 2 and 3), two Cretan samples (1 and 2) and one Dalmatian sample (4). The presence of individuals from different geographical locations in all three clusters indicates a complex geographic structure; in fact, the origin of the Apulian invasion cannot be identified from the COI sequences generated here. However, this result is compatible with a fragmented history of geographic isolation, likely due to a recent spread of this species attributable to human activities, such as intensive Mediterranean trade related to viticulture. Wine- and viticulture-associated products have been traded across the Mediterranean basin as far back as 7000 B.C. 39; therefore, many sporadic gene flow events among the different leafhopper populations analysed might have occurred during the last 9000 years, leading to the complex pattern of genetic differentiation noted here.

The Apulian specimens belonged to two different groups in Clade A ('blue' and 'orange'; Fig. 1), suggesting several possible evolutionary scenarios. For example, invasive Arboridia may be highly variable, or there may have been more than one invasion event. Because A. adanae 4 and 5 from Turkey were genetically related to Clade A, one invasion route may have been directly from Turkey. However, we cannot pinpoint the exact origin of additional invasions, since Apulian specimens 1, 2, and 3 were related to both Cretan and Dalmatian individuals.

Molecular and morphological evidence of a unique Arboridia species in the Balkans and Turkey

As shown in the COI phylogenetic tree, two specimens, both originally assigned to A. adanae from Turkey (A. adanae 4 and 5), clustered with 100% support within Arboridia, but were in a separate cluster from the rest of Europe (Fig. 1). In particular, these two samples were closely related to two of the individuals sampled from Italy (Arboridia Apulia 4 and Arboridia Apulia 5), with ML support of 73/100 and a posterior probability of 0.93. All specimens collected in Crete formed a cluster alongside Dalmatian A. dalmatina and Apulian Arboridia, indicating that the three populations and Turkish individuals 4 and 5 belong to the same clade (Clade A, Fig. 1).

These findings raised doubts regarding the exact taxonomic status of A. adanae and A. dalmatina, doubts that were further substantiated by the barcoding gap analysis (Fig. 2). Whereas the distances among Clade A specimens fell within the distribution of intraspecific distances (Fig. 2, red circle), the distance between Clade A specimens and the rest of the Turkish Arboridia (hereafter called Clade B, Fig. 1) fell inside the barcoding gap (Fig.
2, blue cross). These results indicate that we cannot taxonomically separate A. dalmatina from A. adanae on the basis of COI sequences.

The results of the morphological observations were also consistent with the phylogenetic analysis. The body length of individuals (shown in Figs. 3, 4, 5) varied between 2.6 and 3.1 mm for females (average 2.72 ± 0.12 mm; n = 20) and between 2.4 and 2.9 mm for males (2.87 ± 0.12 mm; n = 21). Dorsally, adults were light to dark yellow, with orange streaks running along the forewings and dark brown tergites. Three prominent dark spots were present on the scutellum and another two on the vertex (Fig. 3). Smaller dark marks were present at the front of the pronotum. Ventrally, the legs were light yellow and the sternites dark brown. Two brown stripes ran in parallel on each side of the postclipeus (Fig. 4). In some, but not all, of the Turkish specimens, a bright-red enlarged streak ran from the vertex to the anteclipeus, flanked by two whitish spots (Figs. 3 and 4). None of the morphological features measured differed significantly between populations (width of the cephalic capsule: one-way ANOVA, F = 1.06, p = 0.35; distance vertex–scutellum: one-way ANOVA, F = 2.45, p = 0.10; tibia length: Kruskal–Wallis test, H = 2.89, p = 0.24; Fig. 5). Similarly, the male genitalia from all populations shared the same description: genital styles apically widened, with a short ventral spur and a long, curved and pointed dorsal process (Fig. 6; n. 1). Aedeagus articulated to the connective. In lateral view (Fig. 6; n. 2), the aedeagal shaft was markedly curved and apically tapered, with a long distal lobe at the apex. In ventral view (Fig. 6; n. 3), the aedeagus was symmetrical and straight, with distinct V-shaped processes placed basally, well separated from the shaft and shorter than the shaft. Connective fused with the aedeagus, with a short stem ending in an enlarged anterior lobe. Pygofer dorsal appendages (Fig. 6; n. 4) simple, movably articulated, and slightly curved in ventral view. Subgenital plates (Fig. 6; n. 5), in lateral view, slightly exceeded the pygofer, distally sclerotised and twisted, laterally bearing four to six macrosetae and numerous irregularly arranged short setae on the ventrolateral part of each plate. When comparing the genitalia of males from the three sampling locations, no relevant morphological differences were observed at 50X magnification (Fig. 3C).

The reduced interspecific phylogenetic distance, combined with the lack of significant morphological differences between Turkish and European Arboridia spp., means we cannot exclude either that the two species form a species complex with the possibility of interbreeding, or that they are actually subspecies of the same species, as was suggested by Dlabola (1963) 40. Dlabola morphologically analysed both Turkish and Dalmatian specimens and considered the Dalmatian specimens a subspecies of A. adanae, which he named A. adanae vitisuga 40. In addition, the descriptions of the genitalia published by Dlabola (1963) 40 and Novak and Wagner (1962) 41 are not distinct. Therefore, it is not clear why Dworakowska (1970) 42 declared A. adanae vitisuga a junior synonym of A.
dalmatina. As for the body colour patterns, the red chromatism on the fore body (especially the vertex and frons) of some Turkish specimens was not associated with distinct genetic or morphological traits; thus, it is likely that this variability is associated with insect phenology, a phenomenon that is well known in the tribe Erythroneurini. For example, the grapevine leafhopper Zygina rhamni Ferrari, 1882, widespread in the southern Mediterranean, is characterised by a variable pattern of large red markings and streaks found only on overwintering individuals, not on those belonging to summer generations 43. Such seasonal variability may also characterise Arboridia, although a phenological investigation would be required to clarify this aspect.

The recent origin and divergence of the A. dalmatina–adanae complex

To calibrate the phylogeny, we utilised sequences from highly divergent taxa, specifically Mileewa spp. and Dikrella cruentata, and employed DAMBE to assess the presence of substitution saturation; this analysis revealed a negligible to minimal saturation level (P-invariant = 0.219, Iss = 0.565, Iss.c = 0.717, p-value = 0.001). According to the divergence times estimated by a molecular clock analysis of the COI gene (Fig. 7), the radiation of the A. adanae–dalmatina clade occurred 2.94 million years ago (95% highest posterior density between 1.11 and 6.59 million years ago), straddling the Pliocene and Pleistocene. Subsequently, Clade A diverged from Clade B and initiated its current radiation about 1.09 million years ago (95% highest posterior density between 386,000 years ago and 2.47 million years ago), in the mid-Pleistocene, a period characterised by alternating glaciation and warming events 44,45. During the late Pliocene, temperatures decreased, leading to the glaciation events of the Pleistocene 46. At that time, Turkey, southern Italy, Dalmatia, and the southern Balkans were glacial refugia 47,48: areas where species could survive the more northerly glaciations and then recolonise the surrounding areas following glacial retreat. This suggests that this species might have lived in the eastern Mediterranean area, from the Balkans to Turkey, evolving near the ice limit of this region, in a palaeoecological scenario characterised by climate conditions similar to those of its current distribution. Following the retreat of the glaciers, the populations enlarged their distribution but were not able to reach the Apulian peninsula, with only limited genetic divergence arising between the Turkish and Balkan populations. This scenario is supported by our molecular and morphological data suggesting a species complex, or two subspecies, rather than two different species. Importantly, from a pest management perspective, with increasing temperatures due to climate change and the geographical conformation of the Italian peninsula, which aids both the natural and human-mediated movement of the leafhoppers along its coastlines, the invasion and establishment of eastern Mediterranean Arboridia is increasingly likely, and represents a potential threat to vineyards and, in general, to the temperate ecosystems of the rest of Italy and of other Mediterranean regions.

Microbial profiles and presence of Wolbachia

Regarding the microbiota of the Arboridia studied here, the abundance plot (Fig.
Microbial profiles and presence of Wolbachia
Regarding the microbiota of the Arboridia studied here, the abundance plot (Fig. 8; Table S2) illustrates the bacterial genera with a number of reads greater than 1.5% of total reads. Individuals from the three European locations did not differ significantly in terms of alpha diversity (both Chao1 and Shannon indices) at either the genus or the phylum level, or in terms of beta diversity at the phylum level (Fig. S1). However, we found that beta diversity was significantly different among populations at the genus level (PERMANOVA R2 = 0.28, P = 0.041), although pairwise differences were only significant between the Apulian and Cretan populations (pairwise adonis R2 = 0.47, P = 0.024). These results support the hypothesis that the three European populations are not different species and that there has been relatively recent mixing, especially in Apulia, likely due to trade. It is difficult to assess whether these genera have positive or negative implications for Arboridia biology and/or its management, since pathogenicity and other characteristics of microorganisms are often related to species or strains rather than genera. For example, Pseudomonas fluorescens positively affects plants49, while P. syringae causes diseases in many crops50. However, the only genus common to all specimens is Pseudomonas. Rickettsia and Tsukamurella are present only in Dalmatian specimen 4, while the other abundant genera, except for Wolbachia, are widespread among all specimens. We were able to identify 270 species of bacteria present in our samples, among which were three notable plant pathogens: Clavibacter michiganensis in Apulian specimen 5, Curtobacterium flaccumfaciens in Cretan specimen 1, and Xanthomonas citri in Apulian specimen 5 and Cretan specimen 2. Clavibacter michiganensis (gram-positive) is known for its pathogenic activity in alfalfa, maize and wheat, and for its ability to cause bacterial wilt and canker in tomato51, while Curtobacterium flaccumfaciens (gram-positive) causes bacterial wilt or tan spot of edible dry beans52. Xanthomonas citri (gram-negative) causes citrus canker in all commercial citrus varieties53. These bacteria have not previously been associated with grapevine diseases, and their primary mode of transmission is through wound infections; therefore, Arboridia should pose no greater threat to vineyards than other organisms. Indeed, thus far, the primary causes of the above disease outbreaks have been attributed to infected seeds, transplantation, or the use of contaminated tools51-53.
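The alpha-diversity indices used above have simple closed forms: Shannon H' = -Σ p_i ln p_i and Chao1 = S_obs + F1²/(2·F2). A minimal sketch, with read counts invented for illustration:

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum p_i * ln(p_i) over non-zero taxa."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2); F1 singletons, F2 doubletons.
    Falls back to the bias-corrected form when there are no doubletons."""
    c = np.asarray(counts)
    s_obs = int((c > 0).sum())
    f1, f2 = int((c == 1).sum()), int((c == 2).sum())
    return s_obs + (f1 * (f1 - 1) / 2.0 if f2 == 0 else f1 ** 2 / (2.0 * f2))

# Invented read counts per bacterial genus for one specimen.
reads = [1200, 640, 310, 55, 12, 3, 1, 1, 0, 0]
print(f"Shannon H' = {shannon(reads):.2f}, Chao1 = {chao1(reads):.1f}")
```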
From a management standpoint, the significance of Wolbachia in A. adanae should be addressed further. Wolbachia is a genus of obligate intracellular bacteria found in over 65% of insect species54 and plays various critical roles as a symbiont within its hosts55. Nevertheless, its most notable attribute is its capacity to proliferate within specific insect populations, ultimately instigating reproductive changes that facilitate its own transmission. These maternally inherited bacteria can induce different reproductive phenotypes through cytoplasmic incompatibility and other processes55-59, and have been intensely studied for their potential as a pest control strategy. Wolbachia was found at remarkably high abundance in three Apulian specimens (1, 2, and 3; > 70%) as well as in specimen 4 at a relatively low abundance (< 5%), but it was not found in any of the individuals sampled outside Italy. Despite the relatively low number of samples processed here, the presence of Wolbachia only in the Apulian samples is puzzling. One possibility is that the invasive Italian individuals originate from infected populations that were not sampled for this study. Since the Apulian sampling location harbours specimens originating from at least two different invasions, at least one of these must have been infected by Wolbachia. An alternative hypothesis is that Wolbachia might have been recently transferred horizontally to Arboridia through parasitoids54-56 in Italy, after the leafhopper invasion. For example, it is known that parasitoids from the Mymaridae and Trichogrammatidae can parasitise various Arboridia species, and therefore horizontal transmission of Wolbachia might occur63-65.

Conclusion and future perspectives
In this article we characterised an invasive Mediterranean Arboridia species by combining results from COI phylogenetics and divergence estimates with morphological and microbiota studies. Our complementary set of results has allowed a first general evaluation of the evolutionary biology of this insect pest, showing the value of a multidisciplinary approach in invasive species research. We found the first molecular evidence that Turkey and the Balkans may host the same species of Arboridia. Through analyses of COI sequences, we observed unexpected phylogenetic relationships between individuals previously identified as A. adanae and A. dalmatina; in addition, there were no clear morphological differences between individuals from different regions, in either body size or male genitalia. Therefore, we propose to merge A. adanae and A. dalmatina into a single species, A. adanae Dlabola, 1957, with A. dalmatina Wagner, 1962 as its junior synonym. Phylogenetic analysis also showed that the three genetic clusters of Arboridia living on grapevines in the Mediterranean basin were very closely related and geographically heterogeneous. Although this makes it difficult to assess the origin of the Apulian invasion, it suggests that the introduction of this species was a relatively recent event, possibly attributable to human activities. Indeed, the Apulian organic vineyard where we sampled is located near commercial harbours connected to the Balkans, Greece, and Turkey (approximately 12 km away). Overall, because Clade B is composed entirely of Turkish samples, but Turkish samples are also present in Clade A, our phylogenies indicate that the Dalmatian and Greek populations originated from Turkish populations as well. Our clock
Figure 1. Phylogenetic tree of the COI gene. Sequences from the Apulian samples are in red. The two numbers at the nodes are the bootstrap support under a ML framework and the posterior probability in a Bayesian framework, respectively. Supports are reported only for nodes that are well supported in both frameworks. Clade B (grey) is composed only of Turkish Arboridia; Clade A (light grey) is composed of Apulian, Cretan, Dalmatian and Turkish specimens. The three coloured squares define three genetic clusters in Clade A.
Figure 2. Barcoding gap of the COI gene using Erythroneurini sequences. For Clade A, composed of Cretan, Dalmatian and Apulian individuals and Turkish specimens 4 and 5, the pairwise distance falls inside the set of intraspecific distances (red circle). In contrast, the pairwise distance of A. adanae samples 4 and 5 from Turkey and the rest of the Turkish A. adanae (in Clade B) falls outside the intraspecific barcoding distance and inside the barcoding gap (blue cross).
Figure 3. Dorsal view of the forebody (head and thorax) of Arboridia specimens collected from Turkey, Apulia and Dalmatia. In the top line, Turkish individuals show a clear colour polymorphism.
Figure 4. Ventral view of the forebody (head and thorax) of Arboridia specimens collected from Turkey, Apulia and Dalmatia. In the top line, Turkish individuals show a clear red chromatism on the frons and postclypeus.
Figure 5. Violin plots of the morphological features used here to compare the size of Arboridia specimens from the three sampling locations (10 Apulian specimens in orange, 11 Dalmatian specimens in green, 21 Turkish specimens differentiated in red with the red pigment and in blue without the red pigment). (a) Width of the cephalic capsule. (b) Length of the tibia. (c) Distance between vertex and scutellum. No comparisons were statistically significant.
Figure 6. Male genitalia of the examined Arboridia did not show evident morphological differences. 1. style; 2. aedeagus, lateral view; 3. aedeagus, ventral view; 4. pygofer dorsal appendages; 5. subgenital plate. (A) General view of the male genitalia. (B) Schematic drawing of the different parts composing the male genitalia. (C) Micrographs of the different parts of the male genitalia showing a comparison between the examined populations.
Figure 7. Molecular divergence of the Arboridia COI gene. The estimated marginal density function is shown on each node. The clock was calibrated at nodes a (minimum 44.4 MYA) and b (17.5-90 MYA). Values on the x-axis are in MYA. Along the x-axis (periods and epochs): P = Pleistocene, Pli = Pliocene, Q = Quaternary.
2024-01-28T06:17:26.846Z
2024-01-26T00:00:00.000
{ "year": 2024, "sha1": "fd231163558d910d9fec6062a0268b8b5edbb687", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "59e5ac32cd7fa19c1d262e3c395d1aab07646b12", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16842679
pes2o/s2orc
v3-fos-license
Physico-chemical and biological characterization of anopheline mosquito larval habitats (Diptera: Culicidae): implications for malaria control
Background: A fundamental understanding of the spatial distribution and ecology of mosquito larvae is essential for effective vector control intervention strategies. In this study, data-driven decision tree models, generalized linear models and ordination analysis were used to identify the most important biotic and abiotic factors that affect the occurrence and abundance of mosquito larvae in Southwest Ethiopia.
Methods: In total, 220 samples were taken at 180 sampling locations during the years 2010 and 2012. Sampling sites were characterized based on physical, chemical and biological attributes. The predictive performance of the decision tree models was evaluated based on correctly classified instances (CCI), Cohen's kappa statistic (κ) and the determination coefficient (R2). A conditional analysis was performed on the regression tree models to test the relation between key environmental and biological parameters and the abundance of mosquito larvae.
Results: The decision tree model developed for anopheline larvae showed a good model performance (CCI = 84 ± 2% and κ = 0.66 ± 0.04), indicating that the genus has clear habitat requirements. Anopheline mosquito larvae showed a widespread distribution and especially occurred in small human-made aquatic habitats. Water temperature, canopy cover, emergent vegetation cover, and the presence of predators and competitors were found to be the main variables determining the abundance and distribution of anopheline larvae. In contrast, anopheline mosquito larvae were found to be less prominently present in permanent larval habitats. This could be attributed to the high abundance and diversity of natural predators and competitors suppressing the mosquito population densities.
Conclusions: The findings of this study suggest that targeting smaller human-made aquatic habitats could result in effective larval control of anopheline mosquitoes in the study area. Controlling the occurrence of mosquito larvae via drainage of permanent wetlands may not be a good management strategy, as it negatively affects the occurrence and abundance of mosquito predators and competitors and promotes an increase in anopheline population densities.

Background
Mosquitoes are not only a nuisance, but are also responsible for the spread of a wide range of diseases including malaria, yellow fever, dengue, West Nile virus and Rift Valley fever [1-3]. These mosquito-borne diseases, infecting more than 700 million people around the world each year, result in as many as two million deaths annually [4]. One of these diseases, malaria, is transmitted between humans by adult female mosquitoes of the genus Anopheles. Malaria is endemic in tropical and subtropical regions, where it causes over 300 million acute illnesses and at least one million deaths each year [5]. In spite of the recent scale-up of control programs, malaria continues to be a major public health problem in most tropical countries, and its control is becoming increasingly difficult due to the spread of resistance of the parasite to anti-malarial drugs, resistance of the vector to insecticides, and land-use changes [6,7]. Land-use and land-cover changes, such as deforestation, agricultural expansion, infrastructure development, urbanization and human population growth, contribute to the proliferation of breeding sites of mosquitoes [5,8].
These environmental or land-use modifications also affect climate processes [9] that are likely to support rapid development of mosquitoes and parasites in regions where there has previously been a low-temperature restriction on transmission. Current episodes of climate variability in Africa are likely to intensify the transmission of malaria in the eastern and southern highlands [10,11]. Moreover, dams and small irrigation projects also contribute to an increase in the mosquito population by increasing the number of suitable larval habitats, prolonging the breeding season and allowing the expansion of their distribution range. Small dams built for irrigation and mega hydropower dams have been shown to favour malaria transmission in Ethiopia due to habitat creation [12,13]. Several studies have examined the relationship between habitat characteristics and mosquito larval abundance and distribution in Africa [14-18]. Anopheles arabiensis, the principal malaria vector in Sub-Saharan Africa, prefers shallow clean water and sunlit temporary habitats such as sand pools, brick pits and rain pools [15,16]. The presence of An. arabiensis immature stages in aquatic habitats is mainly influenced by water temperature, emergent plant cover, water current, turbidity, canopy cover, substrate type, and the presence of predators and competitors [15-17]. Shililu et al. [15] indicated that in low- and highlands in Eritrea, water temperature was positively correlated with larval density. Higher temperatures encourage better development of eggs and allow the development of more microorganisms that are used as food by the larvae [14]. On the other hand, high emergent plant cover of aquatic habitats is likely to reduce mosquito larvae by obstructing gravid females from ovipositing and by supporting a high diversity of predators [17]. The occurrence of predators and competitors is also a key determinant for the presence of An. arabiensis larvae. Muturi et al. [17] indicated that gravid females of An. arabiensis would avoid ovipositing in habitats where members of the family Heptageniidae are present, presumably to avoid direct competition. Furthermore, An. arabiensis is virtually absent, or present at low abundance, in habitats with predators such as fish (Tilapia, Oreochromis sp.), dragonfly larvae, water bugs and water beetles [19]. Malaria vector control has been largely dependent on the use of chemical insecticides. Only 12 insecticides, belonging to four insecticide classes, are recommended for public health use, either for indoor residual spraying or to treat mosquito nets [20]. Unfortunately, resistance to insecticides has been reported in many malaria vector species. Resistance spreads rapidly, which constitutes a serious threat to malaria control initiatives [20]. In Ethiopia, populations of An. arabiensis, the major malaria vector in the country, have developed resistance to three (organochlorines, organophosphates and pyrethroids) out of the four insecticide families commonly used for public health purposes [21,22]. Therefore, alternative malaria vector control tools, targeting mosquito immatures either alone or as part of integrated vector management, should be envisaged to reduce human-vector contact and hence malaria transmission intensity. Adult mosquitoes are difficult to control since they can fly relatively long distances and survive in a wide range of microhabitats, including the soil and holes in rocks and trees [23].
Effective mosquito larval control can be achieved through larval habitat management [14,24]. Larval control through environmental management has gained a lot of attention during the last decades [25,26]. Environmental management involves changes to potential mosquito breeding areas to prevent, eliminate or reduce the vector's habitat [26]. Techniques include draining man-made and natural wetlands, land levelling, filling small ponds or water-collecting depressions, and changing the banks of water impoundments [25]. However, draining natural water bodies such as wetlands may affect the composition and structure of mosquito predators, and species diversity in general, more than it reduces mosquito breeding sites [27]. Even after a wetland has been drained, it may often still hold enough water after a rain event to serve as a breeding site for mosquitoes [28]. In addition, drainage of wetlands often reduces important regulating ecosystem services such as mitigating floods, recharging aquifers, micro-climate stabilization and improving water quality [29]. So, draining wetlands does not seem to be a good strategy to reduce the habitat of mosquito vectors. In order to include mosquito larval habitat management as part of an integrated vector management program, detailed knowledge of the ecology of the aquatic immature stages is crucial [30]. To this end, habitat suitability modelling has been increasingly used to determine the presence of malaria vectors and to estimate their population levels. Such information is the basis for risk assessment of mosquito-borne diseases [31,32]. Habitat suitability models take into consideration the occurrence and/or abundance of species in relation to biotic and abiotic environmental factors, evaluating habitat quality or predicting the effect of environmental changes within the habitat on species occurrence [33]. However, species-habitat relationships are influenced by regional conditions and hence the generality of these models needs to be tested [34]. Therefore, we developed data-driven models using decision trees and generalized linear models in order to assess the relationship between abiotic and biotic environmental factors and the occurrence and abundance of anopheline mosquito larvae in Southwest Ethiopia. This could help decision makers to identify priority habitats to be targeted for the control of anopheline mosquito larvae. We specifically addressed the question of whether permanent marshlands in the neighbourhood of Jimma (the main city in the Gilgel Gibe catchment), which are biodiverse areas under serious threat from land encroachment and which are perceived as mosquito breeding grounds, are indeed a preferred habitat for anopheline mosquito larvae. These marshlands fulfil many ecosystem services, so their destruction would entail important losses, and good, integrated management is therefore required.

Study area
This study was conducted in the Gilgel Gibe I watershed situated in Southwest Ethiopia, lying between latitudes 7°37'N and 7°53'N and longitudes 36°46'E and 37°43'E (Figure 1). The elevation of the study area ranges from 1,650 to 1,800 meters above sea level. The mean annual temperature in the area is between 15°C and 22°C, and the mean annual precipitation is between 1,800 mm and 2,300 mm, with maximum rainfall from June till early September and minimum precipitation between December and January [35]. The study area is characterized by different land use patterns.
The main socio-economic activities of the inhabitants are farming and small stock rearing, with maize (Zea mays) and teff (Eragrostis tef) being the main crops cultivated in the area. The region is, however, also known for its coffee production. The average population density in this area is approximately 100 to 110 people/km2.

Characterization of larval habitats
A total of 220 samples were taken at 180 different sampling locations (larval habitats) between August and October 2010 and September to November 2012. Selection of the surveyed sites was based on previous reports on surface water quality monitoring [36] and the distribution of disease vectors in the region [22]. Sampling sites situated in permanent habitats such as natural wetlands, the reservoir and streams were selected along a gradient of visible disturbance, including point source pollution, land use pattern, hydrological modification and accessibility. Sampling sites situated in temporary breeding habitats were randomly selected from six villages located up to 8 km from the Gilgel Gibe hydroelectric dam and from temporary pools located around permanent habitats. Permanent habitats were sampled at exactly the same location during both years, while the sampling location of temporary habitats changed depending on the availability of water. Temporary habitats are those containing water for a short period of time (approximately two weeks after the end of the rainy season). Semi-permanent habitats are those containing water for 2 to 3 months after the rainy season ends. Permanent habitats are those containing water throughout the year (fed by surface or ground water) and are more stable systems. Surveyed habitats included: natural wetlands (n = 60), breeding habitats around the shore of the dam reservoir (n = 13), natural ponds (n = 10), stream pools (n = 30), farm ditches (n = 25), pits for plastering (n = 40), rain pools (n = 20), vehicle ruts (n = 12) and animal hoof prints (n = 10) (Figure 2). Detailed information on habitat condition, water quality, the presence of anopheline larvae, and mosquito predators and competitors was collected during the survey. Data on the size of the water body (area), substrate type, vegetation cover, canopy cover and land use pattern were collected for each larval habitat. Water depth was measured using a metal ruler at different points of each habitat and the average depth was recorded. Substrate was classified into clay, silt, sand, gravel and artificial substrate (concrete, tire, plastic and mud pot). The emergent, submerged and floating plant cover of a habitat was visually estimated as the percentage cover of these aquatic macrophytes within a 500 metre stretch for large aquatic habitats and over the entire area for smaller habitats. Plant cover was categorized as very low (<10%), low (10-35%), moderate (35-65%), high (65-90%) and very high (>90%) [37]. Canopy cover was defined as the amount of vegetation covering the water surface. Canopy within or surrounding the sampling site was estimated visually based on the percentage of shade [38]. The type of land use adjacent to each sampling site was also recorded and checked against the available GIS data on land use. The map templates including land use types were obtained from the Ethiopian Ministry of Water and Energy.
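The five plant-cover classes above map directly onto percentage bins; a trivial helper (the function name is ours, the bin edges are the ones quoted in the text) makes the scheme explicit:

```python
# Bins a visually estimated percentage cover into the five classes used
# above; the function name is ours, the bin edges are the quoted ones.
def cover_class(percent_cover: float) -> str:
    if percent_cover < 10:
        return "very low"
    if percent_cover < 35:
        return "low"
    if percent_cover < 65:
        return "moderate"
    if percent_cover < 90:
        return "high"
    return "very high"

print(cover_class(42))  # -> "moderate"
```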
Physico-chemical habitat variables, including dissolved oxygen, conductivity, pH and water temperature, were measured using a multi-probe meter (HQ30d Single-Input Multi-Parameter Digital Meter, Hach). A hand-held hygrometer (RH87) was used to measure ambient air temperature and relative humidity. Turbidity was measured using an Aqua-Fluor handheld fluorometer/turbidimeter. Water chemistry analysis was carried out by sampling 2 l of water from each sampling site. The water sample was stored in an icebox and transported to the Laboratory of Environmental Health Science and Technology, Jimma University. The samples were then analysed for total dissolved solids (TDS), alkalinity, hardness, chloride, orthophosphate and nitrate concentrations following standard methods [39]. Geographic coordinate readings were recorded for all sampling sites using a hand-held global positioning system (GPS) unit (Garmin GPS 60, Garmin International Inc., Olathe, Kansas, USA). Coordinate readings were integrated into a GIS database using ArcMap 10 GIS software. All digital data in the GIS were displayed in the World Geodetic System (WGS) 1984 coordinate system.

Mosquito larvae sampling and identification
To collect mosquito larvae, one to ten dip samples were taken from each habitat using a standard 350 ml dipper (Clarke Mosquito Control Products, Roselle, IL), depending on the habitat size. Mosquito larvae were also sampled using 5 ml graduated pipettes from water bodies that were too small for standard dippers. For small habitats such as hoof prints, several hoof prints were pooled to obtain the required sample volume. Quantitative sampling from small habitats may overestimate larval density compared to large habitats, since larvae cannot escape in small habitats where the whole water volume can be sampled [40]. The use of different sampling methods may affect the analysis of abundance data, which could be considered a limitation of the study. Water collected by dippers was emptied into a white enamel sorting tray and mosquito larvae were sorted and identified to genus level as either anopheline or culicine. The presence of mosquito immature stages was defined by the presence of at least one larva or pupa found in any of the ten dips.

Mosquito predator and competitor sampling and identification
A rectangular frame net (30 × 20 cm) with a mesh size of 250 μm was used to sample mosquito predators and competitors at the same sampling sites where mosquito larvae sampling was carried out. Each collection entailed a 10 minute kick-sample with a hand net over a distance of 10 metres in the habitats that were sufficiently large [41]. Time was allotted proportionally to the percentage cover of the different mesohabitats (i.e., bottom, mid-water, surface, and near debris). Small habitats (e.g. farm ditches, road puddles and pits) that could not be sampled by kick-net were sampled using sweep nets. Contents collected in the sweep or kick-net were emptied onto a white sorting tray to enhance visibility and counting of the sampled organisms. Fish and tadpoles were recorded and released at their site of capture. Macroinvertebrates were sorted in the field and kept in vials containing 75% ethanol for later identification and enumeration. Macroinvertebrates were identified to family level in the laboratory using a stereomicroscope (10× magnification) and a standard identification key [42]. Each family was categorized into one of five functional feeding groups (FFG): gatherer-collector, filterer-collector, predator, scraper, and shredder [43]. When multiple possible FFGs were identified for a particular family, the most commonly occurring classification was used.
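A small illustration of the FFG assignment rule just described, resolving families with multiple reported FFGs by majority vote; the family records below are invented examples, not the study's assignments:

```python
from collections import Counter

# Toy illustration of assigning one functional feeding group (FFG) per
# family: when several FFGs have been reported for a family, keep the most
# commonly occurring one, as described above. Records are invented.
reported_ffgs = {
    "Hydropsychidae": ["filterer-collector", "filterer-collector", "predator"],
    "Libellulidae": ["predator"],
    "Baetidae": ["gatherer-collector", "scraper", "gatherer-collector"],
}

ffg = {fam: Counter(groups).most_common(1)[0][0]
       for fam, groups in reported_ffgs.items()}
print(ffg)
```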
All identified macroinvertebrates, their frequency of occurrence in the study area and their FFG are presented in Additional file 1. Filterer-collectors such as tadpoles, black fly larvae (Simuliidae), bivalve molluscs (Sphaeriidae), caddisfly larvae (Hydropsychidae) and culicine larvae were considered competitors of anopheline larvae [44]. Fish and aquatic invertebrates belonging to the orders Hemiptera (water bugs), Coleoptera (water beetles) and Odonata (dragonflies and damselflies) were considered predators [44]. Presence or absence (1/0) of invertebrate predators and competitors was used as independent variables in the classification tree models.

Data analysis
Twenty-five input variables were used to identify the main predictors of mosquito larvae occurrence and abundance (Table 1). We used classification and regression tree (CART) models and ordination analysis to investigate the relationship between anopheline mosquito larvae occurrence and abundance and different explanatory variables. In addition, the occurrence and abundance of anopheline larvae were analysed using logistic and Poisson regression models (Additional files 2 and 3). CART analysis is a form of binary recursive partitioning that can be used to classify observations [45]. It has a number of advantages over traditional generalized linear models. First, it is well suited for the analysis of complex ecological data with high-order interactions [45,46]. Second, it captures nonlinear relationships between explanatory and response variables [46]. Third, it does not rely on the assumptions that are required for parametric statistics, and the analysis is not restricted by multicollinearity in predictor variables [47]. Fourth, missing values are not dropped from the analysis; instead, variables containing information similar to that contained in the primary splitter are used [47]. CART trees are also relatively simple for non-statisticians to interpret [47]. However, CART may produce different models depending on the selection of input variables [48]. Ordination methods are widely used for community analysis [49], and typically assume that the abundance of individual species varies in a linear or unimodal manner along environmental gradients [50].

Classification and regression tree models (CART)
Classification tree (CT) models were used to model the occurrence (presence/absence) of anopheline larvae based on the measured environmental factors. The CT models were built using the J48 algorithm [51], a Java reimplementation of the C4.5 algorithm, which is part of the machine learning package WEKA [52]. Likewise, regression tree (RT) models were used to model the abundance of anopheline larvae [52]. The RT models were built using the M5 algorithm in WEKA [51]. Regression tree models have previously been used successfully in malaria studies [53]. Default parameter settings were used to induce the decision trees. Model training and validation were based on a three-fold cross-validation procedure [51]. The dataset was randomly shuffled into three equal subsets and each subset in turn was used for validation, while the remaining two subsets were used for training. The cross-validation process was then repeated three times, each time with one of the three subsets used as the validation dataset. The predictive performance (based on the percentage of correctly classified instances and Cohen's kappa statistic) of the subsets was averaged to produce a single prediction of the dependent variable.
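J48 and M5 are WEKA-specific algorithms, but the validation procedure just described can be sketched with any CART-style implementation. A rough Python analogue using scikit-learn, with random stand-in data in place of the actual 220 samples and 25 habitat variables:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(42)
X = rng.normal(size=(220, 25))      # stand-in for 25 habitat variables
y = rng.integers(0, 2, size=220)    # stand-in presence/absence of larvae

ccis, kappas = [], []
for train, test in StratifiedKFold(n_splits=3, shuffle=True,
                                   random_state=0).split(X, y):
    tree = DecisionTreeClassifier(random_state=0).fit(X[train], y[train])
    pred = tree.predict(X[test])
    ccis.append(100 * accuracy_score(y[test], pred))   # CCI in %
    kappas.append(cohen_kappa_score(y[test], pred))    # Cohen's kappa

print(f"CCI = {np.mean(ccis):.1f} +/- {np.std(ccis):.1f} %, "
      f"kappa = {np.mean(kappas):.2f} +/- {np.std(kappas):.2f}")
```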
The variation was also assessed based on the difference between the outcomes of the subsets. The mean percentage of correctly classified instances (CCI) [51] and Cohen's kappa statistic (κ) [54] were used to evaluate the predictive performance of the classification tree models. The CCI is the percentage of true positive (TP) and true negative (TN) predictions, whereas Cohen's kappa statistic measures the proportion of all possible cases of presence or absence that are predicted correctly by a model, accounting for chance effects. Models with a CCI higher than or equal to 70% and κ higher than or equal to 0.4 were considered reliable [55]. CCI is affected by the frequency of occurrence of the taxon being modelled [55]. Unlike CCI, κ takes into account a correction for the expected number of correct predictions due to randomness, which is strongly related to taxon prevalence [55]. We used the following ranges of κ, recommended in [55], for model performance evaluation: poor (κ = 0), slight (κ = 0-0.2), fair (κ = 0.2-0.4), moderate (κ = 0.4-0.6), good (κ = 0.6-0.8) and nearly perfect (κ = 0.8-1). We used the determination coefficient (R2) to evaluate the performance of the regression tree models [46]. The closer the value is to one, the better the model performed. A conditional analysis was performed in order to see how different values of a predictor variable influence the abundance of anopheline larvae. For each of the three regression tree submodels developed (based on the three folds), the influence of the predictor variables on the abundance of anopheline larvae was analysed. The regression equations obtained from the submodels were then used to calculate the abundance of anopheline larvae. This was achieved by sweeping each predictor variable from its minimum to its maximum value, while the other variables present in the model were kept constant at their average values. Hence, for each of the three different subsets (folds) a line was plotted showing the relationship between the predictor variable and the abundance of anopheline larvae.

Ordination analysis
To determine whether a linear or unimodal type of response was present along environmental gradients, the dataset was first analysed using a detrended correspondence analysis (DCA) in CANOCO for Windows version 4.5 [56]. Redundancy analysis (RDA) was then used because all environmental gradients were shorter than 2 standard deviation units. In all RDA analyses, the abundances of anopheline larvae, predators and competitors were considered response variables, whereas environmental variables were treated as independent variables. A preliminary analysis was performed to test for multicollinearity in the environmental variables. When two or more variables had a variance inflation factor greater than 5, one of these variables was removed from the analysis. Based on a stepwise forward selection, twelve environmental factors were selected as independent variables. Species and environmental data, except for pH, were log transformed [log(x + 1)] prior to analysis to stabilize the variance. The statistical significance of the eigenvalues and species-environment correlations generated by the RDA was tested using Monte Carlo permutations.
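The conditional analysis described above amounts to sweeping one predictor from its minimum to its maximum while holding the other model variables at their averages. A generic sketch, usable with any fitted regressor; the model object and variable index below are placeholders:

```python
import numpy as np

def conditional_sweep(model, X, var_index, n_points=50):
    """Vary one predictor from min to max, others fixed at their means."""
    grid = np.tile(X.mean(axis=0), (n_points, 1))
    grid[:, var_index] = np.linspace(X[:, var_index].min(),
                                     X[:, var_index].max(), n_points)
    return grid[:, var_index], model.predict(grid)

# Hypothetical usage with any fitted regression model `reg` and a feature
# matrix `X` whose column 0 holds water temperature:
# temps, predicted_abundance = conditional_sweep(reg, X, var_index=0)
```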
Analysis of the abundance of mosquito predators and competitors in different habitat types
We made box-and-whisker plots in STATISTICA 7.0 [57] to visualize the abundance of mosquito predators and competitors in the different habitat types. Abundance data were log transformed [log(x + 1)] prior to analysis. We used a non-parametric Kruskal-Wallis test, at a significance level of 0.05, to determine whether significant differences in the abundance of invertebrate predators and competitors existed between the different habitat types.

Occurrence and distribution of mosquito larvae
A total of 220 samples were collected from 180 sampling sites. Anopheline larvae occurred more frequently in pits dug for plastering, vehicle ruts and farm ditches, and less frequently in natural wetlands and ponds (Table 2). Overall, 1,220 anopheline larvae were found in 151 samples (69% frequency of occurrence). A total of 496 culicine larvae were found in 62 samples (28% frequency of occurrence). The anopheline-positive habitats were mainly located in agricultural and agro-pastoral land use types (Figure 3). Anopheline larvae were sparsely distributed in natural wetlands.

Influence of environmental factors on the occurrence of anopheline mosquito larvae
Based on the three models developed (one model for each fold or subset), the most frequently selected variables were habitat permanency (100%) and the occurrence of predators and competitors (67%). Moreover, habitat permanency was selected as the root of the tree in all models, indicating that this was the most important variable determining the presence/absence of anopheline larvae. The classification tree of subset one (Figure 4a) has five leaves and eight branches. Habitat permanency was selected as the root of the tree. Anopheline larvae were present in both temporary and semi-permanent habitats. In contrast, anopheline larvae were absent in permanent habitats when predators or competitors were present. This classification tree model had a good predictive performance, with a CCI of 86% and κ of 0.63. The classification tree model based on subset two (Figure 4b) has six leaves and ten branches. As in subset one, habitat permanency was selected as the root of this tree. Anopheline larvae were present in both temporary and semi-permanent habitats. In contrast, anopheline larvae were absent in permanent habitats when predators were present and water temperature was less than 20°C. This classification tree model had a good predictive performance, with a CCI of 82.4% and κ of 0.63. The classification tree model based on subset three (Figure 4c) has twelve leaves and nineteen branches. Habitat permanency was again selected as the root of the tree. Anopheline larvae were present in temporary habitats. The occurrence of anopheline larvae in permanent habitats was influenced by several biotic and abiotic factors. This classification tree model had a very good predictive performance, with a CCI of 86.5% and κ of 0.71. The importance of biotic factors such as invertebrate predators and competitors and abiotic factors such as permanency for the occurrence of anopheline larvae was also indicated by generalized linear models (GLMs) (see Additional file 2).

Influence of environmental factors on the abundance of anopheline mosquito larvae
The regression tree model based on subset one, predicting the abundance of anopheline larvae, has a determination coefficient of 0.44. If the abundance of predators was less than or equal to 12 individuals per sample, LM1 was applied; if the abundance was higher than 12 individuals, LM2 was used (Figure 5a).
According to LM1, the abundance of anopheline larvae increased with increasing water temperature, total dissolved solids and nitrate concentration, and decreased with increasing predator abundance and dissolved oxygen concentration. According to LM2, the abundance of anopheline larvae increased with increasing water temperature, alkalinity and nitrate, and decreased with increasing abundance of predators. The regression tree model based on subset two has three leaves and a determination coefficient of 0.44 (Figure 5b). If the abundance of predators was lower than 2 individuals per sample and water temperature was lower than 28°C, LM1 was applied. If the temperature was higher than 28°C, LM2 was applied, whereas if the abundance of predators was higher than 2, LM3 was used (Figure 5b). The regression tree model indicated that the abundance of anopheline larvae increased with increasing water temperature and decreased with increasing predator abundance. The regression tree model based on subset three has three leaves and a determination coefficient of 0.42 (Figure 5c). If water temperature was lower than or equal to 27°C, the linear model LM1 was applied. If the temperature was between 27°C and 29°C, LM2 was applied, whereas when the temperature was higher than 29°C, LM3 was applied. According to the model, the abundance of anopheline larvae increased with increasing water temperature, total dissolved solids and turbidity, and decreased with increasing predator and competitor abundance. A conditional analysis of the regression tree models (all three subsets) showing the effect of water temperature on the abundance of anopheline larvae is shown in Figure 6a. A slight increase in anopheline larval abundance was noted at temperatures between 17°C and 28°C, whereas an abrupt increase was observed between 28°C and 34°C. On the other hand, the abundance of anopheline larvae declined with increasing abundance of macroinvertebrate predators (Figure 6b). The importance of water temperature for the abundance of anopheline larvae was also indicated by GLMs (see Additional file 3). The detrended correspondence analysis (DCA) gave a length of gradient smaller than 2 standard deviation units, implying that anopheline larvae exhibit a linear response to environmental gradients [58]. The association between anopheline larvae and the selected environmental variables was found to be significant (p < 0.05) for both the first axis and all canonical axes together (Figure 7). The first two axes of the RDA biplot of anopheline larvae and environmental variables explained 33% of the variance in the anopheline data and 94% of the variance in the anopheline larvae-environment relation. The eigenvalues of the first two axes were 0.27 and 0.06, respectively. In this ordination, the anopheline larvae-environment correlations for the first two axes were 0.77 and 0.67, respectively. The first axis of the RDA ordination revealed a gradient primarily associated with habitat permanency. This axis was negatively correlated with the occurrence of anopheline larvae (r = -0.8, p < 0.05). The second canonical axis described a gradient of emergent plants, mosquito predators and TDS.
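An M5 model tree is simply a decision tree with a linear model in each leaf. The subset-two model above can be written out explicitly as follows; the split thresholds are the reported ones, but the LM coefficients are hypothetical placeholders, since the text gives only their signs:

```python
def predict_anopheline_abundance(predators: float, water_temp: float) -> float:
    # Split structure of the subset-two regression tree (Figure 5b);
    # coefficients are hypothetical, only their signs follow the text
    # (abundance rises with temperature, falls with predator abundance).
    if predators < 2:
        if water_temp < 28:
            return 1.5 + 0.4 * water_temp - 0.8 * predators   # LM1
        return 3.0 + 0.9 * water_temp - 0.8 * predators       # LM2
    return 0.5 + 0.2 * water_temp - 0.3 * predators           # LM3

print(predict_anopheline_abundance(predators=1, water_temp=30))
```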
Relationship between the abundance of mosquito predators and competitors and habitat types
Box-and-whisker plots indicated that there was a statistically significant difference in the abundance of invertebrate predators (χ2 = 93.2, df = 2, p < 0.05) and competitors (χ2 = 15.9, df = 2, p < 0.05) among the different habitat types (Figure 8). Permanent habitats support a significantly higher abundance of macroinvertebrate predators and competitors than temporary habitats (p < 0.05).

Discussion
A fundamental understanding of the ecology of anopheline mosquito larvae is important in order to plan and implement effective malaria vector control intervention strategies [19]. In the present study, habitat permanency, canopy cover, emergent plant cover, and the occurrence and abundance of predators and competitors were found to be the main variables determining the abundance and distribution of anopheline larvae in aquatic habitats. Temporary water bodies such as farm ditches, rain pools, open pits for plastering and clay mining, vehicle ruts and hoof prints were the most preferred habitats (in terms of occurrence and abundance) for anopheline larvae. These habitats were either man-made or associated with anthropogenic activities. It should be noted that although many of these habitats, and especially hoof prints, are very small, they are very abundant in the landscape. The increasing human population in the catchment has resulted in increased anthropogenic activities, including deforestation, agricultural expansion, livestock rearing and brick making, which can create suitable habitats for mosquito larvae [6,59]. Clearing and drainage, often for agricultural expansion, create favorable habitats for mosquitoes, thereby increasing malaria transmission [58,60]. In addition, agriculture can cause increased sedimentation due to erosion, which can slow or block streams and decrease the water depth, creating shallow waters ideal for mosquito breeding [59]. Earth excavation for brick making, pot making and pits dug for wall plastering provides a large number of mosquito larval habitats. In this study area, brick making activities were carried out in natural wetlands, where clay soil was used for brick making. In addition to creating mosquito breeding habitats, brick making is also considered an important cause of deforestation, as it uses a huge amount of firewood from wetland riparian forests. Deforestation may in turn alter the local microclimate and biodiversity [61], which in turn influences the distribution of malaria vectors. Anopheline larvae were more abundant in small temporary habitats exposed to sunlight with low emergent plant and canopy cover. Emergent plants and/or canopy cover reduce the amount of sunlight reaching the aquatic habitat, thereby reducing water temperature [17]. Low water temperature causes a decline in the microbial growth upon which mosquito larvae feed [17]. Smaller water bodies are generally characterized by higher water temperatures, which lead to shorter larval development times [62]. In this study, anopheline larvae occurred less frequently and were found at lower abundance in permanent habitats such as ponds, stream margins and natural wetlands. These habitats are home to a wide diversity of vertebrate and invertebrate predators and competitors, whose presence likely suppresses the density of mosquito larvae [63].
Several studies have pointed out that aquatic insects belonging to the orders Coleoptera, Odonata and Hemiptera are responsible for significant reductions in mosquito populations and could be considered in integrated vector management programs [1]. Predators reduce the abundance of mosquito larvae directly via predation and avoidance of oviposition, or indirectly via competition for food resources [64]. Some predators, especially those with chewing mouthparts such as Odonata, eat their prey, while others, such as many beetle larvae and Hemiptera, suck the body fluid (hemolymph) of the prey [1]. Some species of mosquito larvae reduce the chance of predator detection by reducing their activity [65,66]. However, this has the disadvantage of reducing feeding efficiency, which in turn prolongs larval development and is also likely to result in smaller adults with probably reduced longevity and fecundity [65]. Previous studies have reported that the occurrence and abundance of mosquito larvae are reduced in response to predator cues [67]. For example, backswimmers (Notonectidae) release predator cues (kairomones) that can repel ovipositing female mosquitoes for over a week [1]. Predator cues not only affect mosquito oviposition, but also cause a decrease in mosquito survival, delayed immature development and a reduction in the body size of emerged mosquitoes [1,67]. The abundance of anopheline larvae can also be limited by the presence of competitors in permanent habitats (e.g. natural wetlands). Molluscs and anurans are the most common competitors, which feed on the same type of food as mosquito larvae. Several studies have shown that competitors decrease mosquito longevity and increase the developmental time of mosquito larvae [1]. In this study, box-and-whisker plots showed that permanent habitats support a significantly higher abundance of macroinvertebrate predators and competitors than semi-permanent and temporary habitats (Figure 8). The conditional analysis and ordination diagram demonstrated that the abundance of anopheline larvae was negatively related to invertebrate predators. The decision tree models, redundancy analysis (RDA) and the GLMs indicated that both biotic and abiotic environmental factors influence the abundance and distribution of anopheline larvae. Our results indicate that the preferred (in terms of occurrence and abundance) anopheline breeding sites were temporary habitats, most notably pits for plastering and clay mining, agricultural trenches, vehicle ruts and small natural sunlit temporary breeding habitats such as animal hoof prints and rain pools. The overall suitability of these temporary habitats was mainly influenced by water temperature, vegetation cover, and the presence of predators and competitors. Permanent habitats such as the natural wetlands in the vicinity of Jimma town were less suitable as breeding sites for anopheline larvae (Figure 3). This may be due to the high abundance and diversity of non-mosquito invertebrates and fish in these habitats (Figure 8), which could suppress the mosquito population through predation and competition. This suggests that conservation of permanent habitats such as natural wetlands could be one strategy in an integrated malaria control program. The use of predaceous insects to control mosquito larvae is not only ecologically friendly but also a means by which more effective and sustainable control can be achieved [1].
However, detailed knowledge of the interaction between mosquito larvae and their predators is crucial for implementing successful vector control interventions. Conversely, environmental modifications (e.g. drainage) of permanent habitats such as natural wetlands for malaria control could reduce the natural predator and competitor population densities, and would thus be counter-productive, enhancing the occurrence and abundance of mosquito larvae. The findings of this study suggest that malaria vector control intervention strategies in the study area should target (man-made) temporary water bodies. In view of the presence of insecticide-resistant anopheline mosquito populations in the study area, targeting these temporary water bodies for anopheline mosquito larval control should be considered as an alternative to reduce vector density, and hence the prevalence and/or incidence of malaria at a local scale. The use of microbial insecticides such as Bacillus thuringiensis can be more environmentally friendly in natural systems [68], whereas the use of chemical insecticides in natural systems may have deleterious effects on non-target organisms such as predators and competitors. The relationships found in this study between anopheline larvae and biotic and abiotic variables are mainly valid for Anopheles arabiensis, the most common species found in the region [22]. The main limitation of the present study is that the results may be applicable to areas where the same or similar species predominate, but not to other areas with different species. Therefore, it would be interesting to further investigate whether these relationships can be generalized to other regions and different species.
2016-05-14T03:14:47.493Z
2013-11-04T00:00:00.000
{ "year": 2013, "sha1": "efd976981f051f2091801ca978a3b04130095c2e", "oa_license": "CCBY", "oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/1756-3305-6-320", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db8c88c91e1571d64e5c79bc195770f9c9cc5199", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
18243206
pes2o/s2orc
v3-fos-license
Patient comfort during flexible and rigid cystourethroscopy
Introduction: Cystourethroscopy (CS) is an endoscopic method used to visualize the urethra and the bladder.
Aim: In this study, we prospectively evaluated pain in men undergoing cyclic cystoscopic assessment with rigid and flexible instruments after transurethral resection of bladder tumor (TURB).
Material and methods: One hundred and twenty male patients who were under surveillance after a TURB procedure due to urothelial cell carcinoma and who had undergone at least one rigid cystourethroscopy in the past were enrolled in the trial. Patients were prospectively randomized to age-matched groups for flexible (group F) or rigid (group R) CS. Patient comfort was evaluated on an 11-grade scale, ranging from 0 (free from pain) to 10 points (unbearable pain).
Results: The patients described the pain during the previous rigid CS as ranging from 4 to 10 (mean: 6.8) in group F and from 0 to 10 (mean: 5.8) in group R. Group R patients described the pain during the current rigid CS as ranging from 0 to 10 (mean: 5.7). No mean change in the grade was observed between the two pain descriptions (no change 11 patients, weaker pain 25 patients, stronger pain 24 patients; gamma 0.51, p < 0.0001). Group F described the pain as 1 to 5 (mean: 2.1). In the case of flexible CS, the pain experience was greatly lowered compared to the previous rigid CS. All flexible CS patients reported lowered pain (by 1 to 9 grades). Patients' age did not influence the comfort of the flexible CS or the change in pain level.
Conclusions: Flexible CS is better tolerated than rigid cystoscopy by male patients, regardless of patients' age.

Introduction
Cystourethroscopy (CS) is an endoscopic method used to visualize the urethra and the bladder. It is commonly used by urologists for the evaluation of hematuria and voiding symptoms, the performance of minor procedures such as foreign body removal, and the surveillance of urothelial carcinoma. Cystourethroscopy may be performed using either rigid or flexible cystourethroscopes, and both rigid and flexible devices have been shown to have equal efficacy in identifying tumors in the bladder [1,2]. Compared with flexible endoscopes, rigid instruments offer better image quality, a wider lumen of the working channel, improved irrigant flow and easier handling. However, flexible cystourethroscopes provide more options for patient positioning, enable smooth passage over an elevated bladder neck or median lobe, facilitate full inspection of the bladder because of their movable tip and, most importantly, significantly improve patient comfort. Only a few old and two recent studies comparing patient comfort are available; however, they are based on heterogeneous groups of patients and/or disorders, and some of them have unclear methodology or include CS under general anesthesia, which is no longer routinely performed nowadays [3-6]. There are not enough objective data on patient comfort in the literature to authoritatively demonstrate the superiority of either method.

Aim
In this study, we prospectively evaluated pain in men undergoing cyclic cystoscopic assessment with rigid and flexible instruments after transurethral resection of bladder tumor (TURB).

Material and methods
The appropriate institutional ethics committee approved the study.
We evaluated the pain perception of male patients who had undergone at least one rigid cystourethroscopy procedure in our department as standard surveillance after a TURB procedure due to urothelial cell carcinoma (UCC) and were scheduled for the next CS procedure. From the group of 120 patients, 60 patients were randomly assigned to undergo the current flexible CS (group F), and the other 60 patients underwent the current rigid CS as controls (group R). All procedures were performed in the Urology and Urologic Oncology Department of Wrocław Medical University, and one urologist conducted all flexible procedures. Exclusion criteria included CS with any type of intervention, patients with ureteral catheters, a history of any surgery on the genitourinary tract other than TURB, and urinary tract infection. Patient comfort was evaluated with a numeric rating scale (NRS) [7]. An 11-point numerical scale, ranging from 0 (free from pain) to 10 points (unbearable pain), was used. Comfort was assessed a few minutes after every CS. Procedures were performed with a rigid 20 French Storz and a flexible 15 French Wolf cystourethroscope. All instruments were inserted and advanced under direct operator vision. All procedures started with standard disinfection of the external genitalia with an antiseptic agent. Injection of a lubricant containing 2% lidocaine was performed at least 5 min before instrument insertion. As is well known, this maneuver reduces pain and enhances male patient comfort [8]. All procedures were performed without any systemic sedation or analgesia. No antimicrobial prophylaxis was used routinely. All CS procedures were carried out in the dorsal lithotomy position.

Statistical analysis
Statistical analysis was performed using Statistica 12.0 (StatSoft, Poland). Changes in distribution were analyzed with the χ2 test. Correlations between scores and age were assessed with the Spearman correlation, and pairs of score results were compared with the g statistic.

Results
The patients described the pain during the previous rigid CS as ranging from 4 to 10 (mean: 6.8) in group F and from 0 to 10 (mean: 5.8) in group R (Figure 1). No influence of age on the pain experience was observed (rs = -0.062, p = 0.50). Group R patients described the pain during the current rigid CS as ranging from 0 to 10 (mean: 5.7). No mean change in the grade was observed between the two pain descriptions (no change 11 patients, weaker pain 25 patients, stronger pain 24 patients; g = 0.51, p < 0.0001). Again, after the second rigid CS no influence of age was observed (rs = -0.024, p = 0.85). Group F described the pain as 1 to 5 (mean: 2.1). In the case of flexible CS, the pain experience was greatly lowered compared to the previous rigid CS (Figure 1, p < 0.0001). All flexible CS patients reported lowered pain (by 1 to 9 grades). Patients' age did not influence the reported comfort of the flexible CS (rs = 0.046, p = 0.73) or the change in the pain level (Figure 2).
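The g statistic used here is, presumably, the Goodman-Kruskal gamma, a concordance measure for paired ordinal scores; scipy provides the Spearman correlation directly, and gamma is easy to compute from concordant and discordant pairs. A sketch with fabricated NRS scores, not the trial data:

```python
import numpy as np
from scipy import stats

def goodman_kruskal_gamma(x, y):
    """gamma = (C - D) / (C + D) over all pairs; tied pairs are ignored."""
    c = d = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                c += 1
            elif s < 0:
                d += 1
    return (c - d) / (c + d) if (c + d) else 0.0

# Fabricated NRS pain scores (0-10) from two consecutive rigid CS procedures.
rng = np.random.default_rng(3)
first = rng.integers(0, 11, size=60)
second = np.clip(first + rng.integers(-2, 3, size=60), 0, 10)
print(f"gamma = {goodman_kruskal_gamma(first, second):.2f}")

# Spearman correlation between age and reported pain, as in the text.
age = rng.integers(45, 85, size=60)
rs, p = stats.spearmanr(age, second)
print(f"rs = {rs:.3f}, p = {p:.2f}")
```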
Discussion
To date, only two studies with reliable methodology evaluating patients' comfort during CS have been conducted. The EAU guidelines contain no recommendation of either technique as more favorable. There are no evidence-based data indicating which type of CS is better for a given patient, or what the risk factors for elevated pain perception during the procedure are. We analyzed a homogeneous group of 120 men who were under cystoscopic surveillance after TURB treatment because of UCC. We concluded that rigid CS was associated with significantly greater pain experienced by the patient during the procedure. Additionally, similar to the results of Seklehner et al., rigid CS was often associated with severe or even unbearable pain [5]. However, we observed no statistically significant difference in pain perception between younger and older patients, which is inconsistent with Seklehner's findings. Rather, we conclude that patients have a given susceptibility to pain perception, which in the case of repeated rigid CS manifested itself in correlated pain scores (g = 0.51, p < 0.0001). In the case of group F patients, who reported largely decreased pain scores after the second CS, no correlation between the two ratings was noted (g = 0.10, p = 0.37). As pain perception during rigid CS seems to be a characteristic of the given patient, and is largely decreased when flexible CS is performed, we believe that risk factors should be defined to identify patients who are particularly sensitive to pain. These patients should be referred to a hospital in which it is possible to perform flexible CS, because fear of pain can be a sufficient reason to abandon periodic surveillance after surgical treatment of UCC. For those patients, rigid CS under general anesthesia is an option, but it entails higher costs and a higher risk of complications, and cannot be performed in an outpatient department. We chose not to assess pain several days after the procedure, because it has been shown that patient comfort is similar in both groups at a later time after CS [5]. This study has some limitations. Firstly, rigid cystoscopy was not conducted by one surgeon. Secondly, the studied group was small; however, it was homogeneous.
A possible model to high TC ferromagnetism in Gallium Manganese Nitrides based on resonation properties of impurities in semiconductors

High-T_C ferromagnetism in (Ga,Mn)N has been observed, and almost all results closely resemble the experimental results in (Ga,Mn)As except for the value of T_C. Although all standard experiments on magnetism clearly support the results, the value is unexpectedly high. This work presents and discusses a possible mechanism for the high-T_C ferromagnetism, after a brief review of the experimental results. The key speculation, concerning a bosonization approach in three dimensions, resembles the problems encountered in Anderson localization.

Introduction

Ferromagnetism in diluted magnetic semiconductors (DMS) has become a center of attention for researchers because of the possibility of new functionalities in mainstream electronic applications. Beyond such applications, it is especially notable that diluted-impurity systems, i.e., systems with low carrier density in semiconductors, seem to pose common and interesting physical problems. For example, highly efficient thermoelectric conversion at room temperature and high-T_C superconductivity are considered typical such phenomena. Highly efficient thermoelectric materials such as BiTe [1] and high-T_C superconductors have carrier concentrations below ~10^27 m^-3 (~10^21 cm^-3) and impurity concentrations of several percent. High-T_C ferromagnetism is likewise observed in semiconductors under similar electronic conditions, 10^25-10^28 m^-3. It is notable that these effects appear in a relatively high temperature range even though the impurities are randomly distributed and dilute. These results lead us to infer that some unrecognized common property may exist in diluted-impurity systems in certain semiconductors; the effects appear under these conditions as "unthinkable phenomena" at relatively high temperatures. In the present work, we try to identify the key physical concept through a detailed investigation of the high-T_C ferromagnetism in (Ga,Mn)N.

The first observation of DMS ferromagnetism was reported for Ga1-xMnxAs (0 ≤ x ≤ 1), with T_C of 110 K (the family of Mn-doped crystals Ga1-xMnxAs is written simply as (Ga,Mn)As, and a similar abbreviation is used for other doped crystals) [2]. The value T_C = 110 K is relatively high for Mn impurities at a concentration of 3%. Since that work, numerous observations of DMS ferromagnetism have been reported for various materials. Recently, our group reported the extremely high T_C value of 940 K in (Ga,Mn)N [3,4]. Besides these reports, the ferromagnetism of Ca1-xLaxB6 has attracted special attention, because none of the ions in this crystal is magnetic; moreover, the estimated T_C is about 600 K [5]. However, careful experiments show the existence of iron impurities at a concentration of about 100 ppm [6], and Auger experiments show that the Fe impurities extend deeper than 1 µm [7]. This raises the question of what the origin of the ferromagnetism is at such an extremely low iron concentration. Although several models have been proposed, one of the most plausible is the DMS-ferromagnetism model.
We first confirm experimentally the ferromagnetism and the remarkably high T_C of (Ga,Mn)N, weighing it against the possibility of superparamagnetism. Although these experiments are quite standard, some theorists have strongly disputed the possibility of such a high T_C value. We therefore raise some basic questions about their calculations from an experimentalist's physical picture and, moreover, speculate on and present one possible model to explain the experimental results of the high-T_C ferromagnetism in (Ga,Mn)N. An important hint for the high-T_C problem is provided by the idea of the lasing effect. The most remarkable point is that lasing action can be considered a kind of phase transition at high temperature [8]; notably, lasing action appears at high temperature and is produced in media with randomly distributed, dilute active impurities. In the present work, referring to the conditions for lasing action, we discuss the possibility of high-T_C ferromagnetism in (Ga,Mn)N. After that, we briefly argue the adequacy and applicability of the key concept used in the model.

The crystal structure and the stacking structure of layers in the sample are shown in Fig. 2. The stacking structure is typical of the (Ga,Mn)N films grown in the present work on a sapphire (0001) substrate: before growth of the (Ga,Mn)N films, a non-doped GaN buffer layer was grown with a thickness of 200 nm (2000 Å). Two types of buffer layers were used. For one sample, the buffer layer was grown by the NH3-MBE method, in which NH3 gas is used as the nitrogen source in MBE. For the other, the buffer layers were grown using an rf plasma of nitrogen gas, and the sample films were grown in a different MBE vacuum chamber. The Mn concentrations x in Ga1-xMnxN films were estimated with an Electron Probe Micro Analyzer (EPMA), which also provided typical data for the Mn distribution along the growth direction.

Magnetic measurements

As is readily seen from the data in Fig. 1, the value of T_C is remarkably high. The hysteresis curve clearly indicates typical ferromagnetic behavior. The remanent moment implies spontaneous spin polarization and the existence of domains. The domain size is set by the energy balance between the exchange energy and the total spin dipole energy in the domain, and the typical domain size is larger than several hundred nanometers. This means that ferromagnetic particles would have to be larger than several hundred nanometers if the ferromagnetism of (Ga,Mn)N originated from an assembly of ferromagnetic grains. However, the EPMA and SIMS experiments clearly rule out such a grain-like structure.

The dependence of the carrier density shown in Fig. 4 was obtained from Hall resistance data. Notably, for Mn concentrations higher than 3% in (Ga,Mn)N the carriers are p-type, while pure GaN grown by the NH3-MBE method shows n-type conduction. The GaN film has an n-type carrier density of 10^27 m^-3 (10^21 cm^-3), but for a sample with a Mn concentration of 6.8%, for example, the hole carrier density of (Ga,Mn)N is 10^26 m^-3 (10^20 cm^-3). GaN produced by the NH3 method shows electron conduction with metallic temperature characteristics; the most plausible origin of this electron conduction is lattice defects such as anti-site structures.
As the Mn impurity concentration increases, the carrier type changes from n-type to p-type, and ferromagnetism in (Ga,Mn)N appears above a Mn concentration of 3%. Although the hole density becomes higher with increasing Mn concentration, the ratio of the ferromagnetic part to the total magnetization does not increase much. From the Hall resistance data, the temperature dependence of the carrier density is shown in Fig. 5. As seen there, the increase of resistance at low temperature depends strongly on the decrease of carrier density. The hole conduction can be explained by electron hopping from Mn2+ to Mn3+; the existence of a mixed valence state of Mn ions in (Ga,Mn)N was recently reported by our group [10]. From the data in Fig. 5, the main reason for the increase in resistivity is considered to be the decrease of carrier density. Furthermore, the decrease of carrier concentration coincides with the decrease of spontaneous magnetization, as seen in Fig. 4(D).

As discussed in ref. 4, the double exchange mechanism explains the experimental results on the ferromagnetism in (Ga,Mn)N. The double exchange model was first proposed by the present authors in previous work [4], and recently the model has also been discussed and supported theoretically. Electron hopping conduction between Mn atoms decreases with decreasing temperature; therefore, the spontaneous magnetization arising from the double exchange mechanism may be reduced at low temperatures. The dotted arrows in Fig. 4(c) and (d) indicate that the threshold temperature below which carrier trapping becomes markedly large corresponds to the temperature below which the spontaneous magnetization shows the reduction. From the experimental results, the following physical picture emerges: some Mn ions are in mixed valence states of Mn2+ and Mn3+, and a double exchange mechanism based on hopping conduction is supported by EXAFS and XANES spectra, by the transport properties, and by the strong coincidence between the anomalous regions of the transport and magnetic properties. It is especially notable that the zero-field spin polarization, with T_C exceeding room temperature, decreases sharply below 10 K. This directly implies that the conduction electrons produce the ferromagnetism, because this temperature range clearly coincides with the region where the hole carrier density decreases. Thus the problem is: what is the origin of the high-T_C ferromagnetism in (Ga,Mn)N at Mn impurity concentrations of only a few percent?

3. A possible model and its application to other problems

Question to the theoretical calculation

Although ferromagnetism in (Ga,Mn)N has been experimentally demonstrated, some theorists flatly deny the experiments on the basis of their calculations [9]. In that theory, the band structure of (Ga,Mn)N is calculated by the so-called KKR-CPA method. This is a kind of APW method, in which the crystal space is separated into a free-electron region and atomic-sphere (muffin-tin-type potential) regions. In this method the atomic potential is restricted to a sphere of characteristic radius R contained in the Wigner-Seitz cell. This procedure raises a question: the segmentation into Wigner-Seitz cells might be inadequate, because the imperfect screening length in low-carrier-density DMS materials is much longer than in metals.
In metals there are enough electrons that the region of imperfect electronic screening around a cation is short compared with the size of the Wigner-Seitz cell, and the segmentation method is quite adequate. The imperfect screening length in (Ga,Mn)N can be estimated simply using the Thomas-Fermi approximation [12], in which the screening length r_shield is obtained from the carrier density and fundamental constants. The atomic potential plays a relatively important role within the imperfectly screened region; in fact, hydrogenic exciton spectra in (Ga,Mn)N have already been observed in the ultraviolet region [11]. Applying the formula to (Ga,Mn)N, the estimated screening size (called the "Thomas-Fermi diameter" in this work) is longer than 7 nm. This size is thus much longer than the longest lattice constant, c = 0.517 nm. By comparison, the muffin-tin radii in the theory are 0.1016 nm for cations and 0.9252 nm for anions [13]. Although the free-electron region is made as narrow as possible in the APW method, with smooth matching between the core and free-electron wave functions, the spin state is strongly constrained by the properties of the plane-wave functions: the free-electron part exhibits a spin-singlet state at every energy level, as the resultant contribution of Coulomb and exchange integrals based on plane-wave functions. The Thomas-Fermi effect might therefore pose severe problems for the numerical estimation. Because the long-range part of the Coulomb interaction is more important within the Thomas-Fermi region, all magnetic ions of (Ga,Mn)N inside that region can be considered to contribute directly to the magnetic order.
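The text quotes a Thomas-Fermi diameter longer than 7 nm but does not list the parameters entering the estimate. As a rough numerical companion, the sketch below evaluates the standard degenerate-gas Thomas-Fermi screening length; the effective mass and relative dielectric constant are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: Thomas-Fermi screening length for a degenerate carrier gas.
# The paper quotes a "Thomas-Fermi diameter" > 7 nm for (Ga,Mn)N but does not
# list the parameters used; the effective mass and dielectric constant below
# are illustrative assumptions, not values from the paper.
import math

HBAR = 1.054571817e-34      # J s
M_E = 9.1093837e-31         # kg
E_CHARGE = 1.602176634e-19  # C
EPS0 = 8.8541878128e-12     # F/m

def thomas_fermi_length(n, m_eff=1.0, eps_r=9.0):
    """Screening length (m) of a degenerate electron/hole gas of density n (m^-3)."""
    m = m_eff * M_E
    e_fermi = HBAR**2 * (3.0 * math.pi**2 * n) ** (2.0 / 3.0) / (2.0 * m)
    # k_TF^2 = e^2 g(E_F) / (eps0 eps_r), with g(E_F) = 3n / (2 E_F)
    k_tf_sq = 3.0 * n * E_CHARGE**2 / (2.0 * EPS0 * eps_r * e_fermi)
    return 1.0 / math.sqrt(k_tf_sq)

for n in (1e24, 1e25, 1e26):  # m^-3, spanning the carrier densities quoted in the text
    lam = thomas_fermi_length(n)
    print(f"n = {n:.0e} m^-3 -> screening length {lam*1e9:.2f} nm, diameter {2*lam*1e9:.2f} nm")
```

Note that in this degenerate form the result depends only weakly on density (the length scales as n^-1/6), so the >7 nm figure quoted above presumably rests on different assumptions, e.g., a nondegenerate Debye-Hückel form or a heavier hole mass.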
A model for high-T_C ferromagnetism in (Ga,Mn)N

Although ferromagnetism in (Ga,Mn)N has been experimentally demonstrated, the remaining question is why such a high T_C is realized under the conditions of dilute, randomly distributed Mn ions and low carrier density. In our model, the ferromagnetic order is produced by the double exchange mechanism based on virtual bound states coupled by spin-correlated electron hopping, where the spin-correlated transfer is generated by the s-d Hamiltonian. The model is shown schematically in Fig. 6. The condition of low carrier density follows from the experimental result in Fig. 4(D). These conditions are quite unfavorable for magnetic order in usual materials. As noted in the previous section, an important hint is provided by the lasing effect: lasing action can be considered a kind of phase transition at high temperature [8], and it can operate at high temperature even though the lasing medium contains randomly distributed, dilute active atoms. Referring to the conditions for lasing action, we discuss the possibility of high-T_C ferromagnetism in (Ga,Mn)N. We first consider the one-dimensional case for simplicity and discuss the three-dimensional case later. Note that Mn impurities can act as scattering centers for conduction electrons, and ions on substitutional sites can form a resonator for the conduction electrons, because some electron waves can form standing waves in the space between two Mn ions.

Thus, we assume that the role of the laser light corresponds to that of the electron wave in the conduction band, and a pair of Mn impurities is considered a one-dimensional resonator; we temporarily call this resonator formed by Mn impurities the "impurity resonator". The impurity resonator for electron waves corresponds to the optical resonator in laser action. On this background, we propose a lasing-like mechanism for high-T_C ferromagnetism in (Ga,Mn)N as a possible model, using the correspondence summarized in Table I. Comparing the conditions for lasing and for high-T_C ferromagnetism, their similarity is evident, except for the statistics in item 11) of Table I.

Condition 6) is satisfied as follows. First, we emphasize that this work is based on the following experimental results for (Ga,Mn)N: a) the existence of mixed ionic states of Mn2+ and Mn3+ together with hole conduction in the ground state (the concentration dependence shows both n- and p-type conduction, but p-type is the majority in the ferromagnetic state at Mn concentrations above 3%, as discussed in ref. 4); b) GaN crystals without Mn ions grown by the ammonia method have n-type conductivity; c) the appearance of ferromagnetism is strongly related to the conductivity, as described in the text and ref. 4. These characteristics originate from the ground-state properties of this crystal. Given these results, we may assume that the hole conduction is produced by hopping between Mn2+ and Mn3+ states. This model follows the double exchange mechanism, which directly implies a ferromagnetic ground state coupled to the hopping conduction. If the ferromagnetism is represented by electron states in the conduction band together with the two Mn ionic states (the mixed valence state) on Ga sites of GaN, the ferromagnetic state resembles an excited state. In (Ga,Mn)N, many holes of Mn2+ already exist as Mn3+ ions (the excited state generated by ionization of Mn2+). A population inversion of holes is realized in the ground state when the population of Mn3+ exceeds that of holes in the virtual bound states of Mn2+, which lie across the Fermi level. Observation of hole conduction supports the existence of such a mechanism. This structure is produced during the crystal growth of (Ga,Mn)N.

Condition 11) can be satisfied in the one-dimensional case, because scattering phenomena near ε_F in the electron system can be described in a bosonization representation. The basis of the bosonization representation is given by standing-wave states: the standing-wave representation is produced by a Bogoliubov transformation of the free-electron states, and the transformed creation and annihilation operators satisfy Bose commutation relations. This means that the one-dimensional system satisfies the condition for lasing action. In the case of high-T_C ferromagnetism, the "lasing" electron is a standing-wave state and is considered a kind of pairing state. On this background, we may speculate the following model: 1) In the three-dimensional case, not all electrons admit a complete bosonization representation, but special electron modes may satisfy the condition. This possibility is similar to the problem of Anderson localization in a highly condensed electron system.
If this speculation is true, the coexistence of ferromagnetism and (super)paramagnetism becomes quite an intrinsic feature, and the ratio of about 20% corresponds to the ratio of bosonic modes to the total number of states. 2) The electronic states are double exchange states coupled with the lasing standing waves formed in the conduction bands. 3) The origin of the high working temperature is given by the corresponding physical picture. Thus, the present model is considered a possible explanation for the coexistence of ferromagnetism and paramagnetism and for the high T_C value of (Ga,Mn)N.

Discussion

This model could potentially apply to other phenomena produced by substitutionally doped mixed-valence ions at low carrier density in semiconductors. One characteristic property of such phenomena is the high working temperature, which in our picture is supported by certain modes of the "impurity resonator" structure: in a three-dimensional crystal, a subset of conduction modes remains effective up to high temperature. These effects should be reflected in the electronic states. Indeed, some papers have reported the existence of narrow spectra near ε_F even at high temperatures in the working range; in X-ray photoemission research such spectra are called coherent spectra. The existence of coherent spectra has been reported in DMS ferromagnetic materials [14,15], high-T_C superconductors [16], and other spin-correlated materials [17]. The persistence of these sharp bands at high temperature seems to support the present model. The model, however, rests on speculations about the existence of the "impurity resonator" and about the ratio of standing-wave modes in three dimensions; the present authors hope for theoretical support on these problems. This is the elementary process of electron hopping.
Molecular identification and antibiogram profiles of respiratory bacterial agents isolated from cattle reared in some selected areas of Mymensingh division, Bangladesh

Respiratory bacterial infections in cattle are very common all over Bangladesh and cause high economic losses. This research was performed with a view to the proper control of respiratory bacterial infections of cattle in Bangladesh. A total of 100 nasal samples were collected on the basis of clinical signs. Isolation, identification and characterization of the bacterial agents from the collected samples were performed using cultural, biochemical and molecular techniques, and antibiogram profiles of the isolated agents were studied by the disc diffusion method. Pasteurella multocida, Staphylococcus aureus and Escherichia coli were successfully isolated and identified from the collected samples. The isolated Pasteurella multocida produced small, round, opaque colonies on blood agar; Staphylococcus aureus produced golden yellow colonies on mannitol salt agar; E. coli produced black colonies with a metallic sheen on EMB agar. Pasteurella multocida appeared as Gram-negative bipolar rods, Staphylococcus aureus as Gram-positive cocci, and E. coli as small Gram-negative rods. Among the 100 nasal samples, 16 were found positive for Pasteurella multocida, 21 for Staphylococcus aureus and 13 for E. coli on the basis of cultural and biochemical characteristics. The antibiogram study indicated that ciprofloxacin, tetracycline and chloramphenicol should be the first choice for treatment of respiratory infections caused by the three isolated bacteria. Pasteurella multocida was further characterized by PCR, where 16 isolates showed a positive band at 460 bp and Pasteurella multocida type A at 1044 bp. The present research work, covering the antibiogram study, is a preliminary report in the context of Bangladesh.

Introduction

Respiratory disease is among the most economically important diseases of cattle in Bangladesh and all parts of the world. Annual losses to the US cattle industry are estimated to approach US$1 billion, whereas prevention and treatment costs are over US$3 billion annually (Griffin, 2006; Snowder et al., 2007). Respiratory disease of cattle is typically not due to a single cause but is usually caused by a combination of several factors, such as infectious viral and bacterial agents, as well as other factors that stress the animal. Pneumonia is the most frequently occurring respiratory infection in domestic animals, its etiologic agents being bacteria, viruses, or viruses complicated with bacteria (Allan et al., 1991). Pasteurella multocida causes various diseases in mammalian and avian species (Carter, 1967). Pasteurella multocida type A strains cause pneumonia in cattle, sheep and pigs, fowl cholera in birds, and "snuffles" in rabbits (Carter, 1967); strains of types B and E cause haemorrhagic septicaemia in cattle and buffaloes (Bain et al., 1982); and type D strains cause pneumonia in cattle and atrophic rhinitis in pigs (Rutter, 1985). Staphylococcus aureus, Streptococcus pneumoniae (Beiter et al., 2006) and
E. coli (Wessely et al., 2005) are other pneumonic pathogens that, albeit less frequently, can be recovered from pneumonic lungs. Bovine respiratory disease (BRD) causes increased death losses as well as medication costs, labor, and lost production; BRD accounts for approximately 75% of feedlot morbidity and 50-70% of all feedlot mortality (Edwards, 2010). The percentages of morbidity and mortality depend on the management system in place, the prevention program and the kinds of pathogens involved. Different antibiotics are used in the treatment of respiratory diseases in cattle, but antibiotic resistance among bacteria is creating a serious threat throughout the world. In the past two decades a rise in antibiotic resistance has been reported in many countries including Bangladesh (Kapil, 2004), possibly due to indiscriminate use of antimicrobial agents (Nazir et al., 2005). This problem increases the importance of antibiotic sensitivity testing to identify the appropriate antibiotic for a given bacterium affecting the respiratory system. To the best of our knowledge, not much work has been carried out in Bangladesh on molecular detection of the bacteria associated with respiratory diseases of cattle that also covers an antibiogram study. This study was therefore designed to detect bacteria from cattle suffering from respiratory diseases using a polymerase chain reaction (PCR) based approach, including their antibiogram profiles.

Sample collection, primary isolation and identification of the bacteria by conventional methods

A total of 100 field samples (nasal secretions and swabs) were collected from cattle with suspected respiratory problems in the study area, including the dairy farm of BAU, Pirbari (Mymensingh Sadar), Poarkandi (Muktagacha), Dewangonj (Jamalpur), Mohangonj (Netrokona) and Nakhla (Sherpur). The samples were collected aseptically with sterilized cotton buds from the nostrils, transported to the laboratory immediately after collection by inoculation into nutrient broth, and incubated at 37°C overnight for enrichment. The broth culture was then streaked onto nutrient agar, blood agar, MacConkey agar, eosin methylene blue (EMB) agar and mannitol salt agar (MSA); all media were purchased from HiMedia, India. Suspected colonies were further analyzed by Gram's staining and biochemical tests for preliminary isolation and identification of bacteria from respiratory diseases of cattle (Cheesbrough, 2006).

Molecular identification of the species and of type A specific P. multocida by PCR

DNA was extracted from the isolated bacteria using the conventional boiling method. PCR was performed to detect P. multocida using the species-specific primers KMT1T7 and KMT1SP6 (sequences in Table 2), with an amplicon size of 460 bp, and to detect P. multocida type A using the type-specific primers CAPA-FWD and CAPA-REV (sequences in Table 2), with an amplicon size of 1044 bp. The PCR reaction mixture (25 µL) was prepared by mixing 12.5 µL master mix (Maxima Hot Start Green, USA), 1 µL of each primer, 8.5 µL deionized water and 2 µL DNA template. Amplification was performed in a thermal cycler; for the P. multocida species-specific primers the program was: initial denaturation at 95°C for 5 min, followed by 30 cycles of denaturation at 95°C for 1 min, annealing at 49°C for 1 min and elongation at 72°C for 1 min, with a final extension at 72°C for 7 min. Amplification for the P. multocida type A specific primers was: initial denaturation at 95°C for 5 min, followed by 30 cycles of denaturation at 95°C for 0.3 min, annealing at 55°C for 0.3 min and elongation at 72°C for 1.3 min, with a final extension at 72°C for 5 min.
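For readers who want to sanity-check run times or port the protocol to a different thermocycler, the sketch below encodes the two cycling programs exactly as stated above (the 0.3 min and 1.3 min hold times are transcribed verbatim). The program representation itself is ours, not the paper's, and ramp times are ignored.

```python
# Minimal sketch: the two cycling programs described above, encoded as data,
# with a rough total run-time estimate (ramp times ignored). Step durations
# are taken verbatim from the text.

SPECIES_PCR = {
    "initial_denaturation": (95, 5.0),               # (°C, minutes)
    "cycles": 30,
    "per_cycle": [(95, 1.0), (49, 1.0), (72, 1.0)],  # denature, anneal, extend
    "final_extension": (72, 7.0),
}

TYPE_A_PCR = {
    "initial_denaturation": (95, 5.0),
    "cycles": 30,
    "per_cycle": [(95, 0.3), (55, 0.3), (72, 1.3)],
    "final_extension": (72, 5.0),
}

def total_minutes(program):
    """Sum of hold times for one run of the given cycling program."""
    cycle_time = sum(minutes for _temp, minutes in program["per_cycle"])
    return (program["initial_denaturation"][1]
            + program["cycles"] * cycle_time
            + program["final_extension"][1])

print(f"species-specific PCR: ~{total_minutes(SPECIES_PCR):.0f} min of hold time")
print(f"type A PCR:           ~{total_minutes(TYPE_A_PCR):.0f} min of hold time")
```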
Electrophoresis was run at 100 V for 30 minutes on a 1.5% agarose gel (Sigma-Aldrich, USA) after mixing the PCR product with loading buffer, alongside a 1-kb DNA size marker (Promega, USA). The agarose gel was then stained with ethidium bromide (0.5 µg/mL), destained in distilled water and placed on a UV transilluminator for visualization and image documentation.

Results and Discussion

Based on cultural (Table 1), staining and biochemical characteristics, among the 100 nasal samples 16 were found positive for P. multocida, 21 for S. aureus and 13 for E. coli. The positive P. multocida isolates were further confirmed by polymerase chain reaction (PCR). The sugar fermentation tests showed that P. multocida fermented sucrose, mannitol and dextrose producing only acid, but did not ferment lactose and maltose, whereas all five basic sugars were fermented by E. coli with production of acid and gas, and by S. aureus with production of acid only. The other biochemical tests showed that P. multocida was positive in the indole, catalase and oxidase tests but negative in the MR-VP tests, whereas both E. coli and S. aureus were positive in the MR, indole and catalase tests but negative in the VP test. DNA extracted from the P. multocida isolates was used in the PCR assay for molecular identification. PCR with the KMT1T7 and KMT1SP6 primers (species specific) identified 16 isolates as positive for P. multocida, showing amplification at 460 bp (Figure 5). PCR with the CAPA-FWD and CAPA-REV primers identified 16 isolates as positive for P. multocida type A, showing amplification at 1044 bp (Figure 6).

The antibiogram study revealed that P. multocida was highly sensitive to ciprofloxacin, chloramphenicol, tetracycline and streptomycin, intermediate to erythromycin and azithromycin, but resistant to amoxicillin. For E. coli, chloramphenicol, kanamycin, nalidixic acid and tetracycline were sensitive, ciprofloxacin intermediate, but amoxicillin and erythromycin resistant. For S. aureus, ampicillin, ciprofloxacin, tetracycline, erythromycin and gentamycin were sensitive, but amoxicillin was resistant. The antibiogram profiles are presented in Table 3 and Figures 7, 8 and 9. The objectives of the present research work were to isolate and identify the bacterial etiological agents using conventional as well as molecular techniques, along with studying the antibiogram profiles of the isolated bacterial species from respiratory diseases of cattle in Bangladesh.

The colony characteristics of P. multocida from cattle on blood agar and nutrient agar were similar to the findings of Naz et al. (2012), Ashraf et al. (2011) and De Alwis (1996). The morphology of the isolated P. multocida, Gram-negative bipolar rods, single or paired in arrangement, was supported by Cowan (1985) and Ashraf et al. (2011). The sugar fermentation tests revealed all P. multocida isolates as fermenters of dextrose, sucrose and mannitol with acid production, and non-fermenters of maltose and lactose, in agreement with Buxton and Fraser (1977). The isolates were also negative in the MR and VP tests and positive in the indole, catalase and oxidase tests, as reported by Buxton and Fraser (1977). Also in this study, the colony characteristics of the isolated E.
coli on NA, EMB and MacConkey agar showed similarities to the findings of Kalin et al. (2012) and Nazir et al. (2005). The E. coli isolates showed complete fermentation of the five basic sugars with production of both acid and gas, as supported by Thomas et al. (1998), and were positive in the MR and indole tests but negative in the VP test, in line with Buxton and Fraser (1977). The morphology of the isolated S. aureus found in Gram's staining agreed with Kitai et al. (2005). The S. aureus isolates showed complete fermentation of the five basic sugars with acid production, supported by Mckec et al. (1995) and the OIE Manual (2012), and all were positive in the catalase, indole and MR tests but negative in the VP test, as reported by Cheesbrough (2006). In the molecular detection of this study, a 1044-bp band was seen in each lane of the PCR product for P. multocida type A; the P. multocida type A isolates were thus similar to the findings of Khalid et al. (2017). The antibiogram findings for P. multocida and E. coli described above were in close agreement with those of Akond et al. (2009) and Jeyasanta et al. (2012), and the sensitivity pattern of S. aureus was almost similar to that found by Farzana et al. (2004).

Conclusions

The findings of the present research work should help field veterinarians select appropriate antibiotics against cattle respiratory diseases throughout the country and address the issue of bacterial antibiotic resistance. Considering the importance and impact of respiratory disease in cattle and the drug resistance of the agents involved, steps should be taken by the government to maintain strict hygienic measures and the proper use of antibiotics.

Figure 6. PCR image of P. multocida type A. Lane M: 1-kb DNA ladder; Lane 7: negative control; Lane 6: positive control; Lanes 1-5: isolated samples of P. multocida type A.

Table 1. Results of cultural, staining and morphological characteristics of the P. multocida, S. aureus and E. coli isolated from cattle. Legends: NA = nutrient agar, BA = blood agar, EMB = eosin methylene blue agar, MAC = MacConkey agar, MSA = mannitol salt agar.
Does wild-type Cu/Zn-superoxide dismutase have pathogenic roles in amyotrophic lateral sclerosis?

Amyotrophic lateral sclerosis (ALS) is characterized by adult-onset progressive degeneration of upper and lower motor neurons. Increasing numbers of genes are found to be associated with ALS; among those, the first identified gene, SOD1, coding the Cu/Zn-superoxide dismutase protein (SOD1), has been regarded as the gold standard in research on the pathomechanism of ALS. Abnormal accumulation of misfolded SOD1 in affected spinal motor neurons has been established as a pathological hallmark of ALS caused by mutations in SOD1 (SOD1-ALS). Nonetheless, the involvement of wild-type SOD1 remains quite controversial in the pathology of ALS with no SOD1 mutations (non-SOD1 ALS), which accounts for more than 90% of total ALS cases. In vitro studies have revealed post-translationally controlled misfolding and aggregation of wild-type as well as mutant SOD1 proteins; therefore, SOD1 proteins could be a therapeutic target not only in SOD1-ALS but also in the more prevalent non-SOD1 ALS. In order to search for in vivo evidence of misfolding and aggregation of wild-type SOD1, we reviewed pathological studies using mouse models and patients and then summarized the arguments for and against possible involvement of wild-type SOD1 in non-SOD1 ALS as well as in SOD1-ALS.

Background

Amyotrophic lateral sclerosis (ALS) is an adult-onset neurodegenerative disease classically characterized by loss of motor neurons in the central nervous system, including the motor cortex, brainstem, and spinal cord [1]. The loss of motor neurons leads to an inability to control voluntary muscles and ultimately results in respiratory failure. Only two drugs, Riluzole and Edaravone, are currently available, but their therapeutic effects are limited to the extent that survival can be extended by at most a few months [2]. Together with full elucidation of the pathomechanism, therefore, development of efficient cures for this devastating disease has long been demanded.

In 1993, mutations in the gene encoding Cu/Zn-superoxide dismutase (SOD1) were first reported as a cause of ALS [3], and since then, more than 30 genes responsible for ALS have been identified [1]. A genetic cause or predisposition still remains unclear in most ALS cases (~80%), and SOD1 mutations account for only approximately 3% of total ALS cases (called SOD1-ALS) [4]. Nonetheless, pathological examinations of SOD1-ALS cases provide important clues to the disease mechanisms; namely, SOD1 proteins abnormally accumulate and form inclusions selectively in affected motor neurons [5]. Based upon such pathological observations, a mechanism has been proposed whereby SOD1 proteins assume an abnormal conformation (misfold) owing to an amino acid substitution corresponding to a pathogenic mutation, accumulate as oligomers/aggregates, and then exert toxicity that kills motor neurons [6].

Several researchers have attempted to extend the pathological roles of SOD1 misfolding in SOD1-ALS to the more prevalent ALS cases in which no mutations in the SOD1 gene are confirmed (non-SOD1 ALS). In other words, wild-type SOD1 could cause ALS when it somehow misfolds. Nonetheless, experimental results on the involvement of wild-type SOD1 in non-SOD1 ALS are not consistent among different research groups, making this issue highly controversial.
In order to discuss SOD1 proteins as a potential target for the development of ALS therapeutics, we comprehensively reviewed reports on the possible roles of wild-type SOD1 in the pathology of ALS.

Misfolded forms of SOD1 as a pathological hallmark of SOD1-ALS

SOD1 is a metalloenzyme that catalyzes the disproportionation of the superoxide anion into hydrogen peroxide and molecular oxygen [7]. The enzymatic activity in most patients with SOD1 mutations was almost half that of healthy controls [8], which had initially been considered the trigger of pathological changes in ALS. Indeed, homozygous and even heterozygous knockout of the Sod1 gene in mice produced a wide range of phenotypes relevant to ALS, such as slowly progressive motor deficits [8]. Recently, furthermore, human patients with a homozygous truncating variant c.335dupG (p.C112Wfs*11) in the SOD1 gene, which leads to total absence of the enzymatic activity, were reported, and the resulting phenotype was marked by progressive loss of motor abilities [9,10]. Heterozygous carriers of the c.335dupG variant had an approximately halved SOD1 activity compared to normal controls but appear not to develop symptoms of ALS [10]. Also, the Sod1-knockout mice did not develop ALS-like pathologies [8]; instead, overexpression of mutant SOD1 in mice reproduces ALS-like pathological changes with a significant increase in SOD1 enzymatic activity [11]. While a reduction in SOD1 enzymatic activity might modify the ALS pathomechanism, mutant SOD1 is thus considered to cause the disease not through loss of enzymatic activity but through a gain of new properties toxic to motor neurons.

As a pathological hallmark of SOD1-ALS, SOD1 proteins are known to accumulate abnormally in motor neurons (e.g. [5]), leading to the prevailing idea that pathogenic mutant SOD1 gains toxicity through misfolding into non-native conformations. While abnormal accumulation of SOD1 in motor neurons does not necessarily mean misfolding of SOD1, biophysical examinations in vitro using recombinant SOD1 proteins have strongly supported conformational changes of SOD1 caused by the amino acid substitutions of pathogenic mutations. SOD1 is functionally and conformationally matured through post-translational processes including copper and zinc binding and disulfide formation [12]. The bound copper ion acts as a catalytic center, whereas the bound zinc ion and the intramolecular disulfide bond stabilize the native structure [13-15]. Pathogenic mutations decrease the affinity of SOD1 for the metal ions and/or the stability of the disulfide bond [16,17], thereby disturbing the native conformation of SOD1. In other words, post-translational maturation appears to be hampered in the mutant SOD1 proteins, resulting in an increased propensity of SOD1 to misfold into oligomers and aggregates. Indeed, in transgenic mice expressing human SOD1 with ALS-causing mutations (G37R and G93A), oral administration of the copper complex Cu(II)(atsm) facilitates copper binding of mutant SOD1 in the spinal cord and improves the neurological phenotype and survival [18-20]. Also, further expression of CCS, a copper chaperone assisting the maturation of SOD1 in vivo [21,22], remarkably extends the survival of the transgenic mice administered Cu(II)(atsm) [23].
In the absence of Cu(II)(atsm) administration, overexpression of CCS in the transgenic mice (G37R and G93A) is known to dramatically reduce the mean survival (from 242 days to 36 days), to which mitochondrial dysfunction appears to contribute through perturbation of intracellular copper dynamics [24,25]. Increased amounts of CCS would supply most of the intracellular copper ions to the overexpressed mutant SOD1 proteins; the copper ions are therefore not recruited to other copper-requiring enzymes such as cytochrome c oxidase in mitochondria. Indeed, overexpression of CCS did not influence the disease phenotypes of transgenic mice expressing human SOD1 with L126Z or murine SOD1 with the G86R mutation [24], which are considered unable to bind a copper ion. Also notably, the marked acceleration of disease in the transgenic mice (G93A) with CCS overexpression was not observed when the mice had an additional mutation, H80G, in the SOD1 (G93A) transgene [26]. This is probably because the zinc binding in G93A-mutant SOD1 was compromised by substitution of a zinc ligand (His80) to Gly; given the important roles of zinc binding in the conformational stabilization of SOD1 [14,27], H80G/G93A-mutant SOD1 was not able to receive a copper ion from the overexpressed CCS. Misfolding of SOD1 proteins in vivo as well as in vitro will hence be circumvented through post-translational maturation of SOD1, which would eventually reduce the toxicity of mutant SOD1 proteins.

Pathological roles of wild-type human SOD1 in transgenic mouse models of SOD1-ALS

Given that wild-type SOD1 misfolds in vitro when it loses the bound metal ions and/or the conserved disulfide bond [28], SOD1 could exert disease-causing toxicity even without pathogenic amino acid substitutions. Indeed, co-expression of wild-type human SOD1 in transgenic mice expressing ALS-linked mutant human SOD1 (G37R, G85R, G93A, and L126Z) is known to accelerate disease onset, suggesting toxicity of wild-type human SOD1 [29-35]. Also, mice did not develop ALS-like symptoms upon expression of A4V-mutant human SOD1, but co-expression of wild-type human SOD1 in the A4V-SOD1-expressing mice did trigger the progression of ALS-like disease [29]. Taking advantage of the distinct electrophoretic mobilities of wild-type and mutant SOD1 proteins (G85R and L126Z), wild-type human SOD1 was furthermore found to accumulate as detergent-insoluble aggregates together with the mutant proteins in transgenic mice [29,31,33,34], although the interactions in the aggregates would not be a simple co-assembly of mutant and wild-type proteins [33]. The mechanism of the disease-accelerating effects of wild-type SOD1 remains unclear, but heteromeric interactions between wild-type and mutant SOD1 appear to aggravate aggregation and toxicity in cultured cell models [36] and correlate with disease severity [37].

It should also be noted that, in some studies, overexpression of wild-type human SOD1 did not affect the onset or duration of disease in mice expressing G85R-mutant human SOD1 [5] or G86R-mutant murine SOD1 [38]. Furthermore, disease-related phenotypes were not observed in transgenic mice expressing human SOD1 harboring multiple mutations, including those at the copper- and zinc-binding sites (H46R/H48Q/H63G/H71R/H80R/H120G) and at the two free Cys residues (C6G/C111S), together with the ALS-linked mutation H43R, and co-expression of wild-type human SOD1 did not cause disease [35].
Such apparent discrepancies would, nonetheless, indicate that the expression levels of SOD1, as well as the interactions between wild-type and mutant SOD1, play key roles in the toxicity of wild-type human SOD1.

Even in the absence of ALS-causing mutant SOD1, overexpression of wild-type human SOD1 alone can exert motor neuron toxicity in mice. In hemizygous transgenic mice expressing wild-type human SOD1, lifespan was not affected, but neurodegenerative changes appeared in old age, including mitochondrial vacuolization, axonal degeneration and a moderate loss of spinal motor neurons [32,39,40]. Upon decreasing glutathione levels, the mice developed overt motor symptoms, and their lifespan was decreased [41]. Also, spinal cord homogenates from the hemizygous wild-type human SOD1 transgenic mice were found to show age-dependent, progressive formation of high-molecular-weight SOD1 aggregates [40,42], which would be caused by oxidation of a unique tryptophan in SOD1 upon endoplasmic reticulum stress [42]. Furthermore, homozygous wild-type human SOD1 transgenic mice had significantly increased expression levels of wild-type human SOD1 and thereby developed an ALS-like syndrome with formation of aggregated SOD1 in the spinal cord and brain [43]. Even without any amino acid substitutions, therefore, wild-type human SOD1 can exert motor neuron toxicity in model animals under certain experimental conditions.

Possible involvement of wild-type SOD1 in pathological inclusions of SOD1-ALS patients

In contrast to the mouse models, the pathological involvement of wild-type SOD1 is highly controversial in SOD1-ALS as well as non-SOD1 ALS patients. While most SOD1-ALS patients express both wild-type and mutant SOD1 proteins, it is difficult to distinguish biochemically and immunohistochemically between wild-type and mutant SOD1 in tissues. In that sense, the involvement of wild-type SOD1 was examined in a SOD1-ALS patient with the G127insTGGG (G127X) mutation; such a truncated G127X-mutant SOD1 can be discriminated from the wild-type protein because of the difference in size and the non-native stretch of five amino acids following Gly127 in the variant [44,45]. Wild-type SOD1 was detected in a detergent-insoluble (0.1% Nonidet P-40-insoluble) fraction of the cervical ventral horn of the G127X patient, although no control patients were examined [45]. Also, G127X patients had aggregates in glial cell nuclei of the spinal cord, some of which were stained with an antibody (Chi 131-153 ab) raised against a peptide sequence absent in G127X-mutant SOD1 (Asn131-Gln153) [46]. Those Chi 131-153 ab-positive aggregates were not stained with a G127X-mutant-specific antibody directed against the non-native C-terminal sequence of five amino acids, suggesting pathological aggregation of wild-type SOD1 that does not co-localize with G127X-mutant proteins. As discussed later, however, even in control patients, significant amounts of wild-type SOD1 were present in the 0.1% Nonidet P-40-insoluble fraction [47]. Also, the same research group has published a paper showing that G127X-mutant but not wild-type SOD1 in the ventral horn of the lumbar spinal cord of a G127X patient was sedimented by density gradient ultracentrifugation [44], implying no involvement of the wild-type protein in the mutant SOD1 aggregates.
Some of the pathogenic full-length as well as truncated mutant SOD1 proteins are known to exhibit electrophoretic mobilities distinct from that of the wild-type protein [48]; therefore, more biochemical analysis of tissue samples from SOD1-ALS patients should reveal any involvement of wild-type SOD1 in the abnormal accumulation of SOD1 proteins in the spinal cord.

Controversies on pathological involvement of wild-type SOD1 in non-SOD1 ALS

Also in non-SOD1 ALS cases, which are much more prevalent than SOD1-ALS, there are sharp controversies over the pathological roles of wild-type SOD1. While few studies have examined the metal-binding and/or disulfide status of wild-type SOD1 in ALS, the lack of such post-translational processes is expected to decrease its enzymatic activity. Indeed, SOD1 activity in brain homogenates of sporadic ALS cases was reported to be decreased [49], but another study found little difference in activity in several parts of the central nervous system between sporadic ALS cases and non-ALS controls [50]. It should be noted that only the activity, not the amount, of SOD1 was compared in those previous reports; therefore, it remains to be concluded whether wild-type SOD1 becomes misfolded and enzymatically inactive under the pathological conditions of ALS.

SOD1 is ubiquitously and highly (10-100 μM) expressed as a soluble protein [51-53] (Human Protein Atlas, http://www.proteinatlas.org) and is diffusely detected in most subcellular compartments, including the cytoplasm [54], mitochondria [55], nucleus [56], and endoplasmic reticulum [57]. Based upon many studies using mouse models as well as purified proteins (e.g. [14,58]), a consensus has been reached on the significantly reduced solubility of SOD1 caused by ALS-linked mutations, which leads to the formation of detergent-insoluble SOD1 aggregates. It should, however, be noted that only a few studies have confirmed solubility changes of SOD1 proteins in spinal cord tissues of ALS patients (even in those of SOD1-ALS patients). Bosco et al. prepared insoluble pellets from spinal cord homogenates in detergent-free lysis buffer, in which comparable levels of SOD1 proteins were detected among a SOD1-ALS case (A4V mutation), four sporadic ALS cases, and four non-neurological controls [59]. No differences were observed in the amount of 0.1% Nonidet P-40-resistant SOD1 between two SOD1-ALS patients with homozygous D90A mutations and two controls [47]. In contrast, when spinal cord homogenates were treated with 0.5% Nonidet P-40, significantly larger amounts of SOD1 were detected in the insoluble fraction of a SOD1-ALS case (A4V mutation) than in those of two familial ALS cases with unknown genetic causes, 12 sporadic ALS cases, and three controls [60]. Significantly larger amounts of SOD1 were also detected in the 1% Nonidet P-40-insoluble pellets from two sporadic ALS cases (a non-SOD1 ALS case and a case with a C9orf72 mutation) as well as two SOD1-ALS cases (A4V and G72C mutations) than in those of three Alzheimer's disease cases and four non-neurological controls [61]. Furthermore, a filter-trap assay using a 0.22 μm cellulose acetate membrane was used to detect SOD1 aggregates in spinal cord homogenates containing Nonidet P-40 and sodium dodecyl sulfate; wild-type SOD1 aggregates trapped on the membrane were significantly increased in the lumbar spinal cord of sporadic ALS cases (4 positive/7 total) compared with control subjects (0 positive/6 total) [42].
It is thus possible that SOD1 proteins form detergent-insoluble aggregates under pathological conditions of ALS even without SOD1 mutations (Fig. 1, left), but more studies will be required before firm conclusions can be drawn.

Given that SOD1 is highly expressed in most intracellular compartments, an immunohistochemical method using anti-SOD1 antibodies may be suitable for detecting pathological changes in wild-type SOD1 only if the protein is densely accumulated in inclusion bodies. Indeed, a subset of Lewy body-like (hyaline) inclusions in the anterior horn cells of 10 out of 20 sporadic ALS patients (albeit without testing for SOD1 mutations) were immunoreactive to anti-SOD1 antibodies, while skein-like inclusions and Bunina bodies were not [62-64]. Also, SOD1-immunoreactive inclusions were discerned against background staining in spinal cord motor neurons of a familial ALS patient without a SOD1 mutation [50]. In another study, however, no SOD1 immunoreactivity was confirmed in the hyaline inclusions of any of the sporadic ALS cases examined (17 cases, again with no mention of SOD1 mutations) [65]. While this sharp discrepancy among studies remains to be solved, different SOD1 antibodies were used for the immunohistochemical analysis: a rabbit or sheep polyclonal antibody raised against a holo form of human SOD1 in the former two studies [66], and a rabbit polyclonal antibody raised against a SOD1 peptide corresponding to Asp124 to Lys136 in the latter [67]. These ideas are challenged by a report showing no SOD1-positive inclusions in non-SOD1 ALS cases with either a rabbit polyclonal anti-SOD1 antibody or a mouse monoclonal anti-SOD1 antibody [68]. Nonetheless, misfolding of SOD1 is well expected to affect epitope availability; therefore, the choice of antibodies is still a key factor in detecting misfolded forms of SOD1 proteins in vivo. Indeed, increasing numbers of studies have examined non-SOD1 ALS cases with conformation-specific antibodies that can discriminate misfolded SOD1 from the natively folded protein in vitro (called misfolded-SOD1 antibodies hereafter).

Immunohistochemical examination of non-SOD1 ALS cases with misfolded-SOD1 antibodies

As summarized in a recent comprehensive paper [69] as well as in an excellent review [70], a number of misfolded-SOD1 antibodies have been used for the examination of sporadic ALS cases, and the results are sharply divided. In this review, we performed an extensive search of previous reports describing immunohistochemical and/or immunofluorescence examinations of human spinal cord tissues with misfolded-SOD1 antibodies, which is summarized in Table 1. As colored cyan in Table 1, some studies have claimed positive immunostaining of spinal cords (motor neurons and glial cells) selectively in sporadic and familial ALS with misfolded-SOD1 antibodies [46,50,59,61,69,73-75]. As reviewed later in detail, a misfolded-SOD1 antibody (α-miSOD1) designed based on an antibody from healthy elderly subjects was also found to stain spinal cords of sporadic as well as familial ALS patients but not of non-neurological controls [71]. In other studies (colored orange in Table 1), however, no difference in the staining pattern was observed between ALS and non-ALS controls [72,74,76-79].
Some of the misfolded-SOD1 antibodies in Table 1 (in particular, those reported from one research group: SEDI, USOD, AJ10, B8H10, 4A1, and A5E5) were found to immunostain spinal motor neurons in SOD1-ALS but not in non-SOD1 ALS, which might simply mean that the misfolded conformations of wild-type SOD1 in non-SOD1 ALS are not the same as those of mutant SOD1 in SOD1-ALS. Immunostaining results using the mouse monoclonal C4F6, 3H1, and 10E11C11 antibodies and a rabbit polyclonal Ra 131-153 antibody have been reported from more than two research groups but still have not reached a consensus on the detection of misfolded SOD1 in non-SOD1 ALS cases (Table 1).

Fig. 1 Schematic representation of possible changes of wild-type SOD1 in ALS. (Left) Natively folded SOD1 binds copper and zinc ions and forms an intramolecular disulfide bond. Pathological conditions might disrupt intracellular metal homeostasis and augment oxidative stress/ER stress, facilitating the formation of misfolded SOD1 even without any disease-causing mutations. Disulfide-crosslinked oligomers and insoluble aggregates of wild-type SOD1 have been detected in spinal cords of sporadic ALS. (Right) SOD1 is known to be constitutively secreted into extracellular fluids such as ISF and CSF, and recently, toxic wild-type SOD1 in abnormally misfolded conformations was detected in the CSF of sporadic ALS. Misfolded SOD1 appears to be cleared by the humoral immune response and/or the glymphatic/intramural peri-arterial drainage systems, and their failure might contribute to the disease.

Table 1 footnotes (excerpt): there is no mention of the non-neurological controls in the paper; (f) in this review, cases with cytoplasmic granular staining, rare round deposits, abundant round deposits, and globular inclusions are counted as misfolded-SOD1 positive, while cases with no signal, sparse diffuse staining, and abundant diffuse staining are counted as misfolded-SOD1 negative; (g) not available (no mention in the paper); (h) the control cases are described as "non-ALS controls"; (i) in the paper, it was described that "no or only weak immunoreactivity was observed in motor neurons of most of the 41 spinal cord tissue samples from NNC patients".

Much effort has been directed at resolving those discrepancies, which could be caused by differences in experimental procedures, including tissue fixation, antigen retrieval, and working concentrations of the primary antibodies [69]. Indeed, antigen retrieval treatments in a citrate buffer with heat (boiling, steaming, microwave) are considered to denature SOD1 proteins, which could efficiently expose the epitope for misfolded-SOD1 antibodies [72], but this appears not to explain the discrepancy in the immunohistochemical detection of misfolded SOD1 (Table 1). In immunohistochemical/immunofluorescence analysis of tissues, the experimental procedures and conditions are often not described in detail; in particular, the working concentration of a primary antibody is usually indicated as a dilution factor rather than a concentration. This prevents a detailed comparison of the previously reported staining results; based upon Table 1, however, a trend can be found that significant dilution of the misfolded-SOD1 antibodies fails to detect non-SOD1 ALS-specific immunostaining. The antibody C4F6 is commercially available from MediMabs, and its concentration was found to be < 0.05 mg/mL in our hands. Ayers et al. [72] and Da Cruz et al. [77] reported the absence of C4F6-positive staining in sporadic ALS cases using the C4F6 antibody from MediMabs at 500-fold and 200-fold dilution, respectively (Table 1), which would correspond to working concentrations of < 0.1 and < 0.25 μg/mL.
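The dilution arithmetic above is easy to verify; a minimal sketch follows, with the 0.05 mg/mL stock value taken as the upper bound quoted in the text (the exact stock concentration used in each study is an assumption here).

```python
# Minimal sketch: converting antibody dilution factors into working
# concentrations, reproducing the figures quoted above. The 0.05 mg/mL stock
# value is the upper bound reported in the text; exact per-study stock
# concentrations are an assumption.

STOCK_MG_PER_ML = 0.05  # upper bound for the C4F6 stock, as stated above

def working_concentration_ug_per_ml(stock_mg_per_ml, dilution_factor):
    """Working concentration in µg/mL after diluting a stock by the given factor."""
    return stock_mg_per_ml * 1000.0 / dilution_factor

for study, dilution in [("Ayers et al. [72]", 500), ("Da Cruz et al. [77]", 200)]:
    conc = working_concentration_ug_per_ml(STOCK_MG_PER_ML, dilution)
    print(f"{study}: 1:{dilution} dilution -> < {conc:.2f} µg/mL")
# Compare with Bosco et al. [59], who used 1.0 µg/mL and did detect staining.
```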
[77] have reported the absence of C4F6-positive staining in sporadic ALS cases using the C4F6 antibody from MediMabs at 500-fold and 200-fold dilutions (Table 1), which would correspond to working concentrations of < 0.1 and < 0.25 μg/mL, respectively. In contrast, Bosco et al. successfully detected C4F6-positive staining with 1.0 μg/mL C4F6 in some sporadic ALS cases but not in non-neurological controls (Table 1) [59]. Also, in the three papers by Grad et al. [61], Pokrishevsky et al. [75], and Da Cruz et al. [77], we have supposed that the antibodies 3H1 and 10E11C11 used for the immunohistochemical examination of misfolded SOD1 originated from the same source (we further assumed the same concentration of the original antibody solution in their studies). Successful detection of misfolded SOD1 in ALS tissues at a lower dilution of the antibodies was reported by Grad [46,50,69,73]. The antibody was then distributed to another research group and used for immunohistochemical examination; however, Ra 131-153-positive immunostaining was observed not only in ALS but also in non-neurological control cases [77], which might be due to an antigen retrieval step using a Tris-EDTA-based solution [69]. Collectively, further investigations with more quantitative, detailed descriptions of the experimental procedures (working concentrations of antibodies, in particular) will definitely be required for evaluating immunohistochemical evidence of misfolded SOD1 proteins in non-SOD1 ALS cases.

Immunoprecipitation from spinal cords of non-SOD1 ALS with misfolded-SOD1 antibodies

Immunohistochemical examinations require several harsh treatments of tissue samples (deparaffinization, antigen retrieval, etc.) that can significantly affect protein conformations; therefore, the presence or absence of misfolded wild-type SOD1 proteins in tissues may not be accurately evaluated. Instead, more accurate evidence of misfolded wild-type SOD1 in ALS could be provided by immunoprecipitation (IP) from unfixed spinal cord homogenates with misfolded-SOD1 antibodies, which is summarized in Table 2. Again, the experimental details required for testing reproducibility were not fully described in most of the papers, and the results were sharply divided. Mutant SOD1 in all SOD1-ALS cases examined was successfully immunoprecipitated with any of the misfolded-SOD1 antibodies listed in Table 2, and wild-type SOD1 in sporadic ALS cases without SOD1 mutations was also immunoprecipitated in the studies by Grad et al. [61] and Paré et al. [69]. In contrast, other studies by Liu et al. [78], Kerman et al. [76], and Da Cruz et al. [77] concluded that no wild-type SOD1 proteins are immunoprecipitated from spinal cords of sporadic ALS cases with misfolded-SOD1 antibodies. Nonetheless, we note that the interpretation of the immunoprecipitation results appears somewhat different among those studies; namely, no SOD1 proteins were observed in immunoprecipitates from sporadic ALS with the SEDI (Liu et al. [78]) and USOD (Kerman et al. [76]) antibodies, while the misfolded-SOD1 antibodies (3H1, 4A1, A5E5) used in the Da Cruz et al. paper did immunoprecipitate SOD1 proteins in sporadic ALS cases but also in non-neurological controls [77]. Using the 3H1 antibody, furthermore, Grad et al. immunoprecipitated wild-type SOD1 from spinal cords of sporadic ALS cases but not from those of non-neurological controls [61].
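For readers comparing the dilution factors quoted above, the conversion between a stock concentration and a working concentration is simple arithmetic. The following sketch reproduces the numbers in the text, assuming the < 0.05 mg/mL stock concentration measured for the commercial C4F6 antibody; the labels are taken from the studies discussed above.

```python
# Convert antibody dilution factors to approximate working concentrations.
# Stock concentration is the upper bound reported in the text (< 0.05 mg/mL).

STOCK_UG_PER_ML = 50.0  # 0.05 mg/mL expressed in ug/mL (upper bound)

dilutions = {
    "Ayers et al. (C4F6, 1:500)": 500,
    "Da Cruz et al. (C4F6, 1:200)": 200,
}

for label, factor in dilutions.items():
    working = STOCK_UG_PER_ML / factor
    print(f"{label}: < {working:.2f} ug/mL working concentration")

# Output:
# Ayers et al. (C4F6, 1:500): < 0.10 ug/mL working concentration
# Da Cruz et al. (C4F6, 1:200): < 0.25 ug/mL working concentration
# Bosco et al. instead used a defined 1.0 ug/mL, i.e. at least 4-10x higher.
```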
Again, it is highly possible that differences in experimental procedures influence the detection of misfolded wild-type SOD1 in sporadic ALS tissues, and many more studies with detailed descriptions of the IP methods are definitely required. It is also important to note that wild-type SOD1 immunopurified with an anti-SOD1 antibody from spinal cord homogenates of sporadic ALS inhibited anterograde but not retrograde fast axonal transport in an assay using isolated squid axoplasm, through a mechanism possibly involving specific activation of p38 MAPK [59]. Such inhibition was no longer observed when the immunopurified SOD1 proteins were first mixed with the misfolded-SOD1 antibody C4F6 and then perfused into squid axoplasm. These results have thus supported toxic and pathogenic roles of misfolded wild-type SOD1 in sporadic ALS (Fig. 1, left).

Misfolded forms of SOD1 in cerebrospinal fluid of ALS

As described, SOD1 is localized mostly in the cytoplasm (Human Protein Atlas, see above), and intraneuronal inclusions containing SOD1 are the pathological hallmark of SOD1-ALS [5]. Many researchers have thus focused on the toxic/conformational properties of SOD1 within cells, even though SOD1 proteins were reported to be present also in the extracellular space owing to their active and constitutive secretion from cells (Fig. 1, upper) [80,81]. Recently, misfolded/aggregated proteins have been considered to propagate between cells, which would contribute to the pathological progression in many neurodegenerative diseases including SOD1-ALS [82][83][84][85][86]. For example, premature motor neuron disease in transgenic mice expressing human SOD1 with the G85R mutation is triggered by inoculation of detergent-resistant fractions of SOD1 from a SOD1-ALS patient (G127Gfs*7) into the lumbar spinal cord [83]. Also, much attention has been paid to the glymphatic system [87] and the intramural peri-arterial drainage pathway [88], by which misfolded/aggregated proteins in the interstitial fluid (ISF) of the brain and spinal cord could be drained into the cerebrospinal fluid (CSF) and then cleared [89]. Regarding SOD1-ALS, indeed, the disease duration of transgenic mice expressing ALS-linked mutant SOD1 was shortened by deletion of aquaporin-4 [90], a water channel playing central roles in extracellular clearance through the glymphatic system [87]. Furthermore, pathologies and amyloid-β accumulation in transgenic mouse models of Alzheimer's disease were aggravated by disrupting meningeal lymphatic vessels, which have been proposed as a drain of macromolecules from ISF and CSF [91]. Therefore, SOD1 proteins that are secreted from neurons and glia and then possibly drained into CSF will be important in understanding the pathology of ALS. Indeed, SOD1 is well known as a constituent of CSF, and the amount of SOD1 in CSF tended to increase as a function of age, albeit with a low correlation coefficient (r² = 0.1-0.2) [92][93][94]. In most studies, total SOD1 levels in CSF do not appear to be significantly different between ALS and neurological/non-neurological controls [92][93][94][95][96]. Alternatively, absolute levels of SOD1 in CSF were reported to show substantial variability among individuals but little variability in each individual over time [97]. In the same study [97], ALS cases and neurological controls were characterized by slightly higher levels of SOD1 in CSF compared to those of healthy controls; however, the amount of SOD1 in CSF did not correlate with the severity of ALS.
In CSF, significant fractions of SOD1 were also reported to be N-terminally truncated, but the amount of such truncated proteins did not differ between ALS and controls, suggesting little pathological role of the truncated SOD1 in ALS [93,95]. In electrophoretic analysis of CSF, furthermore, neither SOD1-positive smears nor high-molecular-weight ladders were observed, indicating that detergent-resistant oligomers/aggregates were not evident in the CSF of ALS [93,95]. Based upon those reports, SOD1 in CSF appears to have no pathological role in ALS. Nonetheless, it is quite notable that, in rats overexpressing wild-type human SOD1, the half-life of the SOD1 protein was significantly longer in CSF (14.9 days) as well as in spinal cord (15.9 days) than in liver and kidney (1.7 and 3.4 days, respectively) [98]. Also in the CSF of human subjects, the turnover of SOD1 was found to be significantly slower (half-life: 25.0 ± 7.4 days) than that of total proteins (half-life: 3.6 ± 1.0 days) [98]. Accordingly, the slow turnover of SOD1 in CSF as well as in spinal cord would allow sufficient time for SOD1 to become misfolded and to contribute to the development of pathological changes. To test whether SOD1 becomes misfolded in the CSF of ALS, CSF samples from 96 ALS cases (57 sporadic ALS, 22 SOD1-ALS, 17 non-SOD1 familial ALS) and 38 neurological controls were examined with sandwich ELISA using misfolded-SOD1 antibodies (Ra 24-39, Ra 57-72, and Ra 111-127) [94]. Signals indicating the presence of misfolded SOD1 were found in all samples, but no significant differences were confirmed between ALS with and without SOD1 mutations, or between the combined ALS cases and the controls [94]. In contrast, by using other types of misfolded-SOD1 antibodies, we recently showed that wild-type SOD1 proteins were misfolded in the CSF of sporadic ALS cases as well as of a SOD1-ALS case [95]. More precisely, sandwich ELISA was performed on CSF from 21 ALS cases (20 sporadic ALS, 1 SOD1-ALS) and 40 controls by using the misfolded-SOD1 antibodies C4F6, UβB, EDI, apoSOD, 24-39, and SOD1int. Among those, C4F6, UβB, EDI, and apoSOD were found to give significantly higher signals in the CSF of ALS cases compared to those of controls; in contrast, no differences were observed with 24-39 and SOD1int. It was also surprising to us that large fractions of SOD1 in the CSF of sporadic ALS cases were immunoprecipitated with the C4F6 antibody [95]. CSF collected from ALS patients has been known to exert toxicity toward the motor neuron-like cell line NSC-34 [99], and we revealed that this toxicity was alleviated by removing the misfolded SOD1 from CSF by immunoprecipitation with the C4F6 antibody [95]. It is also notable that misfolded SOD1 immunoreactive to C4F6 and UβB was observed, albeit in smaller amounts, in the CSF of a subset of patients with Parkinson's disease (PD) and progressive supranuclear palsy (PSP). Therefore, not all types of misfolded-SOD1 antibodies could detect pathological forms of wild-type SOD1 in CSF, but our study suggests that wild-type SOD1 in CSF adopts a misfolded, toxic conformation(s) under pathological conditions of ALS and also in a subset of PD and PSP. In that sense, it is important to note that levels of SOD1 in the CSF of SOD1-ALS patients were reduced by oral medication with pyrimethamine [100].
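The turnover comparison above can be made concrete with first-order clearance kinetics. This is a minimal sketch assuming simple exponential decay (the cited study may use a more elaborate stable-isotope labeling model); it computes the fraction of a protein pool remaining after a given residence time using only the half-lives quoted in the text.

```python
import math

def fraction_remaining(half_life_days: float, t_days: float) -> float:
    """Fraction of an initial protein pool remaining after t days,
    assuming first-order (exponential) clearance."""
    k = math.log(2) / half_life_days  # elimination rate constant, 1/day
    return math.exp(-k * t_days)

# Half-lives quoted in the text for human CSF [98]:
sod1_hl, total_hl = 25.0, 3.6

t = 30.0  # days
print(f"SOD1 remaining after {t:.0f} d: {fraction_remaining(sod1_hl, t):.1%}")
print(f"Bulk CSF protein remaining after {t:.0f} d: {fraction_remaining(total_hl, t):.1%}")
# SOD1 persists at roughly 40-45% after a month, while bulk CSF protein is
# essentially fully replaced (< 1%), illustrating the longer window during
# which SOD1 could accumulate misfolding-promoting modifications.
```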
Misfolding of wild-type SOD1 in the oxidative environment of spinal cord and CSF

Another important issue to be resolved is where SOD1 becomes misfolded; in other words, it remains to be tested whether SOD1 is misfolded in CSF itself, or whether misfolded SOD1 in the affected spinal cord (or some other tissue) is drained into CSF (Fig. 1). As of now, we do not have an answer to this question; nonetheless, one of the notable features observed commonly in the spinal cord and CSF of ALS patients is a significantly elevated level of oxidative markers, as summarized in an excellent review [101]. It is thus plausible that the oxidative environment in the spinal cord/CSF of ALS is important for understanding any pathological changes occurring in SOD1. In accordance with this, we have detected abnormal SOD1 oligomers crosslinked via intermolecular disulfide bonds in the spinal cord of SOD1-ALS cases as well as of transgenic mice expressing human SOD1 with ALS mutations (G37R, G93A, and L126Z) [31,102]. While the disulfide-crosslinked SOD1 oligomers were not evident in the CSF of sporadic ALS cases and a SOD1-ALS case [95], reductant (DTT)-sensitive aggregates of wild-type SOD1 were detected in the affected spinal cord of sporadic ALS cases [42]. Furthermore, Xu et al. suggested the oxidation of Cys111 in SOD1 to a sulfenic acid (-SOH) in the CSF of a subset of sporadic ALS cases [103], and we also found that Cys111 was oxidized to a sulfonic acid (-SO3H) in the CSF of a subset of ALS, PD, PSP, and AD cases [95]. In our in vitro experiments [104], following the sulfenylation of Cys111 in metal-bound SOD1 with H2O2, dissociation of the bound metal ions from the protein was found to allow another free Cys residue (Cys6) to attack the sulfenylated Cys111. SOD1 has a canonical intramolecular disulfide bond between Cys57 and Cys146; therefore, oxidation with H2O2 led to the formation of an abnormal SOD1 species with two intramolecular disulfide bonds (Cys6-Cys111 and Cys57-Cys146), denoted SOD1-2xS-S, which was prone to aggregation and also toxic to motor neuron-like NSC-34 cells [104]. As summarized above, Cys is considered the most susceptible to oxidation among the amino acids and would hence be a key residue for oxidative modifications under pathological conditions. Notably, several other oxidized forms of SOD1 have also been reported in cell lines, transgenic mice, and purified SOD1 proteins. For example, SOD1 proteins with oxidized carbonyl groups were detected in lymphoblasts derived from sporadic ALS with bulbar onset [105]. SOD1 oxidized at a tryptophan residue (Trp32) was found to accumulate in microsomal fractions purified from the spinal cord of transgenic mice expressing wild-type human SOD1 [42] and was also detected in human blood and in blood isolated from transgenic mice expressing wild-type or ALS-linked mutant human SOD1 [106]. Furthermore, several His residues as well as Trp32 are also susceptible to oxidation, which has been proposed to trigger the aggregation of SOD1 in vitro [107][108][109][110]. It remains to be tested, however, whether these His and/or Trp oxidations occur on SOD1 in ALS patients.

Misfolded SOD1 in extracellular fluid as a potential immunotherapeutic target

As reviewed above, the formation of misfolded and plausibly toxic SOD1 species in extracellular fluid is well expected as a pathological change occurring in ALS cases. This could in turn open the way to alleviating the disease by removing such extracellular SOD1 proteins through the humoral immune response.
Indeed, the survival of transgenic mice expressing ALS-linked mutant SOD1 was extended by vaccination with full-length misfolded SOD1 proteins [111,112] and with peptides corresponding to the region available only in misfolded SOD1 [113,114]. Passive immunization with several misfolded-SOD1 antibodies was also reported to be beneficial in SOD1-ALS model mice [112,[115][116][117], with the exception of one study [118]. Furthermore, sera from sporadic ALS patients were found to contain IgM antibodies reacting with misfolded SOD1 (recombinant SOD1 oxidized with 10 mM H2O2), and the sporadic ALS cases with higher levels of these IgM antibodies (n = 153) exhibited a longer survival of 6.4 years than the subjects lacking those antibodies (n = 127) [119]. Notably, Maier et al. screened human memory B cell repertoires from a large cohort of healthy elderly subjects and successfully generated a monoclonal antibody (α-miSOD1) that reacts selectively with misfolded/oxidized SOD1 but not with native SOD1 [71]. Based upon the presence of B cell memory against misfolded SOD1 in a majority of those healthy elderly subjects, Maier et al. suggested that misfolding of SOD1 and the subsequent humoral immune response are frequent events in the elderly [71]. This antibody, α-miSOD1, was found to stain motor neurons in spinal cord samples from ALS, including sporadic as well as familial cases with and without SOD1 mutations, but not from non-neurological controls (Table 1) [71]. Furthermore, intracerebroventricular infusion and also intraperitoneal injections of the α-miSOD1 antibody into transgenic mice expressing ALS-linked mutant human SOD1 (G37R and G93A) delayed the onset of motor symptoms and extended survival [71]. Therefore, clearance of misfolded SOD1 by utilizing the immune system would be a potential treatment for patients with sporadic as well as familial ALS; nonetheless, it should also be noted that, in the sera of sporadic ALS subjects, higher levels of IgG antibodies reacting with normal wild-type SOD1 were associated with a shorter survival of 4.1 years [119]. For successful immunotherapy to treat ALS, it will be critical to develop antibodies specifically recognizing toxic, misfolded SOD1 and/or to design antigens that efficiently elicit such antibodies.

Conclusions

While misfolding of ALS-linked mutant SOD1 has been established as a pathological change occurring in SOD1-ALS, the roles of wild-type SOD1 in the more prevalent non-SOD1 ALS have long been debated. Even in SOD1-ALS, the involvement of wild-type SOD1 in the pathology remains obscure. As reviewed above, we performed an extensive literature search and found that a number of studies support the presence of misfolded wild-type SOD1 in the spinal cord and CSF of non-SOD1 ALS cases (Fig. 1). Nonetheless, not all studies detected misfolded wild-type SOD1 proteins in non-SOD1 ALS, possibly pointing to the importance of experimental conditions in their immunohistochemical and immunochemical detection. Also, some misfolded-SOD1 antibodies gave positive signals in SOD1-ALS but not in non-SOD1 ALS, which may indicate distinct conformations of misfolded SOD1 between SOD1-ALS and non-SOD1 ALS. As we recently reported [95], the CSF of non-SOD1 ALS contained misfolded forms of wild-type SOD1. The misfolded SOD1 in CSF was toxic to cultured cells, but it still needs to be tested whether it is a pathogenic species causing degeneration of motor neurons.
Quite notably, misfolding of SOD1 could occur even in the healthy elderly, and the humoral immune response to the misfolded SOD1 may be a key to preventing ALS. Consistent with the beneficial results of immunization-based treatment in transgenic mouse models, therefore, immunological modulation of misfolded SOD1 in extracellular fluids such as CSF would be a promising strategy to delay the onset and/or relieve the symptoms of ALS.
2023-01-29T14:37:58.122Z
2020-08-19T00:00:00.000
{ "year": 2020, "sha1": "43c21d34ef2c1f1fb0220c4374d629ba4d442abc", "oa_license": "CCBY", "oa_url": "https://translationalneurodegeneration.biomedcentral.com/track/pdf/10.1186/s40035-020-00209-y", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "43c21d34ef2c1f1fb0220c4374d629ba4d442abc", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
224868085
pes2o/s2orc
v3-fos-license
Well-posedness theory for electromagnetic obstacle problems

This paper develops a well-posedness theory for hyperbolic Maxwell obstacle problems, generalizing the result by Duvaut and Lions (1976) [5]. Building on the recently developed result by Yousept (2020) [30], we prove an existence result and study uniqueness through a local H(curl)-regularity analysis with respect to the constraint set. More precisely, every solution is shown to locally satisfy the Maxwell-Ampère equation (resp. Faraday equation) in the region where no obstacle is applied to the electric field (resp. magnetic field). By this property, along with a structural assumption on the feasible set, we are able to localize the obstacle problem to the underlying constraint regions. In particular, the resulting localized problem does not employ the electric test function (resp. magnetic test function) in the area where the L^2-regularity of the rotation of the electric field (resp. magnetic field) is not a priori guaranteed. This localization strategy is the main ingredient of our uniqueness proof. After establishing the well-posedness, we consider the case where the electric permittivity is negligibly small in the electric constraint region and investigate the corresponding eddy current obstacle problem. Invoking the localization strategy, we derive an existence result under an L^2-boundedness assumption for the electric constraint region along with a compatibility assumption on the initial data. The developed theoretical results find applications in electromagnetic shielding.

Introduction

More than four decades ago, Duvaut and Lions [5, Chapter 7, Section 8] proposed and analyzed a (hyperbolic) Maxwell obstacle problem describing the propagation of electromagnetic waves in a polarizable medium with an obstacle constraint on the electric field of the form

|E(x, t)| ≤ d(x) a.e. in Ω × (0, T) (1.1)

for some d : Ω → [0, ∞]. Based on the method of vanishing (curl-curl) viscosity and constraint penalization, they proved a global well-posedness result for the proposed obstacle problem [5, Theorem 8.1]. Some years later, Milani [15,16] extended their theory to the case of a time-dependent upper bound d = d(x, t). See also Miranda and Santos [19] for the non-Hilbertian extension of the electromagnetic antenna problem in [5, Chapter 7]. Maxwell (quasi-)variational inequalities also play a profound role in the mathematical modeling of type-II superconductivity. See Bossavit [2], Prigozhin [22], Barrett and Prigozhin [1], Elliott and Kashima [6], Jochmann [11], Rodrigues and Santos [24,25], Pan [20,21], Yin et al. [29], and our previous works [27,32] for results in this direction. Among the many other contributions to obstacle problems, we refer to the monographs [9,12,23] and the pioneering works by Fichera [7,8], Brézis and Stampacchia [3], and Lions and Stampacchia [13,14]. Quite recently, the author [30] examined the mathematical analysis of Maxwell variational inequalities of the second kind. Based on a local boundedness assumption for the governing subdifferential, [30, Theorem 3.3] proved a global well-posedness result. However, as shown in [30, Example 3.6], the local boundedness assumption fails to hold for indicator functionals.
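To make the indicator-functional setting concrete: for the Duvaut-Lions constraint (1.1), and assuming for simplicity an isotropic permittivity (so that the weighted projection acts pointwise), the feasible set and its Hilbert projection can be written out explicitly. This is a standard computation, not reproduced from the paper itself.

```latex
% Feasible set induced by the obstacle (1.1) and its Hilbert projection
% (isotropic case \epsilon(x) = \epsilon(x) I assumed, so that the weighted
% L^2_\epsilon-projection decouples pointwise).
\[
  K \;=\; \bigl\{ \boldsymbol v \in \boldsymbol L^2_\epsilon(\Omega) \;:\;
          |\boldsymbol v(x)| \le d(x) \ \text{a.e. in } \Omega \bigr\},
\]
\[
  (P_K \boldsymbol v)(x) \;=\;
  \min\!\Bigl\{ 1,\; \frac{d(x)}{|\boldsymbol v(x)|} \Bigr\}\, \boldsymbol v(x)
  \qquad \bigl(\text{with } (P_K \boldsymbol v)(x) := \boldsymbol v(x)
  \text{ if } \boldsymbol v(x) = 0\bigr),
\]
```

a pointwise radial truncation. The indicator functional of such a set is proper, convex, and lower semicontinuous, but its subdifferential fails the local boundedness assumption, which is exactly why the refined existence theory of [30] is needed.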
As a remedy, through the use of the minimal section operator and the Nemytskii operator of the governing subdifferential, a more refined existence result was derived in [30, Theorem 3.11]. Though this result applies to a wider class of nonlinearities, including indicator functionals, it merely affirms the existence of a rather weak solution, which does not necessarily belong to the effective domain of the nonlinearity. More crucially, the uniqueness of the solution is not guaranteed. This paper discusses and explores obstacle problems for Maxwell's equations with a general feasible set structure. Our study is mainly motivated by electromagnetic shielding applications to block or redirect undesired electromagnetic fields in a certain domain of interest by means of barriers (obstacles) made of conductive or magnetic materials. Typical materials used for the barriers in electric shielding include highly conductive sheet metals and metallic alloys, while materials with high magnetic permeability such as ferromagnetic materials are typically used for magnetic obstacles (see [4]). From the mathematical point of view, electromagnetic shielding phenomena fall into the class of obstacle problems: in the free region, the electromagnetic fields satisfy the fundamental Maxwell equations, whereas in the shielded area obstacle constraints are applied to the electric field (electric shielding) or the magnetic field (magnetic shielding). Let Ω ⊂ R^3 be an open set (not necessarily bounded, Lipschitz, or connected) representing an anisotropic medium where the electromagnetic fields are acting. Inside Ω, we consider two open subsets Ω^c_E, Ω^c_H ⊂ Ω. The set Ω^c_E (resp. Ω^c_H) represents the region where an obstacle constraint is imposed on the electric field E (resp. magnetic field H). The free regions for the electric and magnetic fields are denoted, respectively, by Ω_E = Ω \ Ω^c_E and Ω_H = Ω \ Ω^c_H. The exact mathematical formulation of the Maxwell obstacle problem under consideration is presented in (P). The novelties of this paper include existence and uniqueness results for (P) (Theorems 1 and 2) generalizing [5, Theorem 8.1], since [5] solely considers the case (1.1) and Ω^c_H = ∅, i.e., the absence of an obstacle for the magnetic field. Furthermore, our results hold true without the higher regularity assumption on the initial data [5, Eq. (89), p. 370] and without the piecewise constant assumption on the material parameters [5, Eq. (3.19), p. 338]. Differently from [5], our analysis is not based on the method of vanishing viscosity. Here, the proof of the existence result is built on the recently developed result [30, Theorem 3.11]. Along with the existence result, we show that every solution locally satisfies (i) the Maxwell-Ampère equation in the electric free region Ω_E, and (ii) the Faraday equation in the magnetic free region Ω_H. On this basis, the uniqueness analysis is studied. First, in the case where the obstacle constraint is applied only either to the magnetic field H or to the electric field E, i.e., Ω^c_E = ∅ or Ω^c_H = ∅, a uniqueness result is obtained from (i)-(ii) (see Theorem 2). The uniqueness question becomes more challenging if both Ω^c_E and Ω^c_H have positive measure. To tackle this case, we propose structural assumptions on the constraint set (Ω^c_E ∩ Ω^c_H = ∅) and the tangential components across the interfaces between the free and obstacle regions (Assumption 1.1).
From the physical point of view, the proposed separation assumption Ω^c_E ∩ Ω^c_H = ∅ is reasonable since the electric and magnetic fields are coupled to each other by Maxwell's equations. See Fig. 1 and Example 1.1 for an exemplary physical model satisfying Assumption 1.1 related to electromagnetic shielding. Making use of Assumption 1.1, we are able to localize (P) to the constraint regions Ω^c_E and Ω^c_H. In particular, the resulting localized problem does not employ the magnetic test function (resp. electric test function) in Ω^c_E (resp. Ω^c_H), i.e., in the region where the L^2-regularity of the rotation of the corresponding field is not a priori guaranteed. This localization strategy is the central ingredient of our uniqueness proof under Assumption 1.1. The final part of this paper considers the case where the electric permittivity is negligibly small in the electric constraint region Ω^c_E. We investigate the resulting eddy current obstacle problem (1.19), obtained by neglecting ε in Ω^c_E, and derive an existence result for (1.19) (Theorem 3). The proof is based on the above-mentioned localization strategy together with an L^2-boundedness assumption for the constraint set on Ω^c_E and a compatibility assumption on the initial data (Assumption 1.2). For earlier contributions towards eddy current (semistatic) approximations for nonlinear Maxwell's equations, we refer to Milani and Picard [17], Jochmann [10], and Yin [28]. The remainder of this paper is organized as follows. In the upcoming subsection, we introduce our notation and formulate the Maxwell obstacle problem (P) under investigation. The first two main results are Theorems 1 and 2 regarding existence and uniqueness results for the Maxwell obstacle problem (P). The final main result is Theorem 3 concerning an existence result for the eddy current obstacle problem (1.19). The proofs of these three theorems are presented, respectively, in Sections 2, 3, and 4.

Problem formulation and main results

For a given Hilbert space V, we use the notation ‖·‖_V and (·,·)_V for the standard norm and the standard scalar product in V. A bold typeface is used to indicate a three-dimensional vector function or a Hilbert space of three-dimensional vector functions. The electric permittivity ε and the magnetic permeability μ in the medium are matrix-valued functions ε, μ : Ω → R^{3×3}. Moreover, they are assumed to be of class L^∞(Ω)^{3×3}, symmetric, and uniformly positive definite in the sense that there exist positive constants c_ε, c_μ > 0 such that ξ^T ε(x) ξ ≥ c_ε |ξ|^2 and ξ^T μ(x) ξ ≥ c_μ |ξ|^2 for a.e. x ∈ Ω and all ξ ∈ R^3. (1.2) Given a symmetric and uniformly positive definite matrix-valued function α ∈ L^∞(Ω)^{3×3}, let L^2_α(Ω) denote the weighted L^2(Ω)-space endowed with the weighted scalar product (α·, ·)_{L^2(Ω)}. The pivot Hilbert space for our analysis is X := L^2_ε(Ω) × L^2_μ(Ω). For an open set O ⊂ R^3, we write H(curl, O) for the space of all v ∈ L^2(O) with curl v ∈ L^2(O), where the operator curl is understood in the sense of distributions. As usual, C^∞_0(O) stands for the space of all infinitely differentiable three-dimensional vector functions with compact support contained in O. We denote the closure of C^∞_0(O) in H(curl, O) by H_0(curl, O). It is well known that the Hilbert space H_0(curl, O) satisfies the integration-by-parts formula (curl v, w)_{L^2(O)} = (v, curl w)_{L^2(O)} for all v ∈ H_0(curl, O) and w ∈ H(curl, O). In the following, let K ⊂ X be a closed and convex set containing the origin (0, 0) ∈ K. This set denotes the feasible set for the electromagnetic fields E and H. As pointed out in the introduction, the open subsets Ω_E, Ω_H ⊂ Ω represent the free regions for E and H, respectively.
In other words, K satisfies the structural condition (1.6). Having introduced all the required function spaces, let us now formulate the Maxwell obstacle problem under investigation: Let T ∈ (0, ∞). Given initial data (E_0, H_0) ∈ K, find (E, H) ∈ W^{1,∞}((0, T), X) with (E, H)(t) ∈ K for all t ∈ [0, T] satisfying the variational inequality in (P). By (1.5) and (1.3), the variational inequality in (P) can be compactly written as (1.9). Let us point out that Theorem 1 yields a rather weak existence result for (P), which does not necessarily satisfy the prescribed electric boundary condition and the global D(A)-regularity, i.e., (E, H) ∈ L^2((0, T), D(A)) is not guaranteed by Theorem 1. This is the reason why it is difficult to derive a uniqueness result in Theorem 1, since classical energy arguments cannot be directly applied here. Note that Theorem 1 affirms that the electric boundary condition E ∈ L^∞((0, T), H_0(curl)) holds in the case of Ω_H = Ω, i.e., if there is no obstacle constraint imposed on the magnetic field. Of course, if K is assumed to additionally satisfy K ⊂ D(A), then the prescribed electric boundary condition and the global D(A)-regularity are pointwise satisfied. However, the additional assumption K ⊂ D(A) is rather restrictive since our analysis requires that K is closed in X. Note that a closed set in D(A) is not necessarily closed in X. For this reason, we do not focus on the additional restrictive assumption K ⊂ D(A) in our analysis. As pointed out earlier, if the obstacle constraint is applied only either to the magnetic field or to the electric field, i.e., if Ω_E = Ω or Ω_H = Ω, then (1.9) leads to a uniqueness result for (P) (see Theorem 2). For the more general case, we propose the structural Assumption 1.1 on the feasible set. There is no particular physical reason why the magnetic constraint region Ω^c_H ⊂ Ω is chosen to be merely open (not necessarily Lipschitz). This choice is made solely to render our result mathematically more general. On the other hand, the Lipschitz regularity of the domain U_E = U \ Ω^c_E ⊂ Ω_E is required for the application of the extension theorem [10, Appendix] in (3.6). In real applications, both the electric and magnetic constraint regions Ω^c_E, Ω^c_H are typically given by bounded Lipschitz polyhedral domains. We note that the assumption (1.11) is also related to obstacle problems with curl constraints (see Miranda et al. [18] for recent mathematical results on parabolic nonlinear obstacle problems with curl constraints). An example of such a set is given in Example 1.2. As a consequence of Assumption 1.1, if ∂Ω ⊂ ∂Ω^c_H, then the solution to (P) satisfies the magnetic boundary condition, i.e., H ∈ H_0(curl, Ω^c_H). The feasible set K constructed in Example 1.1 (see Fig. 1) is closed, convex, and satisfies (0, 0) ∈ K, (1.6), and Assumption 1.1. In this case, Ω^c_H models a medium with a magnetic shielding property, and Ω^c_E describes an electromagnetic coil with electric insulation on ∂Ω^c_E (for δ ≈ 0). By the above construction, we see that (1.11) is readily satisfied, and K^c is a closed and convex subset of H_0(curl, Ω^c_E) × H_0(curl, Ω^c_H). Moreover, it is also obvious that K ⊂ X is convex and contains (0, 0). Let us now show that K ⊂ X is closed. To this end, let {(v_n, w_n)}_{n=1}^∞ ⊂ K be a strongly converging sequence in X. By the definition of K, and in view of the closedness of K^c, the limit again belongs to K. In conclusion, K ⊂ X is closed, convex, and satisfies (0, 0) ∈ K, (1.6), and Assumption 1.1. Theorem 2 (Uniqueness) is stated under these assumptions. Our final theoretical finding is an existence result for the eddy current approximation to (P) in the case where the electric permittivity is negligibly small in the electric constraint region Ω^c_E.
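Since several display equations were lost in extraction, the following is a reconstruction of the standard setting used in this line of work. The operator A and the form of (P) below are assumptions consistent with, but not copied from, the surrounding text; f denotes an assumed given source term.

```latex
% Reconstructed sketch of the Maxwell operator and problem (P)
% (standard formulation; the exact displays (1.3)-(1.9) were lost).
\[
  \mathcal{A} : D(\mathcal{A}) \subset X \to X, \qquad
  \mathcal{A}(\boldsymbol E, \boldsymbol H)
    := \bigl(\epsilon^{-1}\operatorname{curl} \boldsymbol H,\;
             -\mu^{-1}\operatorname{curl} \boldsymbol E\bigr),
\]
\[
  D(\mathcal{A}) := \boldsymbol H_0(\operatorname{curl}) \times \boldsymbol H(\operatorname{curl}),
\]
% A is skew-adjoint thanks to the integration-by-parts formula for
% H_0(curl). Problem (P) then asks for (E, H) in W^{1,\infty}((0,T), X)
% with (E, H)(t) in K and
\[
  \Bigl( \frac{d}{dt}(\boldsymbol E, \boldsymbol H)(t)
         - \mathcal{A}(\boldsymbol E, \boldsymbol H)(t) - \boldsymbol f(t),\;
         (\boldsymbol v, \boldsymbol w) - (\boldsymbol E, \boldsymbol H)(t)
  \Bigr)_X \ge 0
  \quad \forall (\boldsymbol v, \boldsymbol w) \in K, \ \text{a.e. } t \in (0,T),
\]
\[
  (\boldsymbol E, \boldsymbol H)(0) = (\boldsymbol E_0, \boldsymbol H_0).
\]
```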
As pointed out in the introduction, our result relies on the following additional assumption, formulated with an open set Ω^c_E ⊂ Ω satisfying |∂Ω^c_E| = 0 and Ω_E = Ω \ Ω^c_E. If Assumption 1.1 holds, then the initial data (E_0, H_0) ∈ D(A) ∩ K is assumed to satisfy the compatibility condition stated in Assumption 1.2. Let us remark that the L^2-boundedness assumption (1.14) is reasonable since Ω^c_E is exactly the region where the obstacle constraint is applied to the electric field.

Preliminaries

Let us first prepare by recalling some well-known results. For a nonempty, convex, and closed subset K ⊂ X, let I_K : X → (−∞, ∞] denote the indicator functional of K, i.e., I_K(v, w) = 0 if (v, w) ∈ K and I_K(v, w) = ∞ otherwise. By definition, for every (p, q) ∈ X, the subdifferential ∂I_K(p, q) consists of all (φ, ψ) ∈ X with ((φ, ψ), (v, w) − (p, q))_X ≤ 0 for all (v, w) ∈ K. Furthermore, for every λ > 0, let J_λ : X → X and Λ_λ : X → X denote, respectively, the resolvent and the Yosida approximation of the subdifferential ∂I_K, i.e., J_λ := (Id + λ ∂I_K)^{−1} and Λ_λ := λ^{−1}(Id − J_λ), where Id : X → X denotes the identity operator. Since K ⊂ X is nonempty, convex, and closed, the indicator functional I_K is proper, convex, and lower semicontinuous. As a consequence (see [26, Proposition 1.5, p. 157]), the subdifferential ∂I_K : X → 2^X is m-accretive, which implies that J_λ : X → X is non-expansive, and Λ_λ : X → X is m-accretive and Lipschitz continuous with Lipschitz constant L_λ = λ^{−1} (see [26, Theorem 1.1, p. 161]). We also make use of the Hilbert projection operator onto K, denoted by P_K : X → K; i.e., for every (p, q) ∈ X, P_K(p, q) ∈ K is given by the unique minimizer of the distance to (p, q) over K (1.23).

Lemma 4. Let K ⊂ X be nonempty, convex, and closed. Then it holds that J_λ = P_K for all λ > 0.

Proof of Theorem 1

We split the proof into two parts. 1. Step: Existence for (P). Let {λ_n}_{n=1}^∞ ⊂ (0, ∞) be a null sequence. For every n ∈ N, consider the Yosida-regularized problem (2.1). To this aim, we make use of the fact that A is skew-adjoint and (2.1) to deduce the a priori estimates. By the monotonicity of Λ_{λ_n}, it holds for all t ∈ [0, T], n ∈ N, and (v, w) ∈ D(A) that the corresponding inequality is satisfied. Using again the fact that A is skew-adjoint, we obtain from (2.1) the identity (2.7). On the other hand, from Lemma 4, we know that J_{λ_n} = P_K for all n ∈ N. (2.8) Altogether, due to (2.3), (2.5), (2.7), and (2.8), passing to the limit n → ∞ in (2.6) leads to a limiting inequality. Consequently, as D(A) ⊂ X is dense and (Id − P_K) : X → X is continuous, the inequality extends to all test functions in X. Inserting (v, w) = (E, H)(t) + τ(p, q) with τ > 0 and (p, q) ∈ X, we have ((Id − P_K)((E, H)(t) + τ(p, q)), (p, q))_X ≥ 0. Then, letting τ → 0, we deduce from the continuity of (Id − P_K) : X → X that ((Id − P_K)(E, H)(t), (p, q))_X ≥ 0 for all (p, q) ∈ X, from which it follows that (E, H)(t) = P_K(E, H)(t) ∈ K.

Proof of Theorem 2

We split the proof into three steps. By the local regularity property (3.3), (1.4) implies the localized variational inequality (3.5). Let us underline that in (3.5) the test function w (resp. v) does not appear in the region Ω^c_E (resp. Ω^c_H), i.e., in the region where the L^2-regularity of curl H (resp. curl E) is not guaranteed. This particular structure allows us to deal with the third integral in (3.5). We modify the extended vector field into η. The modified vector field η belongs as well to H(curl). Indeed, as η ∈ H(curl), the distributional definition of the curl operator yields the corresponding identity, from which it follows that η ∈ H(curl) and curl η equals the zero extension of curl η|_{Ω_H} to Ω, where the zero extension operator L^2(Ω_H) → L^2(Ω) is defined in (2.9). As a consequence, since (0, 0) ∈ K, (1.6) ensures that (0, η) ∈ D(A) ∩ K. Therefore, for arbitrarily fixed τ ∈ [0, T) and h ∈ (0, T − τ), we may insert this test function. Since η was chosen arbitrarily, similarly to the proof of Theorem 1 (2. Step), it follows that (3.8) holds.
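The proof of Lemma 4 is not reproduced in the text above; a one-line sketch under the stated assumptions:

```latex
% Why the resolvent of the subdifferential coincides with the projection:
\[
  J_\lambda(p,q)
  = \operatorname*{arg\,min}_{(v,w) \in X}
      \Bigl\{ \tfrac{1}{2}\|(v,w)-(p,q)\|_X^2 + \lambda\, I_K(v,w) \Bigr\}
  = \operatorname*{arg\,min}_{(v,w) \in K}
      \tfrac{1}{2}\|(v,w)-(p,q)\|_X^2
  = P_K(p,q),
\]
% since \lambda I_K = I_K for every \lambda > 0 (the indicator only takes
% the values 0 and +infinity), so the resolvent is independent of \lambda.
```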
More precisely, (3.8) holds for a.e. τ ∈ (0, T) and all η ∈ H(curl, Ω_E) with η|_{Ω^c_H} ∈ H_0(curl, Ω^c_H). In particular, thanks to (3.8), the constructed pair satisfies (v, w) ∈ L^2((0, T), D(A)) and (v, w)(t) ∈ K for all t ∈ [0, T]. Therefore, considering this specific test function in (3.11) and passing to the limit h ↓ 0 yields (3.14). Furthermore, thanks to (3.14), we may insert (y, H_2) as a test function; adding the resulting inequalities gives an estimate valid for a.e. t ∈ (0, T). Multiplying the second equation in (3.16) by h and then integrating the resulting equality over Ω_E ∩ Ω_H, we obtain from the above identity a further estimate for a.e. t ∈ (0, T). Step 3: Uniqueness for (P) in the case of Ω_H = Ω or Ω_E = Ω. We only consider the case Ω_H = Ω; the proof for the case Ω_E = Ω is completely analogous. In view of Ω_H = Ω, we have K = K_E × L^2_μ(Ω) for some closed and convex subset K_E ⊂ L^2_ε(Ω) containing the zero element. Let (E, H) ∈ W^{1,∞}((0, T), X) denote a solution to (P). According to Theorem 1, the Faraday equation (3.17) holds for a.e. t ∈ (0, T). Then, applying (3.17) to (P) leads to a time-averaged variational inequality for the electric field E, with τ ∈ (0, T), h ∈ (0, T − τ), and z ∈ H_0(curl) ∩ K_E; letting h ↓ 0, we get the variational inequality (3.18) for E without the time integration, valid for a.e. t ∈ (0, T) and all z ∈ H_0(curl) ∩ K_E. Suppose that (E_j, H_j) ∈ W^{1,∞}((0, T), X), j = 1, 2, are solutions to (P). We set (e, h) := (E_1 − E_2, H_1 − H_2). Inserting z = E_2(t) (resp. z = E_1(t)) in the variational inequality (3.18) for E = E_1 (resp. E = E_2) and adding the resulting inequalities, we obtain ∫_Ω ε ∂_t e(t) · e(t) − h(t) · curl e(t) dx ≤ 0 for a.e. t ∈ (0, T). On the other hand, inserting w = H_2(t) (resp. w = H_1(t)) in the variational equality (3.17) for H = H_1 (resp. H = H_2) and adding the resulting equalities, we obtain ∫_Ω μ ∂_t h(t) · h(t) dx = −∫_Ω curl e(t) · h(t) dx for a.e. t ∈ (0, T). In conclusion, it holds that d/dt (‖e(t)‖^2_{L^2_ε(Ω)} + ‖h(t)‖^2_{L^2_μ(Ω)}) ≤ 0 for a.e. t ∈ (0, T), which yields that (P) admits at most one solution. This completes the proof. In the following, let (P_n) denote (P) with ε replaced by ε_n. Applying this equality to the previous inequality leads to an estimate carrying the factor 1/(2n), valid for a.e. t ∈ (0, T).

Conclusion

In this paper, we developed an existence and uniqueness theory for the electromagnetic obstacle problem (P). While existence is guaranteed for a general closed and convex set K ⊂ X containing the origin, uniqueness is shown under two different assumptions. The general one is based on a localization strategy, leading to a localized variational inequality (3.15) on the electric and magnetic constraint regions Ω^c_E and Ω^c_H. The established well-posedness result finds applications in the mathematical modeling of electromagnetic shielding. Therefore, it serves as a foundation for the numerical simulation and shape design of electromagnetic shielding materials, which requires a substantial extension of the developed techniques [31,33]. In particular, the numerical analysis of (P) requires a Sobolev regularity property for the electric field of the type E ∈ L^1((0, T), H^s(Ω)) for some s > 0. We aim to investigate this Sobolev regularity issue in future research related to the numerical analysis of (P).
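The energy argument in Step 3 compresses to a single line once the two relations above are added; the following restates that computation in display form, with the weighted norms induced by ε and μ (a reconstruction of the garbled concluding display).

```latex
% Adding the electric inequality and the magnetic equality for the
% differences e = E_1 - E_2 and h = H_1 - H_2:
\[
  \int_\Omega \epsilon\, \partial_t \boldsymbol e \cdot \boldsymbol e
   - \boldsymbol h \cdot \operatorname{curl} \boldsymbol e \, dx \le 0,
  \qquad
  \int_\Omega \mu\, \partial_t \boldsymbol h \cdot \boldsymbol h
   + \operatorname{curl} \boldsymbol e \cdot \boldsymbol h \, dx = 0,
\]
\[
  \Longrightarrow\quad
  \frac{1}{2}\frac{d}{dt}
  \Bigl( \|\boldsymbol e(t)\|_{\boldsymbol L^2_\epsilon(\Omega)}^2
       + \|\boldsymbol h(t)\|_{\boldsymbol L^2_\mu(\Omega)}^2 \Bigr) \le 0
  \quad \text{for a.e. } t \in (0,T),
\]
% which, together with (e, h)(0) = (0, 0), forces (e, h) = (0, 0)
% and hence uniqueness.
```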
2020-06-25T09:07:48.455Z
2020-11-05T00:00:00.000
{ "year": 2020, "sha1": "1a2250d95dff62c676b10e7488e49d02862ad0b0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jde.2020.05.009", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "85a44357a1f801ac5f0675519b05a20422cd075d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
133901948
pes2o/s2orc
v3-fos-license
Strategy for Community Adaptation in Facing Flood Natural Disasters in Pesisir Selatan District, West Sumatra

Community adaptation to flood natural disasters is part of the mitigation of floods, which often occur in the rainy season. The aims of this research are to analyze the landform units and land characteristics that carry flood hazards and the community adaptation strategies for dealing with flood natural disasters. The method used in this research is the survey method, namely collecting data on land characteristics as markers of flooded areas and interviewing local communities about the adaptation strategies undertaken to deal with flood natural disasters. The results showed that the landform units formed by the flood process in the study area are floodplains, back swamps, alluvial terraces, depressions between beach ridges, and alluvial plain complexes. These landform units generally have flat morphometry with slopes ranging from 0 to 2%, and their genesis is due to fluvial and marine processes. The constituent material in this area ranges from mud to coarse sand. The rocks in this area originate from volcanoes and have undergone destruction by river processes, so that they occur as gravel, coarse sand, and fine sand. Soil conditions also vary across the landform units, from developed soils to newly deposited land. The vegetation growing in each landform unit consists of natural vegetation and water-loving vegetation, i.e., vegetation that requires a lot of water for its growth and development. The community strategies for dealing with flood natural disasters are building stilt houses, knowing the times at which tides occur, and opening the river estuary if it becomes covered by sediment.

Introduction

Indonesia is a country with a very complex disaster potential; this is due to Indonesia's geographical location on the equator. Indonesia is also located at the confluence of three large plates of the world, namely the Indo-Australian plate, the Eurasian plate, and the Pacific plate. As a result of its geographical location, Indonesia has the potential for natural disasters in the form of floods, flash floods, droughts, volcanic eruptions, landslides, earthquakes, tsunamis, and forest and land fires (Hermon, 2012). The natural disasters that most often occur during the rainy season are floods, flash floods, and landslides (Su Rito et al., 2011; Anggara et al., 2013; Muh Aris et al., 2014; Oktorie, 2017). The intensity of natural disaster events in the Indonesian region tends to increase from year to year, and losses due to natural disasters tend to increase along with the intensity of their occurrence. Regions that have the potential for natural disasters generally have their own characteristics, which are formed as a result of past natural disasters (K.J.
Gregory et al., 2008; Hermon, 2015). The potential for natural disasters in an area can be identified from the constituent land material, which is characteristic of natural disaster activity that occurred in the past. Most people who live in disaster-prone areas have adaptation patterns that allow them to adjust to the natural disasters that may occur in the future (Nick et al., 2005; ISDR, 2009; Hermon, 2014; Sampei et al., 2016). This community adaptation is generally inherited from their ancestors, who already lived in the area (Jeannette et al., 2006; Hongjian et al., 2016; Hermon, 2016; Hermon, 2017). The development of science and technology in recent years has also led to changes in the adaptation of people living in flood-prone areas. One of the areas in West Sumatra where natural flood disasters often occur is Pesisir Selatan District, located in the southern part of the city of Padang (Hermon, 2010). These floods often occur in plain areas near the coast, causing losses of property and human life, as happened in the Kambang area. The flooding that occurred in this area caused losses in the form of the destruction of several houses, the severing of the Painan-Bengkulu highway, and human casualties. To reduce losses due to natural floods, mitigation measures need to be taken in the form of identifying the landform units and land characteristics with flood potential, as well as the community adaptation strategies for facing floods.

Method

The method used in this study is the survey method, namely taking measurements in the field and conducting interviews with the community. Measurements in the field were carried out to determine the characteristics of land with flood potential, and interview techniques were used to determine the community's adaptation strategies for facing the floods that often occur in the rainy season.

Stages of Research

The research was carried out through several stages, namely the pre-field stage, the field stage, and the post-field stage; the details of the activities are as follows.

Pre-field stage. The activities carried out at this stage were compiling literature related to the research problem from previous research reports and journals on flood natural disasters, preparing the satellite imagery and maps needed for data collection in the field, and determining sample points for field data collection, especially those related to the characteristics of land with flood potential.

Field stage. At this stage, the sample map is matched against the actual conditions in the field, after which samples of the characteristics of land with flood potential are taken. The land characteristics collected in the field include geology, geomorphology, soil, hydrological conditions, and land use. To determine the adaptation strategies for flood natural disasters, interviews were conducted with communities living in flood-prone areas.

Post-field stage. The post-field stage comprises the activities carried out after data collection in the field is complete: classifying, tabulating, analyzing, and interpreting the data to draw conclusions, and producing the resulting maps and research reports.
Data analysis

The analysis of the data used to answer the research questions is as follows: (a) to determine the landform units with flood potential, a geomorphological approach and the interpretation of satellite images are used to delineate the boundaries of landform units; landform units with flood potential can be recognized from their constituent material, vegetation type, and regional morphometry (K.J. Gregory et al., 2008; Ellen et al., 2011); (b) to determine the community's strategies for dealing with flood natural disasters, interviews are conducted with local communities, especially those who live in areas that often experience flooding (Su Rito et al., 2011; Muh Aris et al., 2014).

Results and Discussion

Land characteristics are the features that distinguish one piece of land from another. The characteristics of land in a flood area can be recognized from its constituent material, which consists of material carried by the river and deposited on its left and right sides (Jeannette et al., 2006; Barry et al., 2011; Kai Kai et al., 2018; Putra et al., 2017). Differences in land characteristics can be seen from the landform units found around the river channel. The land characteristics that mark flood areas are shown in Table 1. Based on this table, the land characteristics of flood areas can be recognized from the character of the area. The landform units produced by the flooding process are floodplains, back swamps, alluvial terraces, depressions between beach ridges, and alluvial plain complexes. The floodplain landform unit has a morphometry in the form of a plain located on the left and right sides of the river; this landform unit is formed by the deposition of material carried by floods. The soil on the floodplain has not yet developed and consists of still-fresh coarse sand and gravel. Groundwater conditions are generally good, and the vegetation that grows consists of water-loving plants, i.e., plants that need a lot of water. For more details, see Figure 1. As the figure shows, the floodplain in the study area still carries natural vegetation in the form of grasses and wild plants, indicating that this floodplain is still directly influenced by flood natural disasters. The back swamp landform unit has a morphometry in the form of a basin that is often inundated. The constituent material in the back swamp landform unit is fine-grained, such as sand, fine sand, and mud. The vegetation that grows in the back swamp requires a lot of water, and the land is not good for use as agricultural land. The groundwater potential is relatively large but usually has an acidic pH because the area is always inundated. The alluvial terrace landform unit is formed by vertical river erosion; this occurs because the river bedrock is not hard and is easily cut by river water. The alluvial terrace morphometry takes the form of stepped terrain: the terraces farther from the river channel are the earliest formed, while those closer to the channel are the most recently formed. Old alluvial terraces generally show soil development but still have a shallow soil solum of 20-25 cm, while the young alluvial terraces show no soil development, their material ranging from
fine sand to coarse sand and gravel, with little natural vegetation. The groundwater potential is generally good because the alluvial terrace landform unit is composed of sand and gravel. The depression-between-beach-ridges landform unit is formed by processes originating from the sea. Its morphometry takes the form of depressed land between beach ridges, i.e., a basin, so it has flood potential. The constituent material in this landform unit ranges from mud to fine sand, deposited by rising river water or by seawater at high tide. The hydrological potential of depressions between young beach ridges can be brackish water, whereas in depressions between old beach ridges the water is fresh, though usually with a low (acidic) pH. The vegetation that usually grows in this landform unit includes mangroves. Depressions between old beach ridges can already be used as rice fields because they are no longer influenced by seawater. The alluvial plain complex landform unit is formed by two processes, namely river flow and tidal seawater (Qing et al., 2016; Putra et al., 2013; Tom et al., 2017). As the two water masses meet at the river mouth, overflow results around the mouth. The constituent material in the alluvial plain complex landform unit is mud and fine sand; this is because the two water masses lose their driving energy, so the water appears to stop flowing, resulting in overflow and deposition. Generally, the alluvial plain complex landform units have brackish groundwater potential because of the influence of tidal seawater. For more details, see the Flood Hazard Map of Pesisir Selatan District below. Based on this map, the flood hazard is spread across almost the entire study area, showing that the plains in Pesisir Selatan District are formed under the influence of sedimentation processes from both the land and the sea. The widest distribution of flooding is in Lunang Silaut District. Most of this area is used for oil palm plantations. For more details, see the following tables and figures. Table 2 (Source: Data analysis, 2017).
Based on the table above, the largest distribution of flood hazard is found in Lunang Silaut Subdistrict, covering 64,314.63 ha; most of this area consists of back swamps that have now been used for oil palm plantations. The smallest flood hazard is found in Subdistrict IV Nagari Bayang Utara; this is because this area has a morphometry in the form of mountains with very steep slopes. Based on the image above, it can be seen that Lunang Silaut Subdistrict has the most widespread flood potential; this is because most of this area consists of very large swamps close to the sea. Channels have been made in the swamps of this area to drain swamp water so that the area can be used for oil palm cultivation by both the community and companies. Community adaptation to the dangers of flooding is an action taken by the community to adjust to the flood hazard that often occurs when the rainy season comes. The adaptation of the people in Pesisir Selatan District in the face of flood natural disasters is to build houses on stilts; for more details, see Figures 2 and 3. Based on the figures, residents' houses are raised on stilts to prevent floodwater from entering the house; these were built through the self-help efforts of people living in areas that frequently experience floods. The flood height in this area reaches 240 cm, so to prevent water from entering the house, the stilt houses are built with a height of more than 240 cm. These stilt houses also avoid disrupting the flow of water, so that the water reaches the river mouth faster and the duration of the flood is not too long. People choose stilt houses in flood-hazard areas because flooding in this area can occur up to three times a year, with flood durations generally less than 24 hours; this indicates that floods in this area rise quickly (a short time to peak) and also recede quickly. Flood characteristics in the study area are largely determined by the river flow pattern, which is generally parallel, with rivers running perpendicular to the coast, and the short flood duration is caused by the close proximity of the river mouths. Another community adaptation strategy for dealing with floods is knowing the timing of the sea tides, because flood events often coincide with high tide. At the river mouth, two water masses meet, so that water from the mainland cannot enter the sea and seawater cannot move inland through the river; this is what causes flooding on the plains. Generally, the flooding that occurs around the river mouth is overflow flooding, because the river channel cannot accommodate the existing water. Knowledge of tide times is much needed by the community because some of the people on the plains, especially those near the sea, make their living as fishermen, so knowledge of the tides is also necessary for fishing at sea outside the rainy season. A further community adaptation for overcoming flood disasters is keeping the river estuary from being covered by sediments from the land and sea. Usually, people will check the river mouth when the river water starts to rise; this is done to observe the tidal conditions of the sea and to see whether the estuary mouth is closed by sediment. The community will usually work together to clear the estuary of the sand deposits that cover the river mouth so that the river water flows into the sea faster and the floodwater recedes sooner. This practice was
carried out at the Batang Tarusan estuary by the young people who live in Muaro Pulau Karam. Based on the above description of the community strategies for dealing with floods in Pesisir Selatan District, the following conclusions can be drawn: 1. The landform units with the potential to be affected by floods are floodplains, back swamps, alluvial terraces, depressions between beach ridges, and alluvial plain complexes. 2. The community's strategies for facing floods are building stilt houses in frequently flooded areas, knowing the times of the tides, and working together to open the river mouth if it becomes closed by sediment.

Conclusion

To improve community adaptation strategies for facing flood natural disasters, the following are recommended: 1. increasing people's knowledge, especially in landform units that have the potential to be affected by floods; 2. increasing people's knowledge of the time to peak of floods, so that the community knows how to mitigate them; 3. increasing community participation, especially in training on community adaptation strategies for dealing with flood natural disasters.

Figure 1. Flood Plain in Batang Kapas District (Source: 2017 Research Documentation)
Table 1. Characteristics of Land in Flood Areas
Table 2. Flood Spatial Hazard Distribution in Pesisir Selatan District
2019-04-27T13:13:11.648Z
2018-12-16T00:00:00.000
{ "year": 2018, "sha1": "8d0cde2b87a3f93fd82bf4f981abc0c910091415", "oa_license": "CCBYSA", "oa_url": "http://sjdgge.ppj.unp.ac.id/index.php/Sjdgge/article/download/170/122", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8d0cde2b87a3f93fd82bf4f981abc0c910091415", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
221462081
pes2o/s2orc
v3-fos-license
Endogenous CCN5 Participates in Angiotensin II/TGF-β1 Networking of Cardiac Fibrosis in High Angiotensin II-Induced Hypertensive Heart Failure

Aberrant activation of angiotensin II (Ang II) accelerates hypertensive heart failure (HF); this has drawn worldwide attention. The complex Ang II/transforming growth factor (TGF)-β1 networking comprises central mechanisms underlying pro-fibrotic effects; however, this networking still remains unclear. Cellular communication network 5 (CCN5), known as a secreted matricellular protein, mediates anti-fibrotic activity by inhibiting fibroblast-to-myofibroblast transition and the TGF-β1 signaling pathway. We hypothesized that endogenous CCN5 plays an essential role in TGF-β1/Ang II networking-induced cardiac fibrosis (CF), which accelerates the development of hypertensive HF. This study aimed to investigate the potential role of CCN5 in TGF-β1/Ang II networking-induced CF. Our retrospective clinical study demonstrated that serum CCN5 decreased in hypertensive patients but significantly increased in hypertensive patients taking an oral angiotensin-converting enzyme inhibitor (ACEI). A negative association was observed between CCN5 and Ang II in grade 2 and 3 hypertensive patients receiving ACEI treatment. We further created an experimental model of high Ang II-induced hypertensive HF. CCN5 was downregulated in spontaneously hypertensive rats (SHRs) and increased upon the inhibition of Ang II production by ACEI. This CCN5 downregulation may activate the TGF-β1 signaling pathway, which promotes direct deposition of the extracellular matrix (ECM) and fibroblast-to-myofibroblast transition via activated Smad-3. Double immunofluorescence staining of CCN5 and cell markers of cardiac tissue cell types suggested that CCN5 is mainly expressed in cardiac fibroblasts. Isolated cardiac fibroblasts were exposed to Ang II and transfected with small interfering RNA targeting CCN5. The expression of TGF-β1 together with Col Ia and Col IIIa was further promoted, and alpha-smooth muscle actin (α-SMA) was strongly expressed in the cardiac fibroblasts stimulated with Ang II and siRNA. In our study, we confirmed the anti-fibrotic ability of endogenous CCN5 in high Ang II-induced hypertensive HF. Elevated Ang II levels may decrease CCN5 expression, which subsequently activates TGF-β1 and finally promotes direct deposition of the ECM and fibroblast-to-myofibroblast transition via Smad-3 activation. CCN5 may serve as a potential biomarker for estimating CF in hypertensive patients. A novel therapeutic target should be developed for stimulating endogenous CCN5 production.

INTRODUCTION

Cardiovascular disease is the leading cause of death, accounting for 17.7 million of the 55 million deaths worldwide in 2017. Hypertension is the main risk factor for cardiovascular disease and may lead to increased morbidity from coronary artery disease, heart failure (HF), and myocardial infarction. The worldwide prevalence of hypertension and its associated complications, especially HF secondary to hypertension, have drawn attention (Dagenais et al., 2019). Long-term high blood pressure (BP) may promote pathological deterioration of cardiac structure and function, leading to left ventricular (LV) hypertrophy and cardiac fibrosis (CF). These irreversible cardiac remodeling responses often culminate in HF (Lai et al., 2019).
Over-activation of the renin-angiotensin-aldosterone system (RAAS), the major cause of juvenile hypertension, is often characterized by aberrant activation of angiotensin II (Ang II). Over-expression of Ang II affects regulation of high BP and CF, eventually leading to HF (Berk et al., 2007; Singh and Karnik, 2019). In this high Ang II-induced hypertensive HF, the Ang II type 1 receptor, bound to Ang II, may activate transforming growth factor-β1 (TGF-β1), which subsequently promotes deposition of the extracellular matrix (ECM) and sensitizes fibroblast-to-myofibroblast transition (Nagpal et al., 2016). Downregulation of Ang II expression by blocking the conversion of angiotensin I (Ang I) to Ang II using an angiotensin-converting enzyme inhibitor (ACEI) prevents cardiac function deterioration from HF in hypertensive patients (Zhang et al., 2019). In clinical practice, ACEI carries a high recommendation level in the treatment of high Ang II-induced hypertensive HF (Yancy et al., 2017). The cellular communication network (CCN) family, known as a group of matricellular proteins, has been described with varied cell functions in regulating fibrosis, angiogenesis, cell differentiation, and wound repair (Xu et al., 2015; Jeong et al., 2016). Several members of the CCN family play essential roles in the development of pressure overload-induced myocardial fibrosis. CCN2 (cellular communication network 2), also called connective tissue growth factor, is a pro-fibrotic mediator in the development of CF, which can be induced by TGF-β1 in cardiac fibroblasts and cardiomyocytes (Ye et al., 2019). Besides these pro-fibrotic effects of CCN2, the anti-fibrotic potential of cellular communication network 5 (CCN5, Wisp-2) has also been reported (Grunberg et al., 2018). As secreted proteins, CCN2 and CCN5 play opposing roles in the development of CF. The possible mechanisms underlying the anti-fibrotic effects involve blocking fibroblast-to-myofibroblast transition, endothelial-mesenchymal transition, and the TGF-β1 signaling pathway (Jeong et al., 2016). Although several studies have reported the anti-fibrotic effects of exogenous CCN5 in HF, the roles of endogenous CCN5 in high Ang II-induced hypertensive HF still remain unclear (Yoon et al., 2010; Jeong et al., 2016). We hypothesized that endogenous CCN5 plays an essential role in TGF-β1/Ang II networking-induced CF, which accelerates the development of hypertensive HF. We aimed to investigate the potential role of CCN5 in TGF-β1/Ang II networking-induced CF. METHODS AND MATERIALS For expanded and detailed information about the human study population, reagents, BP measurement, echocardiography, histopathology, ELISA estimation, reverse transcription and real-time quantitative polymerase chain reaction, protein extraction, Western blotting, neonatal rat cardiomyocyte and cardiac fibroblast culture, small interfering RNA transfection, and immunofluorescence assay, please refer to the Supplementary Materials. Human Study Population All protocols were approved by the Ethical Committee Board of Tianjin Union Medical Center, and all subjects provided informed consent. Animal Model of Hypertensive Heart Failure Spontaneously hypertensive rats (SHRs), obtained by selective inbreeding of Wistar-Kyoto rats (WKYs; Vital River, Beijing, China) with a genetic basis for high BP, were selected for mimicking hypertension. Normotensive WKYs (Vital River, Beijing, China) were chosen as the negative controls.
Twenty-four 13-week-old SHRs (weight 200 ± 20 g) were equally divided into the model group or the enalapril group based on whether enalapril treatment [SFDA approval number H20170298, 5 mg/tablet, Merck Sharp & Dohme (Australia) Pty. Ltd] was given or not. Additionally, the control group comprised 13-week-old WKYs (n = 12). The animals were housed in a 12-h light/dark room and given free access to tap water and chow feed under laboratory conditions. The experiment proceeded after acclimatization in an on-site facility for 1 week. The enalapril group animals were administered enalapril [1.05 mg·(kg·day)−1] orally in 1 ml of distilled water; accordingly, animals in the control and model groups were administered 1 ml of distilled water using a disposable plastic syringe. Statistical Analysis Data were analyzed using SPSS version 17.0 (SPSS Inc., Chicago, USA). Discrete variables were expressed as numbers and percentages. Continuous variables were expressed as mean ± SD or as median with interquartile range (Q25-Q75), depending on normality. Normality of continuous variables was assessed with the Kolmogorov-Smirnov test. Categorical variables were analyzed using chi-square tests. To compare groups, we used the Mann-Whitney U-test followed by Tukey's multiple comparison tests, or Kruskal-Wallis tests, to analyze non-normally distributed continuous variables. Spearman correlation analysis was performed for variables not conforming to normality. In all analyses, statistical significance was accepted at P < 0.05. Demographic Characteristics A total of 380 hypertensive patients and 39 normotensive subjects (control) were enrolled into this study. All characteristics of hypertensive patients across BP categories [grade 1 (n = 50), mean BP 149.76/87.84 mmHg; grade 2 (n = 110), mean BP 163.69/94.22 mmHg; grade 3 (n = 220), mean BP 190.21/104.03 mmHg] were demonstrated. There were no differences in age or history of hypertension between the hypertensive patients and the normotensive subjects (P > 0.05) (Supplemental Table 1). Compared to normotensive subjects, the heart rate levels of all hypertensive patients increased significantly (P < 0.05). The distribution of sex, drinking, and smoking was comparable between the hypertensive patients and normotensive subjects (P > 0.05). Additionally, no statistical difference was demonstrated in lipid metabolism and renal function between the hypertension and control groups (P > 0.05). Although the median body mass index (BMI) increased gradually with hypertension grade, no significant difference was demonstrated among the sub-groups (P > 0.05). Cardiac structural deterioration was observed in the hypertensive patients, especially in patients with grade 3 hypertension [left atrium (LA), LV, left ventricle posterior wall (LVPW), interventricular septum (IVS), P < 0.001]. Cardiac systolic function [left ventricular ejection fraction (LVEF): P > 0.05] showed no difference among the sub-groups. However, the E/A ratio, an echocardiographic index for evaluating diastolic dysfunction, decreased significantly with increasing hypertension grade (P < 0.05). To indirectly evaluate the extent of CF, we calculated the left ventricular mass index (LVMI) value for each subject. The mean LVMI level increased significantly with increasing hypertension grade (P < 0.01). In contrast, CCN5 levels decreased with increasing BP (grade 1: median 344.17 pg/ml, grade 2: median 284.45 pg/ml, grade 3: median 224.01 pg/ml, P < 0.05) (Figure 1C).
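To make the statistical workflow just described concrete (normality screening, then non-parametric group comparison and rank correlation), here is a minimal sketch in Python with SciPy; the arrays are placeholder data, not values from this study, and all variable names are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ccn5_by_grade = {                        # hypothetical serum CCN5 (pg/ml)
    "grade1": rng.normal(344, 60, 50),
    "grade2": rng.normal(284, 60, 110),
    "grade3": rng.normal(224, 60, 220),
}

# Kolmogorov-Smirnov normality check per group (as in the Methods)
for grade, values in ccn5_by_grade.items():
    ks = stats.kstest(values, "norm", args=(values.mean(), values.std()))
    print(grade, "median:", round(float(np.median(values)), 1),
          "KS p:", round(ks.pvalue, 3))

# Kruskal-Wallis across the three grades, then a pairwise Mann-Whitney U
h, p_kw = stats.kruskal(*ccn5_by_grade.values())
u, p_mw = stats.mannwhitneyu(ccn5_by_grade["grade1"], ccn5_by_grade["grade3"])
print("Kruskal-Wallis p:", p_kw, "| grade1 vs grade3 Mann-Whitney p:", p_mw)

# Spearman rank correlation, as used for the Ang II vs CCN5 association
ang_ii = rng.normal(70, 15, 220)         # hypothetical Ang II (pg/ml)
ccn5 = 400 - 1.5 * ang_ii + rng.normal(0, 30, 220)
r, p_sp = stats.spearmanr(ang_ii, ccn5)
print(f"Spearman r = {r:.3f}, P = {p_sp:.3g}")   # expect a negative r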
ACEI, which inhibits the conversion of Ang I to Ang II, could attenuate the expression of Ang II. CCN5 can be secreted into circulating blood from multiple vital organs including the heart, lung, and adipose tissue. To evaluate the effects of lung and adipose tissue on serum CCN5 levels, the expression of CCN5 in the lung and adipose tissue was determined via Western blotting assay. No differences were shown in the expression of CCN5 between WKY and SHR in the lung and adipose tissue.

[Figure 1: Ang II, CCN2, and CCN5 (C, F) concentrations at different blood pressure levels among hypertensive patients with or without ACEI; data shown as median with interquartile range (boxplots, whiskers ± 1.5 IQR). *P < 0.05 vs. control, #P < 0.05 vs. grade 1, $P < 0.05 vs. grade 2, †P < 0.05 vs. no ACEI.]

However, the expression of CCN5 in adipose tissue was higher than that in the lung (WKY: P < 0.05, SHR: P < 0.05). This suggested that the change in serum CCN5 levels was mainly affected by production and secretion from the heart in normotensive and hypertensive subjects (Figure S1). Moreover, we explored whether the downregulation of Ang II could affect the serum CCN2 or CCN5 levels in hypertensive patients (Figures 1D-F). We cataloged all hypertensive patients according to ACEI usage. Serum Ang II levels were decreased after ACEI treatment in hypertensive patients (P < 0.05). Furthermore, we found that hypertensive patients using ACEI had lower CCN2 and elevated CCN5 levels (P < 0.05). Association Between Ang II and CCN2/CCN5 Spearman analysis was performed to evaluate whether serum CCN2 and CCN5 are related to serum Ang II. To investigate whether downregulating Ang II had an influence on this association, all hypertensive patients were divided into two sub-groups according to the usage of ACEI (Figures 2A-D). Serum CCN2 levels (r = 0.286, P < 0.01) and serum CCN5 levels (r = −0.347, P < 0.01) correlated with serum Ang II levels in hypertensive patients without ACEI treatment. Coincidently, these Spearman rank relationships were further enhanced in hypertensive patients using ACEI (CCN2: r = 0.340, P < 0.01; CCN5: r = −0.406, P < 0.01). Afterward, we investigated this association between Ang II and CCN5 further, from grade 1 to grade 3 hypertensive patients, with or without ACEI. These results demonstrated that a negative association between Ang II and CCN5 was found in grade 2 and grade 3 hypertensive patients treated with ACEI (grade 2: r = −0.544, P < 0.01; grade 3: r = −0.401, P < 0.001) (Figures 2E, F). Characterization of High Ang II-Induced Hypertensive Heart Failure After 14 weeks, the SHR model group exhibited a higher expression of Ang II in both serum and myocardial tissue than the WKYs of the control group. Moreover, the Ang II expression in the serum and myocardial tissue could be downregulated using enalapril (Figures 3A-C). Over the 14-week observation period, both SBP and DBP of the model group (SHR) were increased markedly compared to those of the control group (WKY) (Figure 3D). Enalapril decreased systolic blood pressure (SBP) and diastolic blood pressure (DBP) levels immediately. There was no difference in the ratio of heart to body weight between the model and control groups until week 28 (Figure 3E).
Echocardiography was performed at 24 and 28 weeks to determine the success of the experimental model of hypertensive HF (Figures 3F, G). LV mass, one of the essential cardiac hypertrophic indices, was increased in the model group, with decreased LVEF and fractional shortening (FS) values at 24 weeks. The cardioprotective effects of enalapril from inhibiting Ang II were not apparent until week 28. Enalapril protected LVEF, FS, and LV mass from deterioration. Enalapril could ameliorate cardiac dysfunction in high Ang II-induced HF, but this protective effect depended on the persistent inhibition of Ang II. Subsequently, serum BNP and sST2 (soluble suppression of tumorigenicity-2) levels were detected, indicating that enalapril could attenuate high Ang II-induced hypertensive HF (Figures 3H, I). After evaluation of cardiac structure and function, our results indicated that long-term stimuli of high BP could induce CF, hypertrophy, and even severe HF. We further evaluated the morphological changes of the heart tissue. Myocyte hypertrophy occurred significantly in the model group from week 24; however, this hypertrophy was reversed by enalapril at week 28 (Figure S2). Myocardial fibrosis participated in the entire cardiac hypertrophy process. Collagen deposition occurred in the model group at week 24 and further worsened at week 28. Enalapril could effectively attenuate this collagen deposition in SHRs at 24 and 28 weeks (Figures 4A, B). These results coincided with the expression of collagen Ia and collagen IIIa (Figure 4C). A previous study demonstrated that CCN5 could block the TGF-β1 signaling pathway and fibroblast-to-myofibroblast transition (Jeong et al., 2016). In our study, we found that TGF-β1 and alpha-smooth muscle actin (α-SMA) were also elevated, which indicated that pro-fibrotic pathways and fibroblast-to-myofibroblast transition were activated in the model group (Figure 4D). These results suggested that endogenous CCN5 might create a link between Ang II and TGF-β1 and α-SMA. Thereafter, we investigated the interaction of CCN5 and the Ang II-TGF-β1 signaling axis in fibrotic pathways. The expression of myocardial CCN5 was significantly reduced in the model group and, conversely, increased after inhibition of Ang II by enalapril (Figure 4D). The results of Western blotting analysis also revealed decreased endogenous CCN5 and an activated TGF-β1 signaling pathway and fibroblast-to-myofibroblast transition (Figures 5A-H). To confirm the main source of CCN5 in cardiac tissues, we isolated rat neonatal cardiomyocytes (CMs) and cardiac fibroblasts from neonatal WKYs. We then performed double immunofluorescence staining of CCN5 and cell markers (TnI or vimentin) of CMs and cardiac fibroblasts. CCN5 was mainly expressed in cardiac fibroblasts (P < 0.05 vs. CMs) (Figure 6A). Additionally, we performed double immunofluorescence staining of CCN5 and CD31 (a cell marker of cardiac endothelial cells). No clear co-localization was found between CCN5 and CD31 in cardiac tissues, demonstrating that CCN5 may not be mainly expressed in cardiac endothelial cells (Figure 6B). To evaluate the essential role of endogenous CCN5 in Ang II-induced pro-fibrotic pathophysiology, the isolated cardiac fibroblasts were exposed to Ang II (0.1 mM). siRNA targeting CCN5 was synthesized and transfected into the cardiac fibroblasts to suppress CCN5 expression.
After stimulation with Ang II, CCN5 expression was significantly down-regulated in the cardiac fibroblasts, and this downregulation was further enhanced on using siRNA (*P < 0.05 vs. control; #P < 0.05 vs. Ang II/scrambled siRNA) (Figure 7A). The expression of TGF-β1, Col Ia, and Col IIIa was also upregulated on use of siRNA. These results confirmed that Ang II promoted TGF-β1-induced CF by down-regulating the expression of CCN5. We investigated the effects of downregulation of CCN5 on fibroblast-to-myofibroblast transition. siRNA could significantly promote expression of α-SMA in the cardiac fibroblasts, suggesting that fibroblast-to-myofibroblast transition was enhanced by downregulation of CCN5 expression (Figure 7B). DISCUSSION In this study, we demonstrated that CCN5 downregulation might be closely related to Ang II expression in hypertensive HF. CCN5 expression could be elevated by inhibiting Ang II, which provided a cardioprotective effect in hypertension-induced HF. Serum CCN5, CCN2, and Ang II concentrations were compared between hypertensive patients and healthy controls, and we further investigated the association between Ang II and the matricellular proteins CCN5 and CCN2. Using our experimental model of high Ang II-induced hypertensive HF, with elevated expression of Ang II in both serum and myocardial tissue, we evaluated whether downregulation of CCN5 could affect cardiac structure, function, and myocardial fibrosis. Moreover, we elucidated the indispensable role of endogenous CCN5 in high Ang II-induced hypertensive HF. Our clinical results demonstrated that serum CCN5 levels decreased significantly with increasing severity and history of high BP in hypertensive patients. Additionally, a negative association was described between serum Ang II and CCN5, especially in grade 2 and 3 hypertensive patients using oral ACEI regularly. Our rat model of essential hypertensive HF revealed a significant decrease of CCN5 in high Ang II-induced hypertensive HF. Expression of CCN5 was upregulated after ACEI treatment, which further reversed myocardial fibrosis and protected heart function via inhibition of TGF-β1 signaling and fibroblast-to-myofibroblast transition. CCN5 has multiple biological functions (Bornstein and Sage, 2002). Unlike the other CCN family proteins, CCN5 specifically lacks a cysteine-rich carboxyl-terminal repeat domain, suggesting that it may be an alternative regulator of other CCN family proteins. In comparison with CCN2, which shows prominent pro-fibrotic activity in cardiac remodeling, CCN5 is best characterized by anti-hypertrophic and anti-fibrotic effects in the heart (Jeong et al., 2016; Ye et al., 2019). Besides cardiac tissue, lung and adipose tissue show high CCN5 expression (Hammarstedt et al., 2013; Fiaturi et al., 2018; Grunberg et al., 2018). To exclude interference with serum CCN5 from CCN5 secreted by lung and adipose tissue, we compared the expression of CCN5 in lung and adipose tissue between WKY and SHR. The results suggested that CCN5 expression in lung and adipose tissue might not cause the differential expression of CCN5 in the serum. Therefore, we assessed serum CCN5 levels to predict the expression of CCN5 in cardiac tissue. To the best of our knowledge, the protective effects of CCN5 in the heart have so far been verified via supplementation of exogenous CCN5.
However, whether expression of endogenous CCN5 could be modulated by exogenous stimuli and play an indispensable role in anti-hypertrophic and anti-fibrotic activity remained unclear. Ang II is known as the primary regulatory factor in a series of RAAS-induced physiological and pathophysiological actions, participating in homeostatic control of arterial pressure, tissue perfusion, and extracellular volume (Park et al., 2019). Active Ang II is converted by ACE from Ang I, which is itself cleaved from angiotensinogen by renin. High Ang II expression induced by RAAS over-activation contributes to the pathophysiology of diseases such as hypertension and hypertensive HF (Rosenkranz, 2004). Our results demonstrated that Ang II increased gradually in hypertensive patients with increasing BP levels. Additionally, we created an experimental model of hypertensive HF with increased Ang II in the serum and myocardial tissue. These results implied that our experimental model mimicked high Ang II-induced hypertensive HF. To verify the essential role of CCN5 in hypertension, we first detected decreased CCN5 levels but elevated CCN2 levels in all hypertensive patients. A previous study indicated opposite effects of CCN2 and CCN5 on the regulation of CF, which is consistent with our results (Jeong et al., 2016). Through the comparison of CCN5 levels in hypertensive patients with and without ACEI treatment, and further association analysis, we found that patients with higher Ang II levels had lower concentrations of CCN5, which suggested that an interaction might exist between Ang II and CCN5. In our experimental model, we found that CCN5 was mainly expressed in cardiac fibroblasts, not cardiomyocytes or cardiac endothelial cells. High Ang II expression could downregulate CCN5 expression to promote CF and deteriorate cardiac systolic and diastolic functions. Meanwhile, Ang II can be attenuated using ACEI, followed by amelioration of myocardial fibrosis and cardiac function. During this process, we detected activated TGF-β1, which promoted both direct deposition of ECM and fibroblast-to-myofibroblast transition via activated Smad-3 (Nagpal et al., 2016). Fibroblasts within a healthy working heart control the secretion and maintenance of ECM components and, more importantly, regulate the transmission of mechanical and electrical stimuli (Santiago et al., 2010). In hypertension-induced cardiac hypertrophy and fibrosis, the phenotype conversion of cardiac fibroblast to myofibroblast is a critical event that could precipitate HF (Czubryt, 2019). With elevation of BP, cardiac fibroblasts become over-activated and convert to myofibroblasts in response to pressure overload. During this process, several key phenotypic markers of the myofibroblast are recognized, including α-SMA and periostin (Bagchi et al., 2016). In this study, high α-SMA expression was found in the myocardial tissue of SHRs, as opposed to the decreased expression after downregulation of Ang II using ACEI. After inhibiting the expression of CCN5 in the cardiac fibroblasts, we found a significant increase of α-SMA in the cardiac fibroblasts. These results demonstrated that Ang II might promote the phenotype conversion of cardiac fibroblast to myofibroblast by directly inhibiting the expression of CCN5. Although the fibroblast-to-myofibroblast conversion has been described previously, the signaling mechanisms governing this conversion have not yet been clearly elucidated.
Phenotypic fibroblast-to-myofibroblast conversion can be induced by mechanical tension or TGF-β1 stimuli, which aids quick pathological ECM remodeling (Roche et al., 2015). The TGF-β1-Smad signaling pathway, arguably one of the most potent inductive mechanisms, is involved in this process (Roche et al., 2015). Healthy myocardial tissue is devoid of myofibroblasts, but myofibroblasts become abundant after exposure to several stimulating factors, which promotes hypersecretion of ECM components such as collagen type I, periostin, and fibronectin (Tomasek et al., 2002). Excessive ECM components produced by myofibroblasts accelerate CF and even HF. Taken together, the results of this study showed that serum CCN5 was reduced significantly in hypertensive patients and increased in hypertensive patients using ACEI. The negative association between CCN5 and Ang II in the serum indicated that Ang II interacted with CCN5. Our experimental model of high Ang II-induced hypertensive HF revealed that CCN5 was downregulated in the high Ang II SHR and increased via inhibition of Ang II production by ACEI treatment. This downregulation of CCN5 activates TGF-β1, which promotes direct deposition of ECM and fibroblast-to-myofibroblast transition via activated Smad-3. The current study highlights the essential role of endogenous CCN5 in CF. CCN5 participates in the Ang II/TGF-β1 networking. However, this networking is a vast and complicated process; endogenous CCN5 may interact with multiple signaling factors, including matrix metalloproteases, within the Ang II/TGF-β1 networking (Jeong et al., 2016). This study cannot cover all of the biological functions of endogenous CCN5 within this networking. Further work will focus on the crucial role of endogenous CCN5 in degradation of the ECM. In summary, we verify the essential role of endogenous CCN5 in high Ang II-induced hypertensive HF. Elevated Ang II inhibits CCN5 expression, which subsequently activates TGF-β1 and finally promotes direct deposition of ECM and fibroblast-to-myofibroblast transition via Smad-3 activation. CCN5 can be used as a potential biomarker for estimating CF in hypertensive patients. A novel therapeutic target can be developed for stimulating endogenous CCN5 production. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Tianjin Union Medical Center. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by Tianjin Union Medical Center. AUTHOR CONTRIBUTIONS AH designed and completed the experiments, analyzed the data, and drafted the manuscript. HL analyzed the data, collected the clinical data, and analyzed the clinical data. CZ and WC completed the experiments and collected the clinical data. LW revised the draft. XQ conceived this study and finalized the manuscript. All authors contributed to the article and approved the submitted version. FUNDING The study was supported by the major projects of the Science and Technology Committee of Tianjin (grant number 16ZXMJSY00060); the Tianjin Health Bureau Key Project Fund (grant number 16KG155); and the Science and Technology Project of Tianjin Union Medical Center (grant number 2019YJZD001).
2020-09-03T09:14:26.173Z
2020-09-03T00:00:00.000
{ "year": 2020, "sha1": "b89c886f84db62b74ccc02be0059e2f3b1c2fc83", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2020.01235/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6b81fb14918de0ae9772706ba3b57d63ac84a45c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1223720
pes2o/s2orc
v3-fos-license
The P2X7 Receptor Supports Both Life and Death in Fibrogenic Pancreatic Stellate Cells The pancreatic stellate cells (PSCs) have complex roles in pancreas, including tissue repair and fibrosis. PSCs surround ATP-releasing exocrine cells, but little is known about purinergic receptors and their function in PSCs. Our aim was to resolve whether PSCs express the multifunctional P2X7 receptor and elucidate how it regulates PSC viability. The number of PSCs isolated from wild type (WT) mice was 50% higher than that from the Pfizer P2X7 receptor knock out (KO) mice. The P2X7 receptor protein and mRNA of all known isoforms were expressed in WT PSCs, while KO PSCs only expressed truncated versions of the receptor. In culture, the proliferation rate of the KO PSCs was significantly lower. Inclusion of apyrase reduced the proliferation rate in both WT and KO PSCs, indicating the importance of endogenous ATP. Exogenous ATP had a two-sided effect. Proliferation of both WT and KO cells was stimulated by ATP in a concentration-dependent manner with a maximum effect at 100 µM. At a high ATP concentration (5 mM), WT PSCs, but not KO PSCs, died. The intracellular Ca2+ signals and proliferation rate induced by micromolar ATP concentrations were inhibited by the allosteric P2X7 receptor inhibitor az10606120. The P2X7 receptor-pore inhibitor A438079 partially prevented cell death induced by millimolar ATP concentrations. This study shows that ATP and P2X7 receptors are important regulators of PSC proliferation and death, and therefore might be potential targets for treatments of pancreatic fibrosis and cancer. Introduction ATP is an extracellular signal that stimulates purinergic receptors in many different tissues. In pancreas, ATP is released from acinar cells, pancreatic duct cells and β-cells [1][2][3]. In 1998, a novel cell type was discovered in pancreas, namely the pancreatic stellate cell, PSC [4,5]. The importance of PSC function in pancreas is becoming apparent, especially in the context of pancreatic diseases such as chronic pancreatitis and pancreatic cancer [6]. Little is known about PSC physiology and the role of purinergic signaling in these cells. PSCs have a mixed phenotype and a protein expression profile overlapping with several different cell types. They express α-smooth muscle actin (α-SMA), which is typically expressed in fibroblasts that are able to contract, and glial fibrillary acidic protein (GFAP), an intermediate filament protein of astrocytes. These proteins are therefore not specific to PSCs; however, their combination, together with vitamin A-rich lipid granules in freshly isolated cells, constitutes a specific marker set for PSCs [4]. Similar stellate cells are found in many tissues in the body, and the best characterized are the cells originating from the liver, named hepatic stellate cells [7]. In a healthy pancreas, PSCs are inactive and surround predominantly acinar cells. Only a few PSCs are found around ducts [8]. Upon pancreatic damage, metabolic stress and pancreatic cancer, PSCs become activated by growth factors/cytokines released from the neighboring cells [9,10]. The activated PSCs then participate in wound healing. Subsequently, they either retreat via apoptosis or remain continuously activated. The latter scenario gives rise to pancreatic fibrosis [10,11]. There are two main families of purinergic receptors for ATP: the P2Y receptor family of G-protein coupled receptors and the P2X receptor family of ligand-gated ion channels.
The P2X receptors are annotated P2X1-P2X7 [12]. One of the most multifaceted receptors is the P2X7 receptor, which has a large intracellular C-terminus and forms a cation channel at micromolar ATP concentrations. At higher concentrations of ATP, in the millimolar range, the receptor can open as a pore permeable to molecules up to 900 Da [13,14]. This leads to apoptosis/necrosis, and therefore the receptor has been named the death receptor [15][16][17]. However, experiments by Baricordi et al. [18] indicated that the receptor also has proliferation-determining properties when expressed in lymphoid cells. This behavior could occur at lower ATP concentrations and is most likely not due to the pore-forming abilities of the receptor. The P2X7 receptor in rodents and humans exists in many different isoforms: in rodents, the full-length A (595 amino acids, aa) and K (592 aa) isoforms with different starting exons, two C-terminally truncated versions named B (431 aa) and C (442 aa), and isoform D with only one transmembrane domain (153 aa) [19]. Adinolfi et al. [20] suggested that the truncated isoform B stimulates proliferation in transfected HEK293 cells. Furthermore, expression of both A and B isoforms leads to additive effects on proliferation. A number of studies show that the P2X7 receptor stimulates a variety of cell responses, and there are a number of SNPs that are associated with human disease [21,22]. In pancreas, there are a number of P2 receptors on exocrine and endocrine cells [1,3]. In samples from patients suffering from chronic pancreatitis, or from mouse models of this disease, CD39, P2Y2 and P2X7 receptors were upregulated [23,24]. However, the expression of the P2X7 receptor in PSCs is unsettled, as the experimental evidence is contradictory or incomplete; expression of the receptor was claimed by Künzli et al., but Hennigs et al. could not detect any expression [24,25]. In another study, Won et al. [26] illustrated that ATP increases calcium signals in the nucleus of PSCs and that this event was independent of extracellular calcium, implicating a P2Y-type receptor. The regulation of PSC viability and function is highly relevant for many pancreatic diseases, and we hypothesized that the P2X7 receptor could be an important element in this regulation. The aim of our experiments was to determine the expression of the P2X7 receptor and its effect on proliferation and/or death in freshly isolated PSCs. For this purpose, we further developed and simplified the isolation method for PSCs obtained from WT and Pfizer P2X7 receptor KO mice [27]. The experiments presented here show that the proliferation of PSCs is dependent on the P2X7 receptor and on ATP concentrations, such that ATP can function as a promoter of proliferation at micromolar concentrations. At millimolar concentrations, however, ATP is lethal to PSCs. Isolation of Cells P2X7 KO and WT mice were bred from Pfizer (NOD.129P2(B6)-P2rx7tm1Gab/DvsJ) on the C57BL/6JBom background. Additional WT mice were purchased from Taconic (C57BL/6JBomTac). Mice of mixed gender weighing approximately 20 g were used in this study. For comparison experiments, WT and KO mice were littermates. Procedures were approved by the Danish Animal Experiment Inspectorate (Dyreforsøgstilsynet). All isolation steps were carried out to minimize the risk of contamination. The pancreata from P2X7R (P2X7 receptor) KO and WT mice were removed after the mice were killed by cervical dislocation.
Connective tissue and fat were removed and the pancreas was cut into pieces in physiological Ringer solution. Isolation media contained DMEM and F12 (1:1), 5 mM HEPES, 1 mg/ml albumin, 2 mM CaCl2 and 5 mM glycine, gassed with 5% CO2 and pH adjusted to 7.4. The small pieces were incubated in 5 ml of media containing 6 mg of Collagenase V (Sigma C9263), gassed with 5% CO2, 20% O2 and 75% N2, for 50 minutes at 37°C. Digested pieces were dispersed by vigorous pipetting with a glass pipette. The cell suspension was then centrifuged at 1000 g for 8 minutes. The media was changed and the cells were resuspended and added to a 10 cm plastic dish (NUNC) that was coated with 1 ml of FCS. After three hours, the dish was washed with media, and PSCs were the only cells that attached strongly. PSCs then appeared as small dark spots, some having small protrusions. The DMEM media (10% serum and Pen Strep, Invitrogen) was changed daily for the first two days. Immunocytochemistry PSCs were split and allowed to attach to glass coverslips overnight. The cells were fixed in 4% paraformaldehyde in PBS for 15 min. RNA Isolation and PCR Cells were cultured to confluence and then RNA was isolated with the RNeasy Mini Kit (Qiagen 74104). Briefly, cell lysates were precipitated with 1 volume of 70% ethanol and column purified. The RNA was treated with DNase 1 (RNase-free DNase Set, Qiagen 79254). 300 ng of extracted RNA was used per reaction mixture in QIAGEN OneStep RT-PCR Kit (210212) analysis, with amplification parameters as follows: one cycle at 50°C for 30 min and one cycle at 95°C for 15 min, followed by 40 cycles at 95°C for 30 s, 58°C for 30 s, 72°C for 30 s, and finally, one cycle at 72°C for 10 min. Subsequently, all transcripts were subjected to electrophoresis on 1.2% agarose gels. Table 1 shows the primers used; these were synthesized by TAG Copenhagen A/S (Copenhagen, Denmark). All primers were designed using Primer-BLAST (NCBI) with an expected Tm of 60°C; the annealing temperature was selected to be 2°C below this Tm. Western Blot Protein lysates were created by adding 5× lysis buffer (250 mM TrisBase, 1.25 M NaCl, 50 mM EDTA, 5% Triton X-100, 20 mM NaF) to the PSCs. Cell lysates were centrifuged at 15,000 g for 15 min. Western blot samples were either not reduced or reduced by heating to 90°C in the presence of 50 mM DTT (dithiothreitol) for 10 minutes, and run on precast gels from Invitrogen. The membrane was blocked overnight at 4°C in 0.5% milk powder and 1% BSA together with 0.1% Tween 20. Primary antibody against P2X7R (1:200, 4 µg/ml, extracellular, Alomone APR-004; or 1:100, 2 µg/ml, C-terminal, Santa Cruz sc-15200) was added in blocking buffer for 1 hour. The appropriate secondary antibody conjugated to horseradish peroxidase (1:2000) was added in blocking buffer for 1 hour. Enzyme substrate was added and blots were viewed on a Fusion FX Vilber Lourmat. Video Microscopy Isolated cultured PSCs were seeded onto coated culture flasks (0.031 mg/ml laminin, 0.031 mg/ml fibronectin and 0.18 mg/ml collagen IV) and allowed to attach for 3 hours. Subsequently, the DMEM medium was exchanged with Ringer's solution containing (in mM): 122.5 NaCl, 5.4 KCl, 0.8 MgCl2, 1.2 CaCl2, 1 NaH2PO4, 5.5 glucose and 10 HEPES. The culture flasks were placed in a heating chamber at 37°C on an inverted Axiovert 25 (Carl Zeiss). Cell images were recorded using video cameras (models XC-ST70CE and XC-77CE, Hamamatsu/Sony) and PCvision frame grabber boards (Hamamatsu, Herrsching).
Acquisition of images was controlled by HiPic and WASABI software (Hamamatsu). PSCs were left to rest for 30 minutes, and then stimulated with 5 mM ATP. Images of cells were taken every two minutes for 10 hours. Image selection and videos were made using ImageJ. Calcium Signals PSCs were seeded in WillCo dishes with optical glass bottoms. Cells were incubated with 5 µM Fluo-4 AM (Invitrogen) for 15 minutes. PSCs were pre-incubated with suramin (100 µM), az10606120 (10 µM) or vehicle solutions 30 min before the experiment. PSCs were stimulated with 50 µM 2'(3')-O-(4-benzoylbenzoyl)adenosine-5'-triphosphate (BzATP), 50 µM ATP, and 5 µM ionomycin, sequentially. Fluo-4 was excited at 488 nm with an Argon laser and fluorescence was collected at 500-570 nm using a PL Apo 20× NA 0.7 objective on a Leica confocal/multiphoton microscope. Cells were kept at 37°C in physiological solutions containing (in mM): 140 NaCl, 1 MgCl2, 1.5 CaCl2, 0.4 KH2PO4, 1.6 K2HPO4, 10 glucose and 10 HEPES. Image analysis was performed in LAS AF software, with 5 single-cell responses shown in the figure together with the collective response from circa 50 cells per preparation, evaluated by whole-frame analysis. Fluo-4 intensity is given as the fluorescence ratio at time t in relation to time 0 (Ft/F0), and in response to agonists as ΔFt/F0. Cell Death Assays ATP (5 mM) was added to PSCs that had been seeded in WillCo glass dishes. After 5 hours, annexin V FITC-conjugated antibody (Sigma APOAF) was added together with the dye propidium iodide for 10 minutes, and pictures were taken using a Leica SP5 X MP microscope. Annexin V was excited at 488 nm and emitted light was collected at 500-570 nm; propidium iodide was excited at 488 nm and emission was collected at 600-700 nm, sequentially. In one series of experiments, caspase 3/7 activation (Promega, Caspase-Glo 3/7 assay) was measured according to the instructions of the manufacturer. Cell Proliferation Cells were isolated and grown on plastic dishes for 7 days, which led to their activation. The activated PSCs were split and added to 96-well plates (COSTAR) at 5000 PSCs/well. After attachment for 24 hours in 10% serum, the media was changed to that containing the indicated concentration of serum together with ATP and/or other agonists/antagonists. The PSCs were then left undisturbed for 48 hours. During the last 4 hours, cells were incubated with the reagents from ROCHE (BrdU) or DOJINDO (Cell Counting Kit 8) and processed according to the manufacturer's instructions and with appropriate controls. Chemiluminescence and absorbance were measured in a Fluostar OPTIMA. All manual cell counting was done using a counting chamber or dish area in the microscope. All measurements were performed in either duplicates or triplicates. The Cell Counting Kit was used in the experiments where proliferation over many days was monitored in living cells, due to its limited toxicity. BrdU incorporation, which monitors the actual DNA synthesis, requires that the cells are fixed. This method has higher sensitivity, and most data shown in the results section were obtained with the BrdU kit. Experiments with Cell Counting Kit 8 were done in parallel and showed similar results in all experiments. Chemicals and Statistics All chemicals/kits were purchased from Sigma-Aldrich unless otherwise stated. The inhibitors A438079 and az10606120 were both purchased from TOCRIS Bioscience. Data are shown as means ± SEM; n denotes the number of experiments on cells isolated from different animals.
Student's paired t-test was applied when comparing two samples from the same animal, and P < 0.05 was accepted as significant. For comparison of responses to various agonists, Dunn's test in one-way Analysis of Variance (ANOVA) on Ranks was used. Data were analyzed in Origin or Microsoft Excel. Characterization of PSC Preparation and Morphology The two common methods for PSC isolation make use of either centrifugation gradients [4] or an outgrowth method [5]. The centrifugation method employs the low density of PSCs in a Nycodenz gradient to separate the cells. For the outgrowth method, pancreatic pieces are allowed to stay in media for a longer time, and PSCs migrate out of the tissue and proliferate. We aimed to simplify and scale down the isolation method for a single mouse pancreas. Therefore, we developed a method where PSCs were isolated using a combination of collagenase digestion and selective attachment.

[Table 1: Primer sets used for RT-PCR on PSCs for isoforms A-D and K, within the deleted area (Intra), and spanning the deleted area (Spanning).]

PSCs appear to have the ability to utilize the fibronectin in fetal calf serum for early attachment, as is the case for many cell types [28]. Therefore, PSCs attached quickly to the dish compared to duct, acinar and islet cells, which did not attach to the dish in the first three hours. The stellate cell population obtained was uniform and showed clear lipid granules that could be detected up to day 2 (Fig. 1A). After 7 days, PSCs were activated and expressed clear α-SMA and also GFAP staining (Fig. 1B-C). The Expression of the P2X7 Receptor in PSCs We examined the P2X7 receptor expression using PCR, immunocytochemistry and Western blot on WT and Pfizer KO samples. First, we investigated the mRNA content of the PSCs. The primer sets for PCR are given in Table 1. The Pfizer P2X7 receptor KO mice were created by inserting a Lox-NeoR-Promoter-Lox cassette into the C-terminal coding region in place of bp 1675 to bp 1760; this introduces an early stop codon truncating the C-terminus and affecting isoforms A and K [27]. The map of isoforms and mRNA present in WT and KO is shown in Fig. 2. In the original paper, Solle et al. did not observe any mRNA in macrophages [27]. Fig. 3A shows that mRNA for all the mouse isoforms A-D and K is expressed in WT PSCs. Surprisingly, in PSCs from the Pfizer KO, the mRNA for the exons downstream of the inserted neomycin box was still expressed, while the deleted area (P2X7R Intra) and the fragment amplified by primers spanning the insertion (P2X7R Spanning) were not present, as they were in the WT preparations. Nevertheless, the insertion of the NeoR part did lead to disruption of this part of the mRNA. Both the complete B and C isoform mRNAs of the P2X7 receptor are present in PSCs from KO (Fig. 3A). mRNA for the KO-truncated P2X7 receptor isoform A (527 aa) and K (524 aa) hybrids with an early stop codon was also detected (data not shown). These KO isoform A and K hybrids were also reported by Masin et al. [29]. To determine whether any of the mRNA subtypes resulted in protein, we examined protein expression using antibodies against the extracellular domain or the C-terminal part of the P2X7 receptor. Fig. 3B shows images of WT and KO PSCs labeled with the P2X7 receptor extracellular antibody, which labels intracellular vesicles and gives a weaker but clear plasma membrane staining observed only in WT (see insert). The antibody against the C-terminus of the P2X7 receptor did not give any significant staining of the cells (data not shown).
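Before turning to the immunoblots, a brief aside on the primer design in Table 1: the "Intra" and "Spanning" sets discriminate WT from KO transcripts simply by whether their amplicon overlaps the bp 1675-1760 region replaced by the NeoR cassette. A minimal sketch of that logic (the region coordinates come from the text above; the example amplicons are invented for illustration):

def amplicon_survives_in_ko(start: int, end: int) -> bool:
    """True if a WT-coordinate amplicon avoids the replaced bp 1675-1760
    region entirely, i.e. a product of the expected size can still form
    in the KO transcript."""
    return end < 1675 or start > 1760

print(amplicon_survives_in_ko(1690, 1750))  # "Intra" style: False, lost in KO
print(amplicon_survives_in_ko(1600, 1800))  # "Spanning" style: False, lost in KO
print(amplicon_survives_in_ko(1800, 1900))  # downstream exons: True (as for B/C)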
The same two antibodies were used for detecting the P2X7 receptor in a Western blot under reducing and non-reducing conditions (Fig. 3C). A loading control (α-SMA) was included for both blots. The expected size of a full-length P2X7 receptor monomer is 75 kDa, and this was detected in WT with both antibodies (Fig. 3C, left and right panels). As seen in the figure, the extracellular antibody (left) also recognized a ~450 kDa protein under non-reducing conditions in both WT and KO PSC samples; this band disappeared in the reduced sample and the 130 kDa band became more intense. These large proteins could be either P2X7 receptor trimers and/or a multiprotein complex, or potentially unspecific binding. Proteins of similar size have been reported by Kim et al. [30] and Masin et al. [29]. Importantly, the band for the shorter receptor isoform is expressed both in WT and KO and is detected at about 60 kDa under reduced conditions only. This is most likely isoform B or C. The C-terminal antibody (Fig. 3C, right) recognizes only the band at 75 kDa, demonstrating the full-length P2X7 receptor monomer. Clearly, the C-terminus of the P2X7 receptor is disrupted in KO PSCs and no protein was detected. Also, the clear bands at 130 kDa and 450 kDa, which were sensitive to reducing conditions, were not obvious with the C-terminal antibody (Fig. 3C, left and right panels). Differences in Cell Death in WT and KO PSCs The mRNA data together with Western blot and immunocytochemistry illustrated that several isoforms of the P2X7 receptor are expressed in KO PSCs, including a truncated isoform K that escaped deletion in the Glaxo KO [19]. However, this isoform must also be truncated in KO cells, as its last exon is the same as for isoform A. Therefore, we investigated whether the P2X7 receptor expressed in KO cells induced cell death, as is known to be the case for the full-length A and K isoforms, presumably due to their pore-forming capabilities. The behavior of WT and KO PSCs was monitored by video microscopy in an environmentally controlled chamber for 10 hours. Without exogenous ATP, there was no significant death observed in the KO cells, nor in the WT PSCs (Fig. 4A). The video capture in Fig. 4A shows that after the addition of 5 mM ATP, 72.5 ± 12% (n = 5 independent experiments) of WT PSCs died within 10 hours, compared with only 3.6 ± 2% (n = 5) of KO PSCs. The data summarizing PSC death for WT and KO PSCs are shown in Fig. 4B. These data clearly show that the WT PSCs express functional P2X7 receptors that form cytolytic pores at 5 mM ATP. The P2X7 receptor protein that is expressed in KO PSCs, however, is not sufficient to cause pore opening and the subsequent cell death. Since the cell death in WT cells visually appeared to have similarities with necrosis rather than apoptosis, we investigated which type of cell death might be initiated by high ATP concentrations. Using annexin V FITC-conjugated antibody together with propidium iodide, it is evident that the cells appear necrotic (Fig. 4C). In another assay we used the apoptotic markers caspase 3/7, and did not observe any significant activation after 5 hours (Fig. 4D), further supporting the notion that PSCs die by a non-apoptotic, likely necrotic, death. Importantly, the cell death that was initiated with 5 mM ATP could be inhibited by about 50% with the competitive P2X7 receptor-pore inhibitor A438079 at the tested concentrations (Fig. 4E). These data show that the P2X7 receptor is important for initiating the cell death.
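A minimal sketch of the kind of dead-cell quantification summarized above (per-experiment fractions of dying cells, WT vs KO); the counts are invented, and the unpaired t-test here is only one reasonable choice for comparing the two genotypes:

import numpy as np
from scipy import stats

# Invented per-experiment fractions of PI/annexin V-positive cells
wt_dead = np.array([0.81, 0.60, 0.75, 0.88, 0.59])   # 5 WT preparations
ko_dead = np.array([0.05, 0.02, 0.04, 0.01, 0.06])   # 5 KO preparations

for name, frac in (("WT", wt_dead), ("KO", ko_dead)):
    print(f"{name}: {frac.mean() * 100:.1f} ± {stats.sem(frac) * 100:.1f} % dead")

t, p = stats.ttest_ind(wt_dead, ko_dead)   # WT and KO come from different animals
print("unpaired t-test p:", p)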
Cell Numbers and Proliferation of PSCs We compared the number of PSCs isolated from WT and KO animals. Fig. 5A shows that the number of PSCs isolated from KO was about 50% lower than the number of cells isolated from the same amount of WT pancreas tissue (n = 10). Next we tested whether this difference in cell numbers is due to differences in the proliferative potential of the PSCs. The experiments were conducted to monitor the cell proliferation rate in vitro. After isolation, PSCs from WT and KO animals were immediately split and reseeded in 96-well plates at a density of 2500 cells/well. The cells were grown in 10% serum and proliferation was determined with Cell Counting Kit 8 over a period of 8 days. Fig. 5B shows that with the same seeding numbers, after the first two days the growth of PSCs from the KO mice lagged greatly behind the PSCs from WT mice. The WT PSCs grew with a doubling time of 1.7 days and the KO PSCs of 2.2 days, as calculated using the best-fit exponential growth (a minimal sketch of this calculation is given below). Clearly, KO PSCs had less proliferation potential. Since there are several isoforms of the receptor expressed in KO PSCs (see Fig. 3), this finding indicates that the full-length P2X7 receptor protein is required for the full proliferation effect. The above experiments indicated that the P2X7 receptor could be important in regulating proliferation; therefore we postulated that by providing exogenous ATP we could further increase the proliferation rate of PSCs. During the activation phase from day 0 to day 9, 100 µM ATP or 100 µM of the P2X7 receptor agonist BzATP was included. The agonists were added to the cells daily when media was changed after Cell Counting Kit 8 measurements. However, addition of the agonists did not cause any difference in proliferation (data not shown). These experiments were carried out using 10% serum, so the effects of exogenously added ATP could have been masked by: (i) the stimulation of proliferation caused by the high serum; (ii) endogenous ATP release caused by daily medium changes; (iii) and not least, changes in cells from inactive to active states. Therefore, we decided to simplify the protocols and work with activated PSCs, lower serum concentrations and the more sensitive BrdU kit. It became clear that, as for many other cells, serum is important for PSC survival and proliferation. That is, 10% serum increased BrdU incorporation in PSCs to 386 ± 51% (n = 4) compared to the 1% serum that was chosen as the control for the following experiments. ATP Affects Both Life and Death of Activated PSCs To investigate whether endogenous ATP and the P2X7 receptors are necessary for the proliferation of PSCs, we designed an experiment where cells were grown in the presence of apyrase, an enzyme that breaks down ATP/ADP to AMP. PSCs were grown in the presence of 1% serum, and proliferation of activated PSCs (grown 7 days on plastic dishes) from WT and KO mice was determined using BrdU incorporation. Fig. 6A shows that KO PSCs proliferated significantly more slowly than WT cells, which confirmed the results obtained using different methods (Fig. 5). In the presence of apyrase (5 U/ml), the proliferation rate was reduced in both WT and KO PSCs to similar levels. The cells did not die, but only stopped proliferating, as indicated by measurements with the Cell Counting Kit (data not shown).
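Returning to the growth curves of Fig. 5B: the doubling times quoted there come from a best-fit exponential, N(t) = N0·exp(k·t), so the doubling time is ln(2)/k. A minimal sketch of that calculation, with invented daily counts:

import numpy as np

days = np.arange(8)                       # daily Cell Counting Kit readings
rng = np.random.default_rng(2)
counts = 2500 * np.exp(0.41 * days) * rng.normal(1.0, 0.05, days.size)

# N(t) = N0 * exp(k t): fit log(counts) linearly in t to get the growth rate k
k, log_n0 = np.polyfit(days, np.log(counts), 1)
print(f"doubling time = {np.log(2) / k:.1f} days")   # ~1.7 d for k ~ 0.41/day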
We next tested whether ATP promotes proliferation in WT PSCs and whether this is independent of serum factors. All results were normalized to the proliferation rate with 1% serum set to 100%. Fig. 6B illustrates proliferation in 0% serum, where BrdU incorporation was 42 ± 5% (n = 17). For comparison, in 1% serum apyrase reduced BrdU incorporation to 24 ± 4% (n = 12). Addition of exogenous ATP (100 µM) to serum-free medium did not change the proliferation rate of PSCs, i.e. BrdU incorporation was 35 ± 6% (n = 6). However, addition of 5 mM ATP resulted in death of PSCs; BrdU incorporation was 2 ± 2% (n = 6). On the basis of the apyrase experiments, we concluded that endogenously released ATP could act as an autocrine proliferation potentiator for the activated PSCs in both WT and KO. We therefore hypothesized that exogenous ATP could further stimulate proliferation in 1% serum, and tested the effect of increasing concentrations of ATP from 1 µM to 5 mM. The results shown in Fig. 7A were normalized to a 1% serum control without exogenous ATP. In WT PSCs, ATP stimulated proliferation in a concentration-dependent manner with a maximum effect at 100 µM (151 ± 10%, n = 19). Higher concentrations of ATP did not stimulate proliferation; they actually lowered DNA synthesis significantly at 1 mM (72 ± 14%, n = 17). At 5 mM ATP, DNA synthesis was greatly reduced (29 ± 5%, n = 6) (Fig. 7A). The cell number was also reduced, as confirmed by cell counting (data not shown). In the following experiments, we determined the effect of ATP on BrdU incorporation in KO PSCs and compared it to the WT data. The raw value for WT PSCs in the 1% serum, zero-ATP control was 9839 ± 1570 relative light units (RLU), and for KO PSCs it was 4596 ± 1403 RLU (n = 23 and 7). Due to this difference between WT and KO, the controls were normalized so that ATP effects could be compared. As the results in Fig. 7B show, ATP stimulates proliferation in activated PSCs, in both WT and KO samples. This occurs in a concentration-dependent manner, and 100 µM ATP gave the highest response in both types of cells, with responses of 151 ± 10% for WT and 182 ± 30% for KO (n = 19 and 17) (Fig. 7B). When the ATP concentration was increased further, there was a significant difference between the proliferation of WT and KO PSCs. The KO PSCs did not die at the high ATP concentration of 1 mM, where their proliferation was 152 ± 35% of the control, compared to WT PSCs, whose proliferation was already reduced to 74 ± 7% of the control (n = 7 and 17). At the highest ATP concentration tested (5 mM), there was a huge reduction of BrdU incorporation to 29 ± 5% in WT PSCs, compared to 72 ± 14% in KO PSCs, the latter not being significantly different from the control (n = 6 and 7). Since the above experiments pointed towards the importance of the P2X7 receptor in the life and death of PSCs, we postulated that a P2X7 receptor stimulant or inhibitor could affect the ATP-stimulated proliferation. We therefore tested the effect of the prototypic P2X7 receptor agonist BzATP. Fig. 8A shows that BzATP increased proliferation to 148 ± 17% (n = 6) of the control at 10 µM and to 233 ± 46% (n = 6) at 100 µM. Thus, 10 µM BzATP elicited a similar effect on cell proliferation as a tenfold higher ATP concentration. There also seemed to be a small increase of proliferation in KO PSCs at 10 µM BzATP (143 ± 23%, n = 7) and at 100 µM (140 ± 30%, n = 7), though no statistical significance was achieved. This observation could indicate that the P2X7 receptor isoforms expressed in KO cells do not respond well to BzATP.
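The percentages above are BrdU chemiluminescence signals expressed relative to the 1% serum control; a minimal sketch of that normalization, with invented replicate readings:

import numpy as np
from scipy import stats

control_rlu = np.array([9100, 10400, 9900])     # 1% serum, no added ATP
treated_rlu = np.array([14800, 15600, 14100])   # e.g. 1% serum + 100 µM ATP

percent_of_control = treated_rlu / control_rlu.mean() * 100
print(f"{percent_of_control.mean():.0f} ± "
      f"{stats.sem(percent_of_control):.0f} % of control")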
The P2X7 receptor inhibitor A438079 is a competitive antagonist for the P2X7 receptor, specifically designed to prevent pore formation. When this antagonist (10 µM) was added 30 minutes before growth stimulation by 100 µM ATP, or to PSCs growing in 1% serum, it did not show any significant effects (Fig. 8B). Therefore, we applied a non-competitive inhibitor, az10606120. This blocker binds in a positively cooperative manner to sites distinct from, but coupled to, the ATP binding site and acts as a negative allosteric modulator [31]. Az10606120 (10 µM) was highly efficient in our experiments and inhibited PSC growth to 19 ± 8% (n = 4) compared to the 1% serum control (without exogenous ATP). The proliferation-promoting effect of 100 µM ATP was also significantly reduced, from 162 ± 25% to 37 ± 12% (n = 4), with this inhibitor. Finally, in order to obtain another evaluation of functional P2X7 receptor expression in WT and KO cells, calcium imaging was performed. Intracellular Ca2+ responses to BzATP (50 µM) and ATP (50 µM) and the receptor inhibitors az10606120 (10 µM) and suramin (100 µM) were tested. Ionomycin (5 µM), a Ca2+ ionophore, was added at the end of the experiment as a positive control. Fig. 9 shows that there were significant differences between the responses obtained from WT and KO cells. First of all, PSCs from KO animals showed significantly smaller responses to BzATP. In WT cells, BzATP changed the Fluo-4 intensity to 2.5 ± 0.23, and the az10606120 inhibitor (10 µM) lowered the response significantly to 1.6 ± 0.25 (n = 4) (Fig. 9A, B). In KO cells, BzATP also elicited Ca2+ signals, with Fluo-4 intensity increasing to 1.0 ± 0.15, and importantly, the az10606120 inhibitor had no further effect; i.e. Fluo-4 intensity was 1.3 ± 0.24 (n = 4) (Fig. 9E, F). This finding indicates that the az10606120-sensitive P2X7 receptor (isoform) remaining in PSCs from KO pancreas was not functional. In order to eliminate possible responses from other P2X and P2Y receptors, we added the broad-spectrum inhibitor suramin at such a high concentration (100 µM) that most purinergic receptors would be blocked, and presumably only P2X7 responses would be visible (Fig. 9C, G). Indeed, suramin inhibited the BzATP effect on Ca2+ signals completely in KO PSCs, as Fluo-4 intensity remained at the baseline (0.0 ± 0.02). In contrast, WT cells still had a significant response (0.7 ± 0.23, n = 4), and we postulate that this was due to P2X7 receptors. The combination of both az10606120 and suramin nearly eliminated Ca2+ responses to BzATP in PSCs from both WT (0.2 ± 0.13, n = 4) and KO (0.0 ± 0.07, n = 4) preparations. Regarding the effects of the broader-acting agonist ATP, Ca2+ responses appeared lower in KO compared to WT PSCs (though not significantly so).
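The Ca2+ responses above were quantified as Fluo-4 ratios Ft/F0, with agonist responses reported as ΔFt/F0 (see Methods); a minimal sketch of that normalization on a synthetic trace:

import numpy as np

baseline = np.full(30, 100.0)                           # pre-agonist frames
transient = 100.0 + 150.0 * np.exp(-np.arange(120) / 40.0)
trace = np.concatenate([baseline, transient])           # synthetic Fluo-4 trace

f0 = trace[:30].mean()           # time-0 (baseline) fluorescence
f_ratio = trace / f0             # Ft/F0
peak_delta = f_ratio.max() - 1   # ΔFt/F0 of the agonist response
print(f"peak ΔF/F0 = {peak_delta:.2f}")   # 1.50 for this synthetic trace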
We clearly show that PSCs express the P2X7 receptor on mRNA transcripts and protein level (Fig. 3B, C). All of the receptor isoforms (A-D and K) found in mice are expressed in the WT PSC (Fig. 3A). The KO PSCs also show mRNA for B-D isoforms and the C-terminally truncated A and K hybrid isoforms. Notably, PSCs from P2X7R KO mice also contained transcripts for the mRNA upstream relative to the 85 nucleobases in P2X7R that were replaced by the NeoR-plasmid insert. This expression profile of the P2X7 receptor is probably also relevant for other tissues. Our study indicates that the B (or C) isoform protein is Figure 6. Effect of endogenous ATP and serum on PSCs proliferation. A. Comparison of the DNA incorporation in WT and KO PSCs with and without 5 U/ml apyrase. All samples contained 1% serum (n = 9-23). Y-axis shows the BrdU incorporation in relative light units. In the presence of apyrase, there was no significant difference between proliferation of WT and KO cells. Significant difference (P,0.05) from respective controls (#) and between WT and KO (*) is indicated. B. Effect of serum on DNA incorporation in WT PSCs. Significant difference (P,0.05) from 1% serum controls (#) and between 0% and 0% serum with 5 mM ATP (1) is indicated. doi:10.1371/journal.pone.0051164.g006 expressed, as we detect a band at the expected size of 60 kDa in the Western blot with the extracellular antibody (Fig. 3C). This is highly relevant as the isoforms can be partially responsible for the proliferation effect in KO PSCs, and at least the B isoform has been implicated as a growth promoter [20]. In addition to the expressed B or C protein detected, we cannot determine the identity of the higher molecular weight bands. These could contain P2X7 protein coded by the transcript for either the isoforms KO A or K hybrids, B or C, as all of these versions will be recognized by the extracellular antibody. This is also the conclusion reached in a recent paper re-evaluating the P2X7 receptor expression in P2X7R KO mice by Masin et al. [29]. Interestingly, using immunocytochemistry we find that the P2X7 receptor antibody shows marked intracellular locations in both KO and WT PSC, which could be due to receptor desensitization during preparation. This has been seen regularly with P2X7R staining [32] and P2X7R-EGFP expression [33]. Intracellular hotspot staining is similar to that found for many other receptors, for example GABA [34]. Although this intracellular staining could be unspecific, we also see a clear membrane stain in the WT cells that is not observed in the KO cells (Fig. 3B). We also performed a functional assay of the P2X7 receptor using Ca 2+ imaging (Fig. 9). Interestingly, there was still a response to BzATP in KO PSCs, but this could not be inhibited by the P2X7 receptor blocker az10606120 as it could for the WT PSCs. This supports the notion that the major P2X7 receptor isoform, possibly B or C, is not functioning as a cation channel in KO PSCs. The possible explanation for the BzATP effects in KO cells is that it could be mediated by other purinergic receptors, such as the P2X1 receptor that has affinity for this agonist [35], or that expressed isoforms are not sensitive to az10606120 inhibitor. Several of our experiments show that the P2X7 receptor in WT PSCs is a death receptor when exposed to high ATP concentrations (Fig. 4, 7). However, PSCs from KO animals do not undergo cell death under the same conditions. 
This illustrates that the main receptor phenotype is indeed altered in the KO mice, and the cells escape death. The protein site where the "death sequence" is located is most likely in the KO region (506aa-532aa) of the C-terminal. In this region, there is a sequence that is similar to the TNF-Receptor1 death domain (436aa-531aa) [36]. The KO receptor data in PSCs strengthen the prediction about the death region in the A and K isoform C-termini. Our experiments also indicate that cell death in PSCs might be of a necrotic character (Fig. 4C, D), which has also been seen in other studies [15,16]. One of the most important outcomes of our study is that the full length P2X7 receptor (A and/or K isoform) is important for optimal regulation of proliferation in PSCs. Firstly, the numbers of PSCs isolated in vivo from KO mice were about 50% lower compared with cells isolated from the WT mice (Fig. 5A). This agrees with the study of Glas et al. [37], who found fewer pancreatic β-cells in the P2X7 receptor KO animals. Clearly, the lower number of cells from the KO tissue would not be expected if the main effect of the P2X7 receptor was that of a death receptor. Since the KO cells lack the full length P2X7 receptor, and there is a lower number of PSCs, we argue that the main property of the P2X7 receptor is to maintain proliferation of these cells in the pancreas.

Figure 8. Effect of BzATP, A438079 and az10606120 on proliferation of PSCs. A. The effect of BzATP on WT and KO PSCs (n = 6-7). All results were normalized to 1% serum controls that were set to 100%. The effect of: B. the P2X7 receptor inhibitor A438079 (10 µM); C. the negative allosteric modulator az10606120 (10 µM) on the basal proliferation response and in WT PSCs stimulated with 100 µM ATP (n = 4-23). Significant difference (P<0.05) from respective controls (#) and with/without the inhibitor (*) is indicated. doi:10.1371/journal.pone.0051164.g008

Secondly, in vitro the KO PSCs grow much slower than WT PSCs, as verified by several protocols (Fig. 5, 6). Basal ATP release occurs in many cells [38]. In apyrase experiments we show that endogenous ATP is important for proliferation of PSCs (Fig. 6A). Since this is the case for both WT and KO cells, one could infer that the isoforms expressed in KO PSCs, potentially the B or C variant detected, can partly compensate for the loss of the potentiating effect of the full length P2X7 receptor (see below). In order to simulate a stimulatory autocrine or paracrine release of ATP, exogenous ATP was added to PSCs. Most importantly, proliferation of PSCs was stimulated with ATP concentrations up to 100 µM (Fig. 7). We suggest that the basic proliferative response is mediated by either one of the truncated isoforms B or C, or potentially a KO A and K version, which has a truncated C-terminal. The N-terminal of the P2X7 receptor, which is still present in the KO, could transduce proliferative signaling via ERK1/ERK2 [33]. Nevertheless, for the full proliferative effects of exogenous and endogenous ATP seen in WT cells, the full length P2X7 receptor is required. Together, these are therefore the first experiments that illustrate the proliferation potential of the P2X7 isoforms in native cells. Our findings are consistent with the reports of Adinolfi et al. on HEK293 cells transfected with the P2X7 receptor [20] and Monif et al. on glial cells transfected with the P2X7 receptor [39]. Our PSCs do also require some serum for growth (Fig. 6B), perhaps because they are primary cells.
Both the proliferative and death effects of the P2X7 receptor in PSCs are supported by pharmacological data, which give new insights. BzATP stimulated proliferation in WT PSCs with about tenfold higher potency than ATP (Fig. 8A), implicating a P2X7 receptor effect. However, in KO PSCs, the proliferation effect of BzATP was small and also lower compared to ATP. Amstrup and Novak [33] showed that a C-terminal truncation (ΔC) from Ser365 gave a smaller Ca2+ response to BzATP, while the ATP effect was unchanged, which could indicate that the C-terminal is important for Ca2+ signal transduction. The competitive blocker A438079 inhibited cell death in PSCs, but the effect on proliferation was marginal (Fig. 8B). The negative allosteric modulator az10606120, on the other hand, had a significant effect on growth induced by micromolar ATP concentrations (Fig. 8C). These data agree well with the intracellular Ca2+ measurements showing that az10606120 significantly inhibited the response in WT PSCs (Fig. 9C). It is possible that the effect of ATP on proliferation could differ from the ATP effect on pore formation and cell death due to different binding sites on the receptor. In accordance, Klapperstück et al. [40] showed that there are two high-sensitivity binding sites for ATP (4 µM) and two low-sensitivity binding sites for ATP (200 µM) on the P2X7 receptor. The high-sensitivity signal was dependent on both the N- and C-termini for full transduction, while the low-sensitivity signal only depended on the C-terminal. Therefore we propose, based on the KO data, that the high-sensitivity site can still transduce some signal, and that BzATP might not be such a good ligand for this site. This is also confirmed by the Ca2+ imaging data, which showed a significantly lower response to BzATP in KO compared with WT PSCs. We also conclude that the competitive inhibitor, A438079, most likely binds to the low-sensitivity site and might not be a good antagonist with respect to proliferation. The negative allosteric modulator, az10606120, has on the other hand a large impact on proliferation, suggesting that it modulates the high-sensitivity site. Based on the present in vitro experiments, extracellular ATP is an important proliferation regulator in PSCs. How can this be transferred to the in vivo situation? PSCs are located around the acinar cells [4], suggesting that their main function is related to their interplay with these cells. In a normal physiological situation acinar cells release zymogen granules containing digestive enzymes to the pancreatic duct lumen. In case of pancreatic damage, some zymogen granules could release their cargo to the basolateral side of the acinus [41]. We have shown that zymogen granules accumulate ATP via the VNUT/Slc17a9 transporter and thus contain ATP at concentrations that could activate purinergic receptors on surrounding cells [42]. Thus, when mild pancreatic damage occurs, we propose that PSC proliferation involving the P2X7 receptor would be stimulated to protect the damaged area. Hoque et al. [43] showed that P2X7 receptor KO mice were less susceptible to developing maximal acute pancreatitis; however, they postulated that this was due to the lack of P2X7 expression on macrophages. The cytolytic effect of ATP is more difficult to explain. We propose that massive damage and disruption of acinar cells and release of intracellular ATP (3-5 mM) can lead to the death of PSCs, presumably by "overstimulated" P2X7 receptors. Cytolytic P2X7 receptor stimulation in macrophages releases IL-1β [27,44].
It is a possibility that PSCs can convert the death signal of high ATP concentrations into an interleukin release, which further activates and attracts other PSCs and immune cells. Whether the P2X7 receptor releases the interleukins, or whether it causes the cleavage of the propeptide, remains to be investigated.

Conclusion

In conclusion, we show that basal and added ATP stimulated proliferation in PSCs. This effect is mediated by the P2X7 receptor, and maximal proliferation is obtained with 100 µM ATP. This effect and the associated Ca2+ signaling could be inhibited by the specific P2X7 antagonist az10606120. We propose that the full proliferation effect requires the full length P2X7R isoform A or K. Nevertheless, since the KO PSCs show about half of the maximal proliferative response to endo-/exogenous ATP, either the KO A or K hybrid versions, or the isoforms B or C, are also involved. Since we find expression of the B or C variant proteins, we deem the latter more likely. At high ATP concentrations, exceeding 1 mM, the cytotoxic effect of ATP dominates and the WT PSCs, but not the KO cells, die; this could be partially inhibited by the antagonist A438079. Therefore, the C-terminal of the P2X7 receptor is necessary for the death signal. Together this study presents strong evidence that ATP and the P2X7 receptors are important regulators of both life and death of PSCs, and therefore might be potential targets for treatments of pancreatic fibrosis and of PSC interactions with other cells, for example, in the development of pancreatic ductal adenocarcinoma.

Figure 9. Calcium responses to BzATP and ATP in WT and KO PSCs. The Fluo-4 response (Ft/F0) of WT (A-D) and KO (E-H) PSCs stimulated with 50 µM BzATP, 50 µM ATP and lastly with 5 µM ionomycin. Intracellular Ca2+ signals were monitored in control conditions (A, E), in the presence of az10606120 (B, F), suramin (C, G) and suramin together with az10606120 (D, H). The figures illustrate one independent representative run with the frame average for about 50 cells in black, and an example response of 5 individual cells is shown in color. The peak intracellular Ca2+ responses to agonists +/− antagonists, derived from frame intensities for approximately 50 cells/preparation, were averaged for 4 separate PSC preparations from WT and KO mice and are shown as bar graphs. The letters underneath the bars show from which experiment (indicated by a letter) the mean Ca2+ peak values were taken. Significant difference (P<0.05) from respective controls (#) and between WT and KO preparations (*) is indicated. doi:10.1371/journal.pone.0051164.g009
CLINICO-DEMOGRAPHIC PROFILE AND TREATMENT OF PATIENTS WITH PROSTATE CANCER IN A NORTH-CENTRAL NIGERIAN TEACHING HOSPITAL

Background: Prostate cancer is one of the most common malignancies afflicting men worldwide. In the male population, it is estimated that one in seven will be diagnosed with and one in 38 will die from prostate cancer. The majority of patients in Sub-Saharan Africa present with advanced disease. Objective: To identify, among prostate cancer patients, the age, clinical manifestation and stage at presentation, as well as treatment received. Materials and Method: The study reviewed patients with prostate cancer at the Jos University Teaching Hospital between January 2014 and December 2017. The demographic and clinical characteristics as well as treatment given were analysed. Results: A total of 82 patients were studied. The age range was 41-100 years with a median of 67.9 years. The peak age group was 71-80 years, accounting for 41.4% of patients. Lower urinary tract symptoms were present in all patients at the time of presentation; 59.8% of these patients presented with metastatic symptoms. Persistent low back pain was seen in 61.2% of patients with metastatic symptoms, and digital rectal examination was suggestive of malignancy in 62.2% of patients. PSA was >20 ng/ml in 73.3% of patients. Histology for all patients was adenocarcinoma, with a predominant Gleason score of 8 (29.3%). Bilateral total orchidectomy was offered in 59.8% of patients. Conclusion: The majority of patients with carcinoma of the prostate in Jos have features of metastasis at the time of diagnosis. Orchidectomy is the most common treatment offered in our environment.

Tools that are recognized in prostate cancer diagnosis and screening include digital rectal examination (DRE), prostate specific antigen testing (plus its refinements like PSA density) and transrectal ultrasound scans. Prostate specific antigen (PSA) is widely used in detection of prostate cancer. PSA testing increases the positive predictive value of DRE for prostate cancer, hence their use in combination by most urologists for prostate cancer detection [10]. The optimum treatment option available to patients with prostate cancer at all stages of the disease has been a subject of debate due to uncertainty surrounding the relative efficacy of various modalities, including radical prostatectomy, radiotherapy, surveillance and endocrine therapy; hence treatment decisions are guided by grade and stage of tumor, life expectancy, disease-associated morbidity, as well as patient and surgeon preferences [9]. This study seeks to review the clinical characteristics of patients with prostate cancer and treatment given at a tertiary health institution in North-Central Nigeria.

METHODOLOGY This is a retrospective descriptive study (January 2014-December 2017) carried out at the Jos University Teaching Hospital, Jos. Patients with a histologic diagnosis of prostate cancer in the urology unit were recruited for this study. Patient records were reviewed and data were obtained using a structured proforma. Patients with incomplete records were excluded. Data analysis was done using SPSS Version 20, with data expressed using tables.

RESULTS Data obtained were for 82 patients, and the parameters analysed were: demographic characteristics, clinical presentation, PSA at presentation, histologic characteristics, and treatment modality.
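The descriptive analysis was done in SPSS Version 20; as a hedged illustration only, an equivalent frequency-table computation can be sketched in Python/pandas. The column names, bin edges and example records below are invented for demonstration and do not reproduce the study data.

```python
import pandas as pd

# Invented proforma extract: age in years, PSA in ng/ml.
df = pd.DataFrame({
    "age": [55, 74, 68, 81, 63],
    "psa": [18.2, 45.0, 3.1, 171.2, 22.8],
})
df["age_group"] = pd.cut(df["age"], bins=[40, 50, 60, 70, 80, 90, 100],
                         labels=["41-50", "51-60", "61-70", "71-80", "81-90", "91-100"])
df["psa_group"] = pd.cut(df["psa"], bins=[0, 4, 10, 20, float("inf")],
                         labels=["0-4", "4.1-10", "10.1-20", ">20"])

# Frequency tables with percentages, analogous to the paper's Tables 1 and 4.
for col in ("age_group", "psa_group"):
    counts = df[col].value_counts(sort=False)
    table = pd.DataFrame({"n": counts, "%": (100 * counts / len(df)).round(1)})
    print(table, "\n")
```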
The mean age of patients that presented was 67.9 years. All patients at the time of clinical presentation had lower urinary tract symptoms, with 59.8% of them having features suggestive of metastasis at the time. The digital rectal examination finding was suggestive of malignancy at the time of diagnosis in 62.2% of patients. The prostate specific antigen values ranged from 2.83 to 171.2 ng/ml, with 2.4% of patients having a PSA

DISCUSSION In this study, the age range at presentation was 41 to 100 years, with a modal range of 71 to 80 years and a mean age of 67.9 years. All patients in this study had lower urinary tract symptoms at presentation, which is consistent with late presentation, as prostate cancer rarely presents with symptoms unless advanced [9]. The duration of symptoms before presentation was less than 6 months in the majority of patients. This is similar to a study by Nwofor and Oranusi which showed an average duration of 8 months prior to presentation [11]; this may be explained by improved health consciousness in Jos (in North-Central Nigeria compared to South-Eastern Nigeria). The predominant symptoms in this study indicative of metastasis were persistent low back pain and lower limb weakness, making up 61.2% and 20.4% respectively. This can be explained by the fact that the commonest route of spread of prostate cancer is to the bone, and in particular the spine [12]. A similar study by Ekeke et al. in Port Harcourt found paraplegia and haematuria with anaemia to be the common features [13]. In Africa, and Nigeria in particular, most patients with prostate cancer present with advanced disease, and endocrine therapy is the commonest treatment option offered. From the study, 42.7% of patients preferred and opted for surgical castration (bilateral total orchidectomy) as a primary mode of treatment, while 17.1% of patients who initially opted for medical castration later requested surgical castration. This is probably due to the high cost of the drugs: it is difficult for patients to procure these medications regularly and to sustain treatment, considering that healthcare in this part of the world is still predominantly funded out-of-pocket, with little or no insurance cover. Also, from the study, 23.2% of patients declined any form of therapy, a finding that may be explained by patients' aversion to surgical orchidectomy, financial constraints and fears regarding adverse effects of these medications.

Table 1: Age distribution of 82 patients with advanced prostate cancer
Table 2: Duration of lower urinary tract symptoms (LUTS) in 82 patients with advanced prostate cancer
Table 3: Distribution of Metastatic Symptoms in 49 patients with advanced prostate cancer
Table 4: Prostate Specific Antigen (PSA) at Diagnosis in 82 patients with advanced prostate cancer

CONCLUSION Carcinoma of the prostate is common in Africa, with an incidence and attributed mortality that is on the rise, especially among the elderly. Our study corroborates the finding that late presentation is common in sub-Saharan Africa, with a large number of patients having features of metastasis at the time of diagnosis. Orchidectomy is the most common treatment offered in our environment, as most modern treatment options for the disease are unavailable or unaffordable. There is the need to institute measures for early diagnosis and provision of facilities to institute effective treatment.

Table 6: Distribution of treatment offered in 82 patients with advanced prostate cancer
Electrically Controlled Adsorption of Oxygen in Bilayer Graphene Devices

We investigate the chemisorption of oxygen molecules on bilayer graphene (BLG) and its electrically modified charge-doping effect using conductivity measurements of a field effect transistor channeled with BLG. We demonstrate that changing the Fermi level by manipulating the gate electric field significantly affects not only the rate of molecular adsorption but also the carrier-scattering strength of the adsorbed molecules. Exploration of the charge transfer kinetics reveals the electrochemical nature of the oxygen adsorption on BLG. [This document is the unedited Author's version of a Submitted Work that was subsequently accepted for publication in Nano Letters, © American Chemical Society after peer review. To access the final edited and published work see http://dx.doi.org/10.1021/nl202002p.]

It has been a central topic of surface science how to control adsorption and desorption in order to bring out desirable features and functionalities of adsorbed molecules. Tuning the electronic features of solid surfaces has an important implication in that molecular chemisorption and catalytic reactions are determined by them [1,2]. In particular for graphene, the two-dimensional honeycomb carbon lattice, in which the conduction π*-band and the valence π-band touch each other at the "Dirac point", giving the feature of a zero-gap semiconductor [3], the control of chemisorption is a critical issue, since chemisorption directly alters every electronic property of graphene. Other than the electron/hole doping [4] owing to the charge transfer between graphene and the adsorbed molecules, widely known are the charged impurity effect on the electron transport [5-7], lattice deformation [8], and the opening of a band gap due to asymmetric adsorption [9-11]. Aside from the macroscopic spatially-controlled adsorption that is achieved using nano-device fabrication techniques [12], microscopic control of the adsorption structure is of great importance because the aforementioned adsorption effects are altered by the local structure of the adsorbate, e.g., whether the adsorbed molecules are arranged in a random or superlattice structure [10,13], or whether the molecules are adsorbed individually [4,14] or collectively (in dimers [15] or clusters [16,17]). As a first step toward such advanced control of adsorption, methods that utilize the interaction between the adsorbed molecules and graphene must be explored. The principal impetus in the present study is to control the charge transfer between graphene and the adsorbed molecules by tuning the Fermi level of graphene, which is readily accomplished in the field effect transistor (FET) structure. When a SiO2/Si substrate is used as the back-gate insulator of the FET, the tuning range of the Fermi level of graphene by application of the gate voltage is on the order of several times ±0.1 eV [18], which would be sufficient to alter the chemical reactivity of the surface. Besides, the additional charge and the gradient of electric potential generated by the gate electric field are expected to change the polarization of the adsorbed molecules [19,20] and to modify the charge distribution on the graphene layers and the adsorbed molecules [21-25], leading to, e.g., a change in the ease of migration of molecules adsorbed on graphene [26].
Some research [27-29] argues that the change in the Fermi level caused by a gate electric field activates electrochemical redox reactions and that the accompanying charge transfer causes hysteresis of the source-drain current in graphene FETs, yet there has been no investigation that elucidates the relation between the kinetics of adsorption on graphene and the gate electric field. In this study we investigated gate-tuned molecular oxygen adsorption through systematic measurements of conductivity, using mainly bilayer graphene (BLG). The back-gated BLG-FETs were fabricated on SiO2 (300 nm thick) on a heavily n-doped silicon substrate by means of photolithography. The channel length and width were 6 µm and 3.5 µm, respectively. Prior to measurement, we repeated vacuum annealing (210 °C, 10 h) to remove the adsorbed moisture and contaminants on the surface until no more changes in the gate-dependent conductivity σ were eventually seen. After the annealing, the BLG-FET exhibited its pristine nature, that is, ambipolar transport properties with a conductivity minimum around Vg = V0CNP (< 8 V), giving the "charge neutrality point (CNP)" at which the electrons and holes in BLG are equal in density. The Drude mobility µ(n) was estimated to be ∼ 1 × 10³ cm² V⁻¹ s⁻¹ from the equation µ(n) = σ/(e|n|), where n is the carrier density, with n = (cg/e)(Vg − VCNP) (cg/e = 7 × 10¹⁰ cm⁻² V⁻¹; cg is the capacitance per unit area for the back-gated graphene FET on 300 nm-thick SiO2) [22]. This is in the range of the values for BLG-FETs in two-probe configuration previously reported [6], and therefore we confirm that the graphene of the present BLG-FET has few defects of the kind that may extremely enhance the chemical reactivity of graphene [30].

FIG. 1: (a) First, the field effect transistor channeled by bilayer graphene (BLG-FET) is exposed to gaseous O2 while the gate voltage Vg,ad is applied (left panel). Then the system is evacuated and the source-drain conductivity of the BLG-FET is measured by sweeping the gate voltage Vg (right panel). Subsequently gaseous O2 is again introduced, and the cycle is repeated. The gas introduction and evacuation are completed in a shorter time than ∼ 10 s to prevent additional gas from adsorbing. The whole cycle is executed at room temperature. (b) Change of the field effect behavior due to the O2 exposure with Vg,ad = 0 V (run 1). The gate voltage giving the minimum conductivity (the charge neutrality point) is shifted from Vg = V0CNP (< 8 V) (marked by a brown triangle, before O2 exposure) to the positive direction (green triangles) upon O2 exposure. Circles on the curves represent the conductivity at the hole density of 2.5 × 10¹² cm⁻² that are used to calculate the Drude conductivity shown in Figure 4a. (c) The same measurement as in panel (b) with finite Vg,ad applied. All the curves are shifted by −V0CNP (V0CNP = 13, 9, and 11 V for the runs of Vg,ad = +80, +40, and −50 V, respectively) in the Vg direction, i.e., the charge neutrality points of the pristine graphene without the adsorbed oxygen are taken as zero gate voltage. The top panel of (c) represents σ vs Vg − V0CNP for the pristine graphene. The changes of the σ vs Vg − V0CNP curve after a single and a double exposure to O2 (the time duration of a single exposure is 30 s) are shown in the center and the lower panel of (c), respectively. Filled circles indicate Vshift = VCNP − V0CNP (the shift of the CNP) for each curve.
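As a rough illustration of the two relations just quoted, n = (cg/e)(Vg − VCNP) and µ(n) = σ/(e|n|), the following sketch reproduces the order of magnitude of the reported mobility; the particular σ and Vg values are assumed placeholders, not measured ones.

```python
CG_OVER_E = 7e10          # cm^-2 V^-1, 300 nm SiO2 back gate (from the text)
E = 1.602e-19             # elementary charge, C

def carrier_density(vg, v_cnp):
    """Gate-induced carrier density in cm^-2 (sign indicates carrier type)."""
    return CG_OVER_E * (vg - v_cnp)

def drude_mobility(sigma, n):
    """Mobility in cm^2 V^-1 s^-1 from sheet conductivity sigma (in S)."""
    return sigma / (E * abs(n))

n = carrier_density(vg=-30.0, v_cnp=8.0)     # ~ -2.7e12 cm^-2 (holes)
mu = drude_mobility(sigma=4.3e-4, n=n)       # ~ 1e3 cm^2/Vs, as in the text
print(f"n = {n:.2e} cm^-2, mu = {mu:.0f} cm^2/Vs")
```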
Next we exposed the BLG-FET to 1 atm of high-purity (>99.9995%) oxygen in the measurement chamber at room temperature. Instead of measuring conductivity with the graphene kept in the O2 environment, we performed the short-time-interval exposure-evacuation cycles schematically shown in Figure 1a; O2 exposure was done under the dc gate voltage Vg,ad, followed by rapid evacuation in less than 10 s (the physisorbed O2 molecules would be removed immediately without charge transfer), and eventually the σ vs Vg measurement was done with Vg swept under vacuum. In this cycle, we can rule out the possibility of additional oxygen adsorption during the gate voltage sweep for the σ vs Vg measurement, since the system was evacuated then. In addition, we found that the σ vs Vg curve did not vary under vacuum at room temperature for at least several hours, so that we can also rule out the possibility of oxygen desorption during the σ vs Vg measurement (taking ca. 10 min to obtain a single σ vs Vg curve). Therefore, just repeating the cycles substantially realizes a long-time O2 exposure under Vg,ad, the length of which is denoted by the total O2 exposure time, t. Upon O2 exposure, a shift of the CNP in the positive direction was observed (Figure 1b), which represents hole doping of graphene. Prolonged exposure brought further hole doping, and eventually the doping density (induced charge by the oxygen adsorption) nox(t) = (cg/e)Vshift(t) reached more than 5 × 10¹² cm⁻² within a time scale of 10³ min. Note that no hysteresis, as observed in graphene FETs in moist atmosphere [29,31-34], was found in the observed σ vs Vg curve, so that we can uniquely determine VCNP(t) as a function of t. Another remarkable feature is the hole conductivity in the highly doped regime, Vg − VCNP(t) < −40 V (i.e., |n| > 3 × 10¹² cm⁻²). The σ vs Vg curve was distorted and exhibited a sublinear dependence in this regime for the pristine BLG-FET, as can be seen in Figure 1b. Yet this disappeared after only 1 min of exposure to O2, whereas the carrier doping had not proceeded much at that time. Thus this rapid change in the conductivity feature is discriminated from the slower change causing the shift of the CNP; one possibility is that the former is due to rapid oxidation of the metal-graphene interface [35,36], which does not shift the Fermi level of graphene but asymmetrically varies the conductivity. Both VCNP and the mobility of the BLG-FET with the oxygen adsorbed were reset to the values for the pristine BLG-FET by annealing the O2-exposed BLG-FET in vacuum at 200 °C, indicating that O2 desorption readily proceeds at high temperature without creating any defects. By virtue of this reversibility of the oxygen adsorption, we can repeat the conductivity measurements in the exposure-evacuation cycles as described above for the same device and compare the results. We additionally carried out four consecutive measurements under the same conditions except that a finite gate voltage Vg,ad was applied during O2 exposure; Vg,ad was +80 V (run 2 and run 3), −50 V (run 4) and +40 V (run 5). Figure 1c represents the change of the gate-dependent conductivity σ vs Vg − V0CNP at the initial step of runs 2, 4, 5: before O2 exposure (i.e., after vacuum annealing) and after the first and second exposure-evacuation cycles (the duration of the O2 exposure in each cycle is 30 s, i.e., t = 0.5, 1 min after the first and second cycle, respectively).
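The conversion from the measured CNP shift to doping density, nox(t) = (cg/e)Vshift(t), is a one-line calculation; the sketch below applies it to a hypothetical exposure series (the Vshift values are placeholders, not data from runs 1-5).

```python
CG_OVER_E = 7e10                       # cm^-2 V^-1, from the device geometry

exposure_min = [1, 10, 100, 1000]      # total O2 exposure time t, minutes
v_shift = [2.0, 8.0, 22.0, 41.0]       # VCNP(t) - VCNP(0), volts (illustrative)

for t, vs in zip(exposure_min, v_shift):
    n_ox = CG_OVER_E * vs              # holes per cm^2 doped by adsorbed O2
    print(f"t = {t:5d} min: Vshift = {vs:5.1f} V -> nox = {n_ox:.2e} cm^-2")
```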
All the σ vs Vg − V0CNP curves collapsed onto an almost identical curve in the pristine graphene, as shown in the top panel of Figure 1c. Since V0CNP was within 10 ± 3 V for each run (see the caption of Figure 1), the BLG-FET was confirmed to exhibit its pristine features before O2 exposure. After the O2 exposure (the center and the bottom panels of Figure 1c) the gate-dependent conductivity changes similarly to what was also observed for run 1 (Figure 1b), but the effect of applying Vg,ad during O2 exposure is marked by the clear difference in Vshift(t). There is a tendency for Vshift(t) to be larger for higher Vg,ad and smaller for lower Vg,ad, indicating that hole doping proceeds more intensively in graphene with a higher Fermi level. This trend is pronounced with increasing exposure time, as confirmed by comparing the center and the bottom panels of Figure 1c.

FIG. 3: We estimate dnox/dt from the differential between the neighboring data points for each run in Figure 2. Lines are linear fits for all the differential data. The slope of each curve gives u = 1.02, 0.91, 0.86, and 0.77 (dnox/dt ∝ t⁻ᵘ) for the gate voltage Vg,ad = +80, +40, 0, and −50 V, respectively.

We tracked the temporal evolution of the gate-dependent conductivity over a wide time range between 10⁰ and 10³ min. Figure 2 represents Vshift for runs 1-5 with respect to O2 exposure time, in which the corresponding doping density owing to the oxygen adsorption, nox, is also shown on the right axis. The tendency that high Vg,ad leads to rapid doping can be seen clearly over the whole time range; e.g., to reach the doping level of Vshift = 40 V, it took ca. 300 min for the nonbiased BLG-FET. In contrast, hole doping is so enhanced for the BLG-FET at Vg,ad = +80 V that it took only 4 min, and on the other hand so suppressed for that at Vg,ad = −50 V that it took more than 1000 min. The doping density increases almost linearly with respect to log t for Vg,ad = +80 V and +40 V, whereas superlinearly for Vg,ad = 0 V and −50 V. The plots for runs 2 and 3, having a common Vg,ad = +80 V, fall completely on the same line, which verifies that the thermal annealing in vacuum for reproducing the undoped state of the BLG does not affect the behavior of adsorption. Figure 3 shows the time dependence of the doping rate dnox/dt estimated from the differential between the neighboring data points in Figure 2. It is obvious that the doping rate changes in accordance with dnox/dt ∝ t⁻ᵘ. The power u is dependent on Vg,ad; u ≈ 1 for Vg,ad = +80 V and it decreases for the runs with lower Vg,ad. This deviates from conventional Langmuirian kinetics for molecular adsorption, which would give dnox/dt ∝ exp(−t/τ) with a constant τ. Careful verification is necessary to examine this gate-voltage-dependent and non-Langmuirian temporal change of the molecular doping, since the rate of the doping density dnox/dt is related both to the rate of chemisorption of the molecules (dNox/dt, where Nox is the areal density of the adsorbed oxygen molecules) and to the transferred charge per adsorbed molecule (the charge/molecule ratio, Z). Therefore, we analyze the mobility, which includes information on the scattering mechanism of the conducting electrons and on the charge of the adsorbed molecules. Within a standard Boltzmann approach [37], the mobility varies inversely with the density of the scattering centers (i.e., the adsorbed molecules), Nox.
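The exponent u quoted in the FIG. 3 caption can be extracted by exactly this procedure: finite differences of nox(t) followed by a linear fit in log-log coordinates. The sketch below demonstrates it on synthetic data generated with a known exponent, not the measured series.

```python
import numpy as np

t = np.logspace(0, 3, 12)                      # exposure times, minutes
# Toy nox(t) chosen so that dnox/dt ~ t^(-0.9), i.e. u = 0.9 by construction.
n_ox = 5e12 * (t**0.1 - 1) / (1000**0.1 - 1)

t_mid = np.sqrt(t[:-1] * t[1:])                # geometric midpoints
rate = np.diff(n_ox) / np.diff(t)              # dnox/dt between data points

# Linear fit in log-log space: slope of ln(rate) vs ln(t) equals -u.
slope, intercept = np.polyfit(np.log(t_mid), np.log(rate), 1)
print(f"u = {-slope:.2f}")                     # ~0.9 for this toy input
```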
In the realistic case, the inverse mobility is given as a function of Nox and the carrier density n, which reads

µ⁻¹(n, Nox) = µ0⁻¹(n) + Nox/C(n). (1)

Here µ0(n) represents the mobility of the pristine graphene without the adsorbed oxygen. The coefficient C(n) represents the character of the carrier scattering by the adsorbed oxygen. On the one hand, charged-impurity scattering [38] gives C(n) ∝ [1 + 6.53 √n (d + λTF)]/Z² for BLG in the low-carrier-density regime and in the limit of d → 0 within the Thomas-Fermi approximation [38], where d is the distance between the impurities and the center of the two layers of the BLG (see the inset of Figure 4a for the definition), and the screening length λTF = κℏ²/(4m*e²) ≈ 1 nm (κ: dielectric constant) [38]. On the other hand, short-range delta-correlated scatterers give a constant C(n) ≡ Cs [38], while strong impurities with potential radius R give a C(n) which is a decreasing function of n in the regime of n ∼ 10¹² cm⁻², taking R to be several angstroms. As for the relation between the concentration of the adsorbed oxygen molecules Nox and the O2-induced doping density nox, we assume that the charge Ze of each adsorbed oxygen molecule dopes carriers −Ze into the BLG, notably, nox = −ZNox. To be exact, the amount of the induced charge is not such a simple function proportional to the number of adsorbed molecules, due to the energy-dependent DOS of BLG [40,41] and the anomalous screening effect therein [5,6,42]. Yet in the low-energy regime of BLG, where the DOS is envisaged to be constant, the assumption above is appropriate. Figure 4a shows the inverse Drude mobility µ⁻¹ vs nox plots at the carrier density of n = 2.5 × 10¹² cm⁻² (marked by the open circles in Figure 1b for run 1: Vg,ad = 0 V) for Vg,ad = +80, +40, 0, and −50 V. A linear increase in µ⁻¹ with respect to nox was found. This, along with the linearity between µ⁻¹ and Nox given by Eq. (1), implies that Z is not a function of Nox, i.e., it is invariant against the increase of the adsorbed molecules. Interestingly, the slope of the µ⁻¹ vs nox plot (the inverse of the slope corresponds to C(n)|Z| in Eq. (1)) depends on Vg,ad. Note that before O2 was introduced (nox = 0), we observed µ⁻¹ ≈ 28 V s m⁻² irrespective of Vg,ad, and thus the difference in the mobility by Vg,ad genuinely results from the adsorbed oxygen instead of other unintentional impurities on the BLG or the SiO2 substrate. In Figure 4b, the inverse of the slope, C(n)|Z|, is shown for various carrier densities, n. Therein we omit the data in the low-carrier regime of n < 2.5 × 10¹² cm⁻², in which the residual carriers due to electron-hole puddles cannot be disregarded and the carrier density n (and thus also the Drude mobility) cannot be correctly estimated by considering only the gate electric field effect [43]. In line with the charged-impurity model rather than the alternatives, C(n)|Z| increases with n. The experimentally observed dependence, however, still deviates from the theoretically calculated results within the charged-impurity model plotted in Figure 4b for various d and Z (assuming d and Z are invariant with n). The difference in C(n)|Z| depending upon Vg,ad indicates that the electronic polarity of graphene varies the adsorption states of the oxygen molecules, leading to the variation of d and Z.
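Reading C(n)|Z| off the slope of the µ⁻¹ vs nox line, as done for Figure 4, amounts to a single linear regression; a sketch with invented data points (chosen only to match the quoted µ⁻¹ ≈ 28 V s m⁻² intercept scale) follows.

```python
import numpy as np

# Illustrative points obeying 1/mu = 1/mu0 + nox/(C(n)|Z|), cf. Eq. (1)
# with nox = -Z*Nox; these are not the measured data.
n_ox = np.array([0.0, 1.0e12, 2.5e12, 5.0e12])    # cm^-2
inv_mu = np.array([28.0, 36.0, 48.0, 68.0])       # V s m^-2

slope, inv_mu0 = np.polyfit(n_ox, inv_mu, 1)      # [slope, intercept]
print(f"1/mu0    = {inv_mu0:.1f} V s m^-2")
print(f"C(n)|Z|  = {1.0 / slope:.2e} cm^-2 per (V s m^-2)")  # inverse slope
```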
When a positive (negative) Vg,ad is applied, negative (positive) carriers are electrically induced on graphene, which may modify the interaction between graphene and the adsorbed oxygen molecules with their negative charge, e.g., the Coulomb interaction and the overlap of the orbitals. Eventually, the stable adsorption state is varied by Vg,ad, leading to the difference in mobility. Besides, let us recall that the conductivity measurement process is set apart from the O2 adsorption process and that a constant Vg,ad is not applied when the mobility is measured (Figure 1a). Accordingly, whereas the stable adsorption state of oxygen during the conductivity measurement may differ from that during adsorption, the adsorbed oxygen molecules are kept in the state established during adsorption, and the mobility varying with Vg,ad is actually observed. This indicates that an energetic barrier exists for charge redistribution between graphene and the adsorbed oxygen molecules (shown below), and once the adsorption is accomplished, the charge Ze on each adsorbed oxygen molecule will not immediately change just after switching on/off the gate voltage. One possible reason for the deviation between the experimental results and the theoretical curve is that d varies along with the change in n (or with sweeping Vg). Yet actually, since modifying d by several angstroms results in only a small change in C(n)|Z|, as shown in Figure 4b, it is necessary to investigate the behavior of the adsorbed oxygen molecules in the gate electric field further in a future study. It is controversial what kind of oxygen species actually causes the hole doping of graphene. Because the electron affinity of O2 (0.44 eV [44]) is much lower than the work function of graphene (4.6 eV [45]), direct charge transfer between them seems unfavorable. Instead, by analogy with the charge doping of the diamond surface [46], there is a widely accepted [28,33,47] hypothesis that the hole doping proceeds through an electrochemical reaction [46] such as: O2 + 2H2O + 4e⁻ = 4OH⁻, by which the charge transfer is favorable due to the lowered free energy change ∆G = −0.7 eV [28] on the condition that the oxygen pressure is 1 atm and pH = 7. This electrochemical reaction needs the aid of water, which is mostly eliminated in our experiment by annealing (we observed no hysteresis in the σ vs Vg curve, that is, there are few charge traps of the kind often attributed to residual moisture on graphene or its substrate). Yet for graphene deposited on the hydrophilic SiO2 substrate [32], it is possible that a small amount of residual water molecules (more than the chemical equivalent of O2) are trapped on the SiO2 surface or in voids, and these cannot be as easily removed by the vacuum annealing at 200 °C as those on the graphene surface. We suggest that the electrochemical mechanism is plausible also in our case, yet the adsorbed molecule could be a chemical species other than OH⁻, the charge of which may be dependent on Vg,ad.

FIG. 5: Schematic energy diagrams of the kinetics of O2 adsorption (H kinetics). (a) The path for electron transfer in this model is shown by the blue dotted arrow; electrons in BLG (at the electrochemical potential ζG) are transferred to O2 molecules via the transition state (the circled T at the level of ζTS), giving the adsorbed oxygen species (the circled A at the level of ζads). The activation energy, the free energy change, and the level of the CNP are denoted by ‡E, ∆G and ζCNP, respectively.
The Fermi level is defined by εF = ζG − ζCNP. As a demonstration, the case for Vg,ad > 0 is presented. (b) Temporal change in the activation energy and the Fermi energy due to the adsorption of oxygen molecules on BLG negatively doped by the positive gate voltage (as in the case of panel a). The left panel represents the case before oxygen adsorption (t = 0), and the right panel represents oxygen exposure for the time t. Here the energy is measured from the CNP. The red and the blue tick marks denote the level of the transition state and the Fermi level, respectively. Oxygen adsorption lowers the Fermi level, accompanied by the increase in hole doping nox (equal to the area of the grayed part, ∫ from εF(t) to εF(0) of D(εF) dεF), and the activation energy increases according to Eq. (2).

In light of the discussion above, the electrochemical description [33,48] is expected to be applicable to the observed adsorption kinetics of oxygen on BLG. Here we premise that the molecular adsorption is determined by the electrochemical potential of graphene, and consider the charge transfer kinetics in an approach based on Butler-Volmer theory [49,50]. The model is schematically depicted in Figure 5. The probability of the adsorption reaction is determined by the electrochemical potential of graphene (ζG) and that in the equilibrium condition of the oxygen-chemisorption reaction (ζads). When ∆G = ζads − ζG < 0, the electrons favorably transfer from graphene to the adsorbed oxygen (denoted by "A" in Figure 5a), and the oxygen-adsorption reaction proceeds. For charge transfer, the electrons should go through some energy barrier; we assume that electrons tunnel from BLG to the O2 molecules via a single transition state (denoted by "T"), whose electrochemical potential is ζTS. The difference ‡E = ζTS − ζG corresponds to the activation energy of the oxygen-chemisorption reaction, which determines the frequency of the electron transfer. Whereas ζG is dependent on the Fermi level εF as ζG = εF + ζCNP (ζCNP is the electrochemical potential of the CNP), we envisage that ζTS (or ‡E) is a function of εF as well. In the framework of the Butler-Volmer theory, we obtain the dependence

‡E(εF) = ‡E(εF = 0) − α εF, (2)

where α (> 0) is a constant related to the "transfer coefficient" in the Butler-Volmer theory that associates the activation energy with the electrochemical potential (not the Fermi energy); thus herein we call α the "pseudo transfer coefficient" (see the Supporting Information for details of the derivation). That is, we make the assumption that the activation energy scales linearly with the Fermi energy. Further assuming that the molecular adsorption rate dNox/dt is controlled by the electron transfer process and is not strongly affected by other contributions such as molecular diffusion [14], it is given by

dNox/dt = χ D(εF + ‡E(εF)) f(εF + ‡E(εF); εF), (3)

where D(ε) is the density of states (DOS) of BLG (ε is the energy measured from ζCNP) and f(ε; εF) = [1 + exp((ε − εF)/kBT)]⁻¹ is the Fermi-Dirac distribution function. The coefficient χ does not depend on εF (if the distance between the adsorbing molecules and BLG varied depending on εF or Vg,ad, the tunneling frequency would be affected, so that χ might depend on them as well; yet herein we ignore such an effect for simplicity). The right side of Eq. (3) represents the tunneling rate of electrons from the graphene to oxygen at the energy level of the transition state, ε = εF + ‡E(εF) = ζTS − ζCNP. According to Eq. (2) and Eq.
(3), the molecular adsorption rate is dependent on the Fermi level of graphene. Using them, we can explain both the temporal evolution of the doping rate and its dependence on the gate voltage Vg,ad. Let us discuss the temporal change in the doping rate. The Fermi level of graphene is lowered with the increase of the adsorbed oxygen molecules, because positive charge is induced on graphene by the charge Ze they possess. Recalling that nox = −ZNox, the doping rate is given by dnox/dt = −Z dNox/dt. Furthermore, since ‡E ≫ kBT is fulfilled, as shown later, we also approximate f(εF + ‡E(εF); εF) ≃ exp(−‡E(εF)/kBT). Using the relation dnox = −D(εF)dεF, we acquire a formula describing the temporal change of the Fermi level:

−D(εF) dεF/dt = −Z χ D(εF + ‡E(εF)) exp(−‡E(εF)/kBT), (4)

where εF(t) = εF(t, Vg,ad), expressing that the Fermi level is a function of the exposure time t and the gate voltage Vg,ad, and εF(0) = εF(0, Vg,ad) is the Fermi level at t = 0. We have specifically defined two constants, the pseudo transfer coefficient αte (the subscript "te" abbreviates "temporal evolution") and the rate constant p, which decays exponentially with the initial activation energy ‡E(εF(0)) (Eq. (5)). The right side of Eq. (4) represents the product of the charge transfer frequency and the amount of charge per adsorbed molecule, whereas the left side gives the resultant amount of the doped charge. Because BLG (as well as single layer graphene) has a low DOS around the CNP compared to a metal, a small amount of carrier doping results in a large shift in the Fermi level, which effectively controls the kinetics. Thus the adsorption kinetics is well described by Eq. (4), the equation focusing on the Fermi level. When we envisage that BLG is approximately described by the two-dimensional parabolic dispersion of free electrons, the DOS becomes constant, D(ε) ≡ DP = γ⊥/[π(ℏvF)²], where vF = (√3/2)aγ0/ℏ is the Fermi velocity, a = 2.46 Å is the in-plane lattice constant, and γ0 = 3.16 eV and γ⊥ ≈ 0.4 eV [51,52] are the intrasheet and intersheet transfer integrals, respectively. In this case (hereafter labeled as P kinetics), Eq. (4) is readily integrated, giving

nox(t) = (DP kBT/αte) ln(1 + pt), (6)

which is equivalent to the integrated form of the Elovich equation [53,54], the empirical equation that is widely applicable to chemisorption onto semiconductors. When the hyperbolic DOS of BLG [41] is reflected in Eq. (4), a more accurate but more complicated expression for εF(t) is acquired (denoted as H kinetics), given by Eq. (7), in which appears S[εF(0), εF(t)], a function that depends on the DOS at the Fermi level and at the transition state level (derived in the Supporting Information). We performed curve fitting of the experimental results for nox(t) with Eq. (6) and Eq. (7) for P kinetics and H kinetics, respectively. The difference in Vg,ad between runs is simulated by the dependence of εF(0) on Vg,ad, first without considering the gap-opening effect [55,56]; that is, we calculate εF(0) using the relation that the carrier density cg(Vg,ad − V0CNP)/e doped on BLG by applying Vg,ad is equal to ∫ from 0 to εF(0) of D(ε)dε. Irrespective of the kinetic model, all the theoretical curves are well fitted to the experimental results except those for run 4 in the range above 10³ min. Eq. (4) is in the first place invalid in the long-time regime in which the adsorption rate is almost as low as the desorption rate, since the present treatment includes no contribution from desorption. The deviation is, however, contrary to this expectation; desorption should suppress the evolution of the hole doping, yet enhanced doping was actually observed.
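The P-kinetics fitting described above reduces to a two-parameter nonlinear least-squares problem; a sketch using scipy's curve_fit on synthetic data follows. The prefactor A stands for DP kBT/αte, and the data points are stand-ins rather than the measured series.

```python
import numpy as np
from scipy.optimize import curve_fit

def elovich(t, A, p):
    """Integrated P kinetics: nox(t) = A ln(1 + p t), with A = DP*kB*T/alpha_te."""
    return A * np.log1p(p * t)

t = np.array([0.5, 1, 4, 10, 40, 100, 400, 1000])   # exposure time, minutes
# Synthetic nox(t) with 3% multiplicative noise (seeded for reproducibility).
noise = 1 + 0.03 * np.random.default_rng(0).standard_normal(t.size)
n_ox = elovich(t, A=1.1e12, p=2.9) * noise

(A_fit, p_fit), _ = curve_fit(elovich, t, n_ox, p0=(1e12, 1.0))
print(f"A = {A_fit:.2e} cm^-2, p = {p_fit:.2f} min^-1")
```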
Thus we suspect that it is due to the long-time-scale chemisorption of oxygen onto graphene that is ubiquitously observed in carbon materials [57,58]. Note that offsets of several volts are added to Vshift (corresponding to a doping density of 2 × 10¹¹ cm⁻²) to improve the fitting in the time range of t ≤ 10⁰ min. This small offset corresponds to a fast reaction that finishes at the very initial stage of adsorption, e.g., due to the reactive chemisorption of O2 at defect or edge sites of graphene [59,60]. Figure 6 shows the profiles of the fitting results. Practically, the fitting parameters are the following two: αte and p; but in order to derive the initial activation energy before the oxygen adsorption, ‡E(εF(0)), from p based on Eq. (6), we briefly assume that the charge/molecule ratio is independent of the gate voltage and determine that Zχγ⊥ = 1 × 10⁵ eV² min⁻¹ from the pre-exponential factor in the literature [33], coxνκel ∼ 10¹⁷ cm⁻² s⁻¹ (coxνκel in the literature corresponds to χγ⊥/π(ℏvF)² in this paper). Note that if we assume a value other than Zχγ⊥ = 1 × 10⁵ eV² min⁻¹, it results in a uniform shift of the calculated ‡E(εF(0)). In addition, the charge/molecule ratio Z of the adsorbed molecules is likely dependent on Vg,ad, judging from the discussion about the mobility, but this shifts ‡E(εF(0)) only by ∼ kBT = 0.026 eV. On the one hand, αte (representing the temporal change of the activation energy) exhibits a significant deviation between the H kinetics and the P kinetics (Figure 6a), i.e., it depends on the treatment of the DOS of BLG. This deviation is understood as follows: H kinetics reflects the DOS of BLG, which is a monotonically increasing function of |εF| having a minimum (D = DP) at the CNP (Figure 6c), yet P kinetics does not. Since the downward shift of the Fermi level upon the oxygen adsorption depends on the DOS at the Fermi level, the range of the change in the Fermi level differs between the two kinetics models (Figure 6c). Thus αte, related to the Fermi level by Eq. (2), is calculated differently. This result is contrasted with conventional electrochemical reactions on metal electrodes, in which the kinetics is not significantly affected by the DOS of the electrodes; the contrast is just owing to the low DOS of BLG in comparison to metals. On the other hand, we found that ‡E(εF(0)) is a decreasing function of εF(0) = εF(0, Vg,ad) (shown in Figure 6b, where we plot ‡E(εF(0)) with respect to εF(0) instead of Vg,ad). From Eq. (2), the slope of the plots in Figure 6b corresponds to the pseudo transfer coefficient α, and we found that αgf = 0.36 and 0.42 for P kinetics and H kinetics, respectively (the subscript "gf" abbreviates "gate electric field"). Herein we distinguish αgf from αte; differing from αte, which is calculated based on the temporal Fermi level shift by oxygen adsorption (thus αte inevitably includes the oxygen adsorption effect), αgf is acquired by tuning the Fermi level electrically at t = 0 before oxygen adsorption. We found that αgf is much smaller than αte (≥ 1), being rather near 0.5, a typical transfer coefficient for simple redox reactions [48,49] (Figure 6a). Specifically, the oxygen adsorption effect included only in αte but not in αgf is attributed, e.g., to the electric dipole layer [61] formed between graphene and the adsorbed molecules and to the Coulomb interaction between the adsorbed oxygen molecules.
We expect that these effects also raise the activation energy roughly in proportion to the number of molecules Nox (or the doping density nox), and thus we have the expression for the additional adsorption effect: d‡E = ξmol dnox = −ξmol D(εF)dεF (ξmol: the proportionality coefficient; herein we use the relation dnox = −D(εF)dεF again). Then we acquire αte ≃ αgf + ξmol⟨D(εF)⟩ (⟨D(εF)⟩ denotes the average DOS over the range of the Fermi level for each run), which indicates that αte is large when the DOS is large, i.e., at levels far away from the CNP (see Figure 6c). Indeed, as shown in Figure 6a, αte for H kinetics shows a V-shaped dependence on Vg,ad with the minimum at Vg,ad = +40 V, which gives the smallest average DOS. Though the kinetics of the molecular adsorption is affected by various effects as mentioned above, we represent them by the parameter αte and succeed in accounting for the observed kinetics in a facile way. Finally, let us account for the power-law dependence dnox/dt ∝ t⁻ᵘ shown in Figure 3. It is helpful to look at the simpler P kinetics for the assessment of dnox/dt; from Eq. (6), we obtain dnox/dt ∝ p/(1 + pt). Approximately we have dnox/dt ∝ t⁻ᵘ with u ≃ [1 + 1/(p t̄)]⁻¹ ≤ 1 (the time t̄ is the center of the expansion of ln(dnox/dt) in terms of ln t, and this is a good approximation if p t̄ ≫ 1, or otherwise if p t̄ ≃ 1 in the time range such that t = [10⁻¹ t̄, 10 t̄]). Since p is intensively dependent on Vg,ad (recall that (i) p decays exponentially with ‡E(εF(0)), as represented in Eq. (5); (ii) ‡E(εF(0)) linearly decreases with respect to εF(0) with the slope −αgf, as shown in Figure 6c; (iii) εF(0) is an increasing function of Vg,ad; we acquired p = 19.6, 2.9, 0.98 and 0.32 for Vg,ad = +80, +40, 0, and −50 V, respectively, by fitting within the P kinetics model), we find that u is almost unity for positively high Vg,ad and tends to be smaller for negatively high Vg,ad within the time range experimentally scoped, which is consistent with the observed behavior. When the electrochemical mechanism governs the kinetics of the oxygen adsorption, the activation energy of the charge transfer continually increases as the O2 exposure time increases. This effect leads to the non-Langmuirian kinetics of the oxygen adsorption and the power-law decrease of dnox/dt, even though neither the desorption process nor the saturation limit of adsorption is taken into consideration. In BLG, it is known that a band gap opens due to an energy difference between the two layers [9]. The gate electric field as well as the adsorbed oxygen may produce such a strong energy difference that the eventual band gap should affect the time evolution of molecular adsorption. The band gap opening effect is expected to show up most prominently when the Fermi level goes across the CNP (at the points shown by arrows in Figure 2), yet we cannot find such behavior clearly. We guess it is partly because most of the adsorbed oxygen molecules exist at the interface between graphene and the gate dielectric SiO2.
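The relation u ≃ [1 + 1/(p t̄)]⁻¹ can be checked directly against the fitted p values quoted above; the short sketch below does so for one representative choice of t̄ (whose value is an assumption), reproducing the qualitative trend of u versus Vg,ad rather than the exact fitted exponents.

```python
# Fitted P-kinetics rate constants from the text (units: min^-1).
p_fits = {"+80 V": 19.6, "+40 V": 2.9, "0 V": 0.98, "-50 V": 0.32}
t_bar = 30.0   # min: an assumed representative center of the measured window

for vgad, p in p_fits.items():
    u = 1.0 / (1.0 + 1.0 / (p * t_bar))   # u -> 1 when p*t_bar >> 1
    print(f"Vg,ad = {vgad:>6}: u ~ {u:.2f}")
```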
For the band gap opening effect to appear, it is necessary that the gate electric field and the molecular field enhance each other when the Fermi level is near the CNP (i.e., when the charge induced by the gate electric field and that by the adsorbed molecules, including unintentional residual impurities on the SiO2 substrate, are balanced); yet this is possible only when the molecules mainly adsorb on the top surface of BLG, and not for the molecules adsorbed at the interface (Figure S2, Supporting Information). Or it may be partly because the disordered potential due to the impurities in the substrate fluctuates the energy level around which the band gap exists [62], eventually blurring the band gap opening effects and the chemical reactivity [63] of BLG. Details about the band-gap opening effects are discussed in the Supporting Information. In summary, we investigated the weak chemisorption of O2 molecules on bilayer graphene by measuring its transport properties. The hole doping due to O2 chemisorption is remarkably dependent on the gate voltage, and the amount of doped carriers increases with O2 exposure time, at a rate in accordance with ∝ t⁻ᵘ (u ≤ 1) rather than with conventional Langmuirian kinetics. We conclude from these observations that an electrochemical reaction governs the O2 chemisorption process, in which the rate of the chemisorption is determined by the Fermi level of graphene, and indeed we succeed in accounting for the observed kinetics by the analysis based on the Butler-Volmer theory. We also found that the chemisorbed molecules decrease the mobility of graphene, and interestingly, the mobility change is dependent on the gate voltage applied during the adsorption, indicating that the adsorption state, e.g., the transferred charge or the distance between a molecule and graphene, can be modified electrically. Graphene, offering a continuously tunable platform for the study of chemisorption, realizes the electrical control of adsorption by the gate electric field, a novel and versatile method with which one could explore extensively a wider variety of host-guest interactions between graphene and foreign molecules. Supporting Information. Additional descriptions of (i) the experimental method, (ii) the electrochemistry-based kinetics model and the mathematical derivation for it, and (iii) an expanded discussion of the gap-opening effects.
Grand challenges and emergent modes of convergence science

To address complex problems, scholars are increasingly faced with challenges of integrating diverse domains. We analyzed the evolution of this convergence paradigm in the ecosystem of brain science, a research frontier that provides a contemporary testbed for evaluating two modes of cross-domain integration: (a) cross-disciplinary collaboration among experts from academic departments associated with disparate disciplines; and (b) cross-topic knowledge recombination across distinct subject areas. We show that research involving both modes features a 16% citation premium relative to a mono-domain baseline. We further show that the cross-disciplinary mode is essential for integrating across large epistemic distances. Yet we find research utilizing cross-topic exploration alone, a convergence shortcut, to be growing in prevalence at roughly 3% per year, significantly outpacing the more essential cross-disciplinary convergence mode. By measuring shifts in the prevalence and impact of different convergence modes in the 5-year intervals up to and after 2013, we find that shortcut patterns may relate to competitive pressures associated with Human Brain funding initiatives launched that year. Without policy adjustments, flagship funding programs may unintentionally incentivize suboptimal integration patterns, thereby undercutting convergence science's potential in tackling grand challenges.

The history of scientific development is characterized by a pattern of convergence-divergence cycles (Roco et al 2013). In convergence, originally distinct disciplines synergistically interact to address complex problems and accelerate breakthrough discovery (National Research Council 2014). In divergence, in addition to fragmentation resulting from conflicting social forces (Balietti et al 2015), spin-offs occur as new techniques, tools and applications spawn. The evolving fusion of multi-domain expertise during the present convergence cycle carries significant intellectual and organizational challenges (Bromham et al 2016; Fealing et al 2011; National Research Council 2005; Pavlidis et al 2014). The core issue is that contemporary convergence takes place in the context of team science (Milojevic 2014; Wuchty et al 2007). Accordingly, collaboration across distinct academic cultures and units faces behavioral (Van Rijnsoever & Hessels 2011) and institutional barriers (National Research Council 2014). Two early successful examples of convergence are worth mentioning to draw a comparative baseline. First, the Manhattan Project (MP), where physicists, chemists, and engineers successfully worked in the 1940s to control nuclear fission and produce the first atomic bomb, under a tightly run government program (Hughes & Hughes 2003). A half-century later (1990s-2000s), the Human Genome Project (HGP) forged a multi-institutional bond integrating biologists and computer scientists, under an organizational design known as the consortium science model, whereby teams of teams organize around a well-posed central grand challenge (Helbing 2012), with a common goal to share benefits equitably within and beyond institutional boundaries (Petersen et al 2018). In 10 short years, the HGP led to the mapping and identification of the human genetic code, ushering civilization into the genomics era.
Brain science is presently supported by major funding programs that span the world over (Grillner et al 2016). In late 2013, the United States launched the BRAIN Initiative® (Brain Research through Advancing Innovative Neurotechnologies), a public-private effort aimed at developing new experimental tools that will unlock the inner workings of brain circuits (Jorgenson et al 2015). At the same time, the European Union launched the Human Brain Project (HBP), a 10-year funding program based on exascale computing approaches, which aims to build a collaborative infrastructure for advancing knowledge in the fields of neuroscience, brain medicine, and computing (Amunts et al 2016). In 2014, Japan launched Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS), a program to develop innovative technologies for elucidating primate neural circuit functions (Okano et al 2015). China followed in 2016 with the China Brain Project (CBP), a 15-year program targeting the neural basis of human cognition (Poo et al 2016). Canada (Jabalpurwala 2016), South Korea (Jeong et al 2016), and Australia (Committee et al 2016) followed suit, launching their own brain programs in the late 2010s.

By nature and historical precedent, convergence tends to operate on the frontier of science. In the 2010s, brain science was declared the new research frontier (Quaglio et al 2017), promising health and behavioral applications (Eyre et al 2017). Intensification of brain research has been taking place against the backdrop of an increasingly globalized, interconnected and online scientific commons. This stands in sharp contrast to the nationally unipolar and offline backdrop of the MP and even the HGP. Moreover, the brain funding programs were designed to act as behavioral incentives in a scientific marketplace, aimed at bringing together diverse scholars and ideas. However, despite being oriented around the compelling structure-function brain problem, there were few guidelines on how to configure scholarly expertise to address the brain challenge. As such, these characteristics render brain research a "live experiment" in the international evolution of the convergence paradigm.

Accordingly, we apply data-driven methods to reconstruct the brain science ecosystem as a way to capture the contemporary "pulse" of convergence, explored through a progressive series of research questions regarding its prevalence, anatomy and scientific impact. Given the pervasive funding championing the human brain science (HBS) challenge, we further analyze how the trajectory of HBS convergence has been impacted by the ramp-up of flagship funding initiatives launched around the world. While previous work explored the role of cross-disciplinary collaboration in the Human Genome Project (Petersen et al 2018), here we extend that framework to differentiate between (a) the disciplinary diversity of the research team and (b) the topical diversity of their research - two alternative means of cross-domain integration. We refer to the former as disciplinary diversity and to the latter as topical diversity. We leverage existing taxonomies - in the case of disciplines, using the Classification of Instructional Programs (CIP) system developed by the U.S. National Center for Education Statistics; and for topics, using the Medical Subject Headings (MeSH) ontology developed by the U.S.
National Library of Medicine - to distinguish mono-domain versus cross-domain activity. Accordingly, we classify HBS research according to four integration types defined by a mono-/cross-{discipline × topic} domain decomposition.

In a highly competitive and open science system with multiple degrees of freedom, our motivating hypothesis is that more than one operational cross-domain integration mode is likely to emerge. With this in mind, we identify five research questions (RQ), addressed in series, one per figure. The first (RQ1) regards how to define convergence, which we address by developing a typological framework, one that is generalizable to other frontiers of biomedical science and is relevant to the evaluation of multiple billion-dollar HBS flagship projects around the world. The second (RQ2) regards the status and impact of brain science convergence: Have HBS interfaces developed to the point of sustaining fruitful cross-disciplinary knowledge exchange? Does the increasing prevalence of teams adopting convergent approaches correlate with higher scientific impact? RQ3 addresses whether convergence is evenly distributed across HBS subdomains, and which empirical combinations of distinct subject areas (knowledge) and disciplinary expertise (people) are overrepresented in convergent research. RQ4 follows by asking whether convergence is evenly distributed over time and geographic region. And finally, RQ5: do the propensity to pursue convergence science and the citation impact of convergence science depend on the convergence mode? To address this question, we implement hierarchical regression models that differentiate between three convergence modes: research involving cross-disciplinary collaboration, cross-subject-area exploration, or both. Given the lucrative nature of flagship funding initiatives, we hypothesize that the ramp-up of HBS flagships correlates with shifts in the prevalence and relative impact of research adopting these different convergence modes.

Our results identify timely and relevant science policy implications. Given the contemporary emphasis on accelerating breakthrough discovery (Helbing 2012) by way of strategic research team configurations (Börner et al 2010), convergence science originators called for cross-disciplinary approaches integrating distant disciplines (National Research Council 2014). Instead, our analysis reveals that HBS teams recently tend to integrate diverse topics without necessarily integrating the corresponding disciplinary expertise - an approach we identify as a convergence shortcut.
Efficient long-range exploration facilitated by multidisciplinary teams is a defining value proposition of convergence science (National Research Council 2014), and provides a testable mechanism underlying the increased likelihood of large team science producing high-impact research (Wuchty et al 2007). Hence, the emergence and densification of cross-domain interfaces are likely to increase the potential for breakthrough discovery by catalyzing recombinant innovation (Fleming 2001), which effectively expands the solution space accessible to problem-solvers. It then follows that certain configurations are likely to amplify the effectiveness of recombinant innovation. Adapting a triple-helix model of medical innovation (Petersen et al 2016), recombinant innovation manifests from integrating expertise around the three dimensions of supply, demand and technological capabilities: (i) the fundamental biology domain, which supplies a theoretical understanding of the anatomical structure-function relation; (ii) the health domain, which identifies demand for effective science-based solutions; and (iii) the techno-informatics domain, which develops scalable products, processes and services to facilitate matching supply from (i) with demand from (ii) (Yang et al 2021).

In order to overcome the challenges of selecting new strategies from the vast number of possible combinations, prior research finds that innovators are more likely to succeed by exploiting their own local expertise (Fleming 2001) rather than individually exploring distant configurations by way of internal expansive learning (Engeström & Sannino 2010). Extending this argument, exploration at uncharted multidisciplinary interfaces is likely to be more successful when integrating knowledge across a team of experts from different domains, thereby hedging against the recombinant uncertainty underlying the exploration process (Fleming 2004). A complementary argument for convergence derives from the advantage of diversity for harnessing collective intelligence and identifying successful hybrid strategies (Page 2008). Recent work provides additional empirical support for the competitive advantage of diversity, using cross-border mobility (Petersen 2018) as an instrument for social capital disruption to identify the positive role of research topic and collaborator diversity.
Data collection and notation

Figure 1 shows the multiple sources combined in our study, which integrates publication and author data from Scopus, PubMed, and the Scholar Plot web app (Majeti et al 2020) (see Supplementary Information (SI) Appendix S1 for a detailed description). In total, our data sample spans 1945-2018 and consists of 655,386 publications derived from 9,121 distinct Scopus Author profiles, to which we apply the following variable definitions and subscript conventions to capture both article- and scholar-level information. At the article level, subscript $p$ indicates publication-level information such as the publication year, $y_p$; the number of coauthors, $k_p$; and the number of keywords, $w_p$. Regarding the temporal dimension, a superscript $>$ (respectively, $<$) indicates data belonging to the 5-year "post" period 2014-2018 (respectively, the 5-year "pre" period 2009-2013), while $N(t)$ represents the total number of articles published in year $t$. Regarding proxies for scientific impact, we obtained the number of citations $c_{p,t}$ from Scopus, counted through late 2019. Since nominal citation counts suffer from systematic temporal bias, we use a normalized citation measure, denoted by $z_p$ (see Methods - Normalization of Citation Impact). Regarding author-level information, we use the index $a$ - e.g., we denote the academic age, measured in years since a scholar's first publication, by $\tau_{a,p}$.

To address RQ1 we classified research according to three category systems indicative of topical, disciplinary and regional clusters. The first category system captures research topic clusters grouped into Subject Areas (SA); counts for each article are represented by a vector with 6 elements, $\vec{SA}_p$. The variable $N_{SA,p}$ counts the total number of SA categories present in a given article, with minimum value 1 and maximum value 6. The second taxonomy identifies disciplinary clusters determined by author departmental affiliation, which we categorized according to Classification of Instructional Programs (CIP) codes. Article-level CIP category counts are represented by $\vec{CIP}_p$, with 9 elements pertaining to the following categories: (1) Neurosciences, (2) Biology, (3) Psychology, (4) Biotech. & Genetics, (5) Medical Specialty, (6) Health Sciences, (7) Pathology & Pharmacology, (8) Engineering & Informatics, and (9) Chemistry & Physics & Math. The variable $N_{CIP,p}$ counts the total number of CIP categories present in a given article, with minimum value 1 and maximum value 9; Methods and SI Appendix S1 offer more details.

The third taxonomy captures the broad regional scope of each research article team, determined by each Scopus author's affiliation location, and is represented by the vector $\vec{R}_p$, which has 4 elements representing North America, Europe, Australasia, and the rest of the World. See Fig. S1 for the composition of SA and CIP clusters, and SI Appendix S1 for additional description of how these classification systems are constructed. Figure S2 (Fig. S3) shows the frequency of each SA (CIP) category and the pairwise frequency of all {SA, SA} ({CIP, CIP}) combinations over the 10-year period centered on 2014, along with their relative changes after 2014; see SI Appendix S2-S3 for discussion of the relevant changes in SA and CIP categories after 2014. We represent the collection of article features by $\vec{F}_p$.
As indicated in Fig. 1, based upon the distribution of types tabulated as counts across vector elements, an article is either cross-domain, representing a diverse mixture of types denoted by $X$, or mono-domain, denoted by $M$. We use a generic operator notation, $O(\vec{F}_p) \in \{X, M\}$, to specify how articles are classified as $X$ or $M$. The objective criterion of the feature operator $O$ is specified by its subscript: for example, $O_{SA}(\vec{F}_p)$ yields one of two values, $X_{SA}$ or $M$; similarly, $O_{CIP}(\vec{F}_p) = X_{CIP}$ or $M$. Note that all scholars map onto a single CIP; hence, solo-authored research articles are by definition classified by $O_{CIP}$ as $M$. While we acknowledge that it is possible for a scholar to have significant expertise in two or more domains, we do not account for this duplicity, as it is likely to occur at the margins; hence, the home department CIP represents the scholar's principal domain of expertise. We also classify articles featuring both $X_{SA}$ and $X_{CIP}$ as $O_{SA\&CIP}(\vec{F}_p) = X_{SA\&CIP}$ (and otherwise $M$).

To complement these categorical measures, we also developed a scalar measure of an article's cross-domain diversity (see Materials & Methods - Measuring cross-domain diversity for additional details). By way of example, consider the vector $\vec{SA}_p$ (or $\vec{CIP}_p$), which tallies the SA (or CIP) counts for a given article $p$ published in year $t$. We apply the outer tensor product $\vec{SA}_p \otimes \vec{SA}_p$ (or $\vec{CIP}_p \otimes \vec{CIP}_p$) to represent all pairwise co-occurrences in a weighted matrix $D_p(\vec{v}_p)$ (where $\vec{v}_p$ represents a generic category vector; see SI Appendix S4 for examples of the outer tensor product). The sum of elements in this co-occurrence matrix is normalized to unity so that each $D_p(\vec{v}_p)$ contributes equally to averages computed across all articles from a given year or period. Since the off-diagonal elements represent cross-domain combinations, their relative weight, given by $f_{D,p} = 1 - \mathrm{Tr}(D_p) \in [0, 1)$, is a straightforward Blau-like measure of variation and disparity (Harrison & Klein 2007).
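To make the mono-/cross-domain operator concrete, the following is a minimal Python sketch (our illustration with hypothetical count vectors, not the authors' code) of $O$ applied to an article's SA and CIP counts:

def classify(counts):
    """O operator: 'X' if two or more categories occur, else 'M'."""
    return "X" if sum(1 for c in counts if c > 0) >= 2 else "M"

# Hypothetical article: MeSH in SA 1 and SA 4; all coauthors from a
# single department (CIP 5).
SA_p  = [2, 0, 0, 1, 0, 0]                  # 6 SA clusters
CIP_p = [0, 0, 0, 0, 3, 0, 0, 0, 0]         # 9 CIP clusters
O_SA, O_CIP = classify(SA_p), classify(CIP_p)           # 'X', 'M'
O_SA_CIP = "X" if (O_SA, O_CIP) == ("X", "X") else "M"  # 'M'

Note that, consistent with the definitions above, $X_{SA\&CIP}$ requires both vectors to be cross-domain.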
Descriptive Analysis

Increasing prevalence of cross-domain science. With the continuing shift towards large team science (Milojevic 2014; Pavlidis et al 2014; Petersen et al 2014; Wuchty et al 2007), one might expect a similar shift in the multiplicity of domains spanned by modern research teams - but to what degree? Figure 2(A) addresses RQ2 by showing the frequencies of mono-domain ($M$) research articles versus cross-domain articles ($X$) in our HBS sample. Articles were separated into above- and below-average citation impact ($z$) for each publication-year cohort ($t$), and within each of these two subsets we calculated the fraction $f_{\#}(t|z)$ of articles containing combinations across $\# = 1, 2, 3$ and $4$ categories. The fraction of mono-domain articles is trending downward, which we observe for both research topics (SA) and authors' disciplinary affiliations (CIP). The decline is much steeper for SA than for CIP. Correspondingly, cross-domain articles have become increasingly prevalent, in particular for SA. For both SA and CIP, the two-category mixtures dominate the three- and four-category mixtures in frequency, in sequence. Accordingly, in the sections that follow we do not distinguish between cross-domain articles with different $\#$.

As a first indication of the comparative advantage associated with $X$, we observe a robust inequality $f_{\#}(t|z>0) > f_{\#}(t|z<0)$ for cross-domain research ($\# \geq 2$), meaning that a higher frequency of cross-domain combinations is observed among articles with higher impact. Contrariwise, in the case of mono-domain research the opposite phenomenon occurs: $f_{1}(t|z>0) < f_{1}(t|z<0)$. Taking into consideration temporal trends, these robust patterns indicate a faster depletion of impactful mono-domain articles, coincident with an increased prevalence of impactful research drawing upon integrative recombinant innovation.

Recombinant innovation at the convergence nexus. Comprehensive analysis of biomedical science indicates that convergence has largely been mediated around the integration of modern techno-informatics capabilities (Yang et al 2021). Yet within any domain, in particular HBS, the question remains as to the development of a functional nexus that sustains and possibly even accelerates high-impact discovery, both by expanding the number of possible functional expertise configurations and by supporting rich cross-disciplinary exchange of new knowledge and best practices. The robust inequality $f_{\#}(t|z>0) > f_{\#}(t|z<0)$ provides support at the aggregate level, but does not lend any structural evidence.

To further address RQ2, Fig. 2(B) illustrates the composition of the HBS convergence nexus, showing integration of cross-disciplinary expertise across three broad yet distinct biomedical domains. Shown are the populations of HBS researchers by region, represented as collaboration networks compared over two non-overlapping 10-year intervals to indicate dynamics. Each node represents a researcher, colored according to three disciplinary CIP superclusters: (i) neurobiological sciences (corresponding to CIP 1-4), (ii) health sciences (CIP 5-7), and (iii) engineering & information sciences (CIP 8-9). Node locations are fixed to facilitate visual representation of network densification. Inter- and cross-regional comparison alludes to the emergence and densification of cross-domain interfaces (see also Fig. S4).

FIG. 2: Trends in cross-domain scholarship in Human Brain Science. (A) Fraction $f_{\#}(t|z)$ of articles published each year $t$ that feature a particular number ($\#$) of categories. Articles are split into an above-average citation subset ($z_p > 0$) and a below-average citation subset ($z_p < 0$). Upper panel: articles categorized by SA. Middle panel: articles categorized by CIP; subpanel shows data on a logarithmic y-axis. Lower panel: articles categorized by both SA and CIP. Distinguishing frequencies by citation group indicates higher levels of cross-domain combinations among research articles with higher scientific impact - for both SA and CIP. However, cross-domain activity levels are visibly higher for SA than for CIP, indicating higher barriers to boundary-crossing arising from mixing different scholar expertise. (B) Snapshots of the collaboration network at 10-year intervals indicating researcher population sizes by region, and the densification of convergence science at cross-disciplinary interfaces. Nodes (researchers) are sized according to the number of collaborators (link degree) within each time window.
Because the network layout is determined by the underlying structure, there is a high degree of clustering by node color, emphasizing both the relative sizes of the subpopulations, which are well-balanced across region and time, and the convergent interfaces where cross-disciplinary collaboration and knowledge exchange are likely to catalyze. As such, these communities of expertise conjure the image of a Pólya urn, whereby successful configurations reinforce the adoption of similar configurations.

The links that span disciplinary boundaries are fundamental conduits across which scientists' strategic affinity for exploration (Foster et al 2015; Rotolo & Messeni Petruzzelli 2013) is effected via cross-disciplinary collaboration that brings "together distinctive components of two or more disciplines" (Nissani 1995; Petersen et al 2018). Our analysis of cross-disciplinary collaboration indicates that the fraction of articles featuring convergent collaboration has continued to grow over the last two decades (see Fig. S4). In what follows we further distinguish between integration across neighboring (Leahey & Moody 2014) and distant domains, with the latter appropriately representing convergence (National Research Council 2005, 2014; Roco et al 2013).

Cross-domain convergence of expertise (CIP) and knowledge (SA). In the context of the bureaucratic structure-function problem, team assembly should be optimized by strategically matching scholarly expertise and research topics to address the demands of a particular challenge. Hence, with 9 different disciplinary (CIP) domains historically faced with a variety of challenges, RQ3 addresses to what degree these domains differ in terms of their composition of targeted SA. Fig. 3(A) illustrates the evolution of topical diversity within and across each CIP cluster, revealing several common patterns. First, nearly all domains show a reduction in research pertaining to structure (SA 2), with the exception of Biotechnology & Genetics, which was oriented around the structure-function problem from the outset. As such, this domain features a steady balance between SA 2-5, while being an early adopter of techno-informatics concepts and methods (SA 6). Early balance around the innovation triple-helix (Petersen et al 2016) may explain to some degree the longstanding success of the genomics revolution, as the core disciplines of biology and computing were primed for a fruitful union (Petersen et al 2018). Other HBS disciplinary clusters are also integrating techno-informatic capabilities, reflecting a widespread pattern observed across all of biomedical science (Yang et al 2021).

Which CIP-SA combinations are overrepresented in boundary-crossing HBS research? Inasmuch as mono-domain articles identify the topical boundary closely associated with individual disciplines, cross-domain articles are useful for identifying otherwise obscured boundaries that call for both $X_{CIP}$ and $X_{SA}$ in combination. We identified these novel CIP-SA relations by collecting articles that are purely mono-domain for both CIP and SA (i.e., those with $O_{CIP}(\vec{F}_p) = O_{SA}(\vec{F}_p) = M$) and a complementary non-overlapping subset of articles that are simultaneously cross-domain for both CIP and SA (i.e., $O_{SA\&CIP}(\vec{F}_p) = X_{SA\&CIP}$).
Starting with mono-domain articles, we identified the SA that are most frequently associated with each CIP category. Formally, this amounts to calculating the bipartite network between CIP and SA, denoted by $M_{CIP}M_{SA}$. These CIP-SA associations are calculated by averaging $\vec{SA}_p$ over the mono-domain articles from each CIP category, given by $\vec{SA}_{CIP}$. Figure 3(B) highlights the most prominent CIP-SA links (see SI Appendix S5 for more details). Likewise, we also calculated the bipartite network $X_{CIP}X_{SA}$ using the subset of $X_{SA\&CIP}$ articles.

To identify the cross-domain frontier, we calculated the network difference $\Delta_{XM} \equiv X_{CIP}X_{SA} - M_{CIP}M_{SA}$, and plot the links with positive values - i.e., CIP-SA links that are over-represented in $X_{CIP}X_{SA}$ relative to $M_{CIP}M_{SA}$. The results identify SA that are reached by way of cross-disciplinary teams. SA 2 (Anatomy & Organisms) and SA 3 (Phenomena & Processes), representing the structure-function problem, stand out as a potent convergence nexus accessible by teams combining disciplines 1, 2, 4 and 9. A related key insight concerns the relative increase in SA integration achieved by increased CIP diversity. Figure S5 compares the average number of SA integrated by teams with a varying number of distinct CIP, $N_{CIP,p}$. On average, mono-disciplinary teams ($N_{CIP,p} = 1$) span 2.2 SA, whereas teams with $N_{CIP,p} = 3$ span 19% more SA, confirming that cross-disciplinary configurations are functional in achieving research breadth.

Quantitative Model

Trends in cross-domain activity. To address the temporal and geographic parity associated with RQ4, we define three types of cross-domain configurations - Broad, Neighboring, and Distant - according to the particular combinations of SA and CIP categories featured by a given article.

Broad is the most generic cross-domain configuration, based upon combinations of any two or more SA (or CIP) categories, represented in our operator notation by $X_{SA}$, $X_{CIP}$, and $X_{SA\&CIP}$.

Neighboring is the $X$ configuration that captures the neuro-psychological ↔ bio-medical interface, representing articles that contain MeSH from SA 1 and also from SA 2, 3 or 4 - summarily, SA [1] × [2-4], with the analogous construction for CIP; articles featuring these combinations are denoted $X_{Neighboring,SA}$, $X_{Neighboring,CIP}$, and $X_{Neighboring,SA\&CIP}$.

Distant is the $X$ configuration that captures the neuro-psycho-medical ↔ techno-informatic interface. The specific sets of category combinations representing this configuration are SA [1-4] × [5,6] and, for CIP, [1,3,5] × [4,8]; as above, articles featuring (or not featuring) categories spanning these combinations are represented by $X_{Distant,SA}$ (with the counterfactual set indicated by $M$), $X_{Distant,CIP}$ (resp., $M$), and $X_{Distant,SA\&CIP}$ (resp., $M$). By way of example, the bottom of Figure 1 illustrates an article combining SA 1 and 4, which is thereby classified as both $X_{SA}$ and $X_{Neighboring,SA}$; and an article featuring CIP 1, 3, 5 and 8, which is thereby both $X_{CIP}$ and $X_{Distant,CIP}$.
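As an illustration of the Distant membership test (a sketch based on the set definitions above; the example vector reproduces the CIP example from the text), an article belongs to the configuration when its non-zero categories span both sides of the interface:

DISTANT_SA  = ({1, 2, 3, 4}, {5, 6})   # SA  [1-4] x [5,6]
DISTANT_CIP = ({1, 3, 5}, {4, 8})      # CIP [1,3,5] x [4,8]

def spans(counts, left, right):
    """True when the non-zero (1-indexed) categories hit both sides."""
    present = {i + 1 for i, c in enumerate(counts) if c > 0}
    return bool(present & left) and bool(present & right)

# The article from the text featuring CIP 1, 3, 5 and 8:
CIP_p = [1, 0, 1, 0, 1, 0, 0, 1, 0]
x_distant_cip = spans(CIP_p, *DISTANT_CIP)   # True -> X_Distant,CIP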
To complement these categorical variables, we also developed a Blau-like measure of cross-domain diversity, given by $f_{D,p}$ (see Methods - Measuring cross-domain diversity). Figure 4 shows the trends in mean diversity $\langle f_D(t) \rangle$ for the Broad, Neighboring, and Distant configurations. For each configuration we provide a schematic motif illustrating the combinations measured by $D_p(\vec{v}_p)$, with diagonal components representing mono-domain articles (indicated by 1 on the matrix diagonal) and upper-diagonal elements capturing cross-domain combinations (indicated by X). Comparing SA and CIP, there are higher diversity levels for SA, in addition to a prominent upward trend. In terms of CIP, Fig. 4(A) indicates a decline in Broad diversity in recent years, with North America (NA) showing higher levels than Europe (EU) and Australasia (AA); these general patterns are also evident for Neighboring diversity, see Fig. 4(B). Distant CIP diversity, shown in Fig. 4(C), indicates a recent decline for AA and NA, with NA peaking around 2009; contrariwise, EU shows a steady increase consistent with the computational framing of the Human Brain Project.

In contradistinction, all three regions show steady increases irrespective of configuration in the case of SA diversity, consistent with scholars integrating topics without integrating scholarly expertise, possibly owing to the differential costs associated with each. For both the Broad and Neighboring configurations, NA and EU show remarkably similar levels of SA diversity above AA; however, in the case of Neighboring, AA appears to be catching up quickly since 2010, see Fig. 4(D,E). In the case of Distant, all regions show a steady increase that appears to be in lockstep for the entire period. See Figs. S6-S7 and SI Appendix Text S6 for trends in SA and CIP diversity across additional configurations.

Regression model - propensity for and impact of X. To address RQ5, we constructed article-level and author-level panel data to facilitate measuring factors relating to SA and CIP diversity, and shifts related to the ramp-up of HBS flagship projects circa 2013 around the globe. We modeled two dependent variables separately. In the first model the dependent variable is the propensity for cross-domain research (indicated by $X$; depending on the focus around topics, disciplines or both, $X$ is specified by $X_{SA}$, $X_{CIP}$ or $X_{SA\&CIP}$). We use a Logit specification to model the likelihood $P(X)$. In the second model the dependent variable is the article's scientific impact, proxied by $c_p$. Building on previous efforts (Petersen 2018; Petersen et al 2018), we apply a logarithmic transform to $c_p$ that facilitates removing the time-dependent trend in the location and scale of the underlying log-normal citation distribution (Radicchi et al 2008) (see Methods - Normalization of Citation Impact). Figure S9 shows the covariation matrix between the principal variables of interest.
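As a schematic illustration of this estimation setup (a minimal sketch using synthetic data and hypothetical column names - not the authors' code or data), the Logit propensity model described next can be fit as follows:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the article-level panel: X_SA is the binary
# cross-domain outcome; y_p is publication year; k_p is team size.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "X_SA": rng.integers(0, 2, size=1000),
    "y_p":  rng.integers(1990, 2019, size=1000),
    "k_p":  rng.integers(1, 25, size=1000),
})

# log(Q_p) = beta_0 + beta_y * y_p + beta . x, with log team size as
# one of the controls in x.
fit = smf.logit("X_SA ~ y_p + np.log(k_p)", data=df).fit(disp=0)

# Annual growth rate of the odds of cross-domain research:
annual_growth_pct = 100 * (np.exp(fit.params["y_p"]) - 1)

With the real panel, the additional controls described in the text (CIP and SA indicators, region, career age) would enter the specification in the same way.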
Model A: Quantifying the propensity for X and the role of funding. As defined, $O(\vec{F}_p) = X$ or $M$ is a two-state outcome variable with complementary likelihoods, $P(X) + P(M) = 1$. Thus, we apply logistic regression to model the odds $Q \equiv P(X)/P(M)$, measuring the propensity to adopt cross-domain configurations. We then estimate the annual growth in $P(X)$ by modeling the odds as $\log(Q_p) = \beta_0 + \beta_y y_p + \vec{\beta} \cdot \vec{x}$, where $\vec{x}$ represents additional controls for confounding sources of variation, in particular the increasing $k_p$ associated with the growth of team science (Milojevic 2014; Wuchty et al 2007). See SI Appendix Text S7, in particular Eqns. (S2)-(S4), for the full model specification; and Tables S1-S3 for parameter estimates.

Summary results shown in Fig. 5(A) indicate roughly 3% annual growth in $P(X_{SA})$, consistent with the descriptive trends shown in Fig. 2. In contradistinction, growth rates for $P(X_{CIP})$ are generally smaller, indicative of the additional barriers to integrating individual expertise as opposed to just combining different research topics. In the case of $P(X_{SA\&CIP})$, the growth rate is higher for Distant, where the need for cross-disciplinary expertise cannot be short-circuited as easily as in Neighboring.

A relevant dimension of RQ5 is how HBS projects have altered the propensity for $X$. Hence, we added an indicator variable $I_{2014+}$, which takes the value 1 for articles with $y_p \geq 2014$ and 0 otherwise. Figure 5(B) indicates a significant decline in $P(X)$ for $X_{CIP}$ and $X_{SA\&CIP}$ for each configuration, on the order of -30%; this result is consistent with the recent increase in $f_1(t|z)$ visible in Fig. 2(B).

Model B: Quantifying the citation premium associated with X and funding. We model the normalized citation impact as $z_p = \alpha_a + \gamma_{X_{SA}} I_{X_{SA},p} + \gamma_{X_{CIP}} I_{X_{CIP},p} + \vec{\beta} \cdot \vec{x}$, where $\vec{x}$ represents the additional control variables and $\alpha_a$ represents an author fixed effect that accounts for unobserved time-invariant factors specific to each researcher. The primary test variables are $I_{X_{SA},p}$ and $I_{X_{CIP},p}$, two binary factor variables, with $I_{X_{SA},p} = 1$ if article $p$ is classified as $X_{SA}$ (and 0 otherwise), defined similarly for CIP. To distinguish estimates by configuration, for Neighboring we specify $I_{X_{Neighboring,SA}}$ and $I_{X_{Neighboring,CIP}}$, with similar notation for Distant. Full model estimates are shown in Tables S4-S5. Figure 5(C) summarizes the model estimates - $\gamma_{X_{SA}}$, $\gamma_{X_{CIP}}$ and $\gamma_{X_{SA\&CIP}}$ - quantifying the citation premium attributable to $X$. To translate the effect on $z_p$ into the associated citation premium in $c_p$, we calculate the percent change $100\,\Delta c_p/c_p$ associated with a shift in $I_{X,p}$ from 0 to 1.
Observing that $\sigma_t \approx \sigma = 1.24$ is approximately constant over the period 1970-2018, and due to the properties of logarithms, the citation percent change is given by $100\,\Delta c_p/c_p = 100\left[e^{\sigma \gamma_X} - 1\right]$ (see SI Appendix S7B).

Our results indicate a robust, statistically significant positive relationship between cross-disciplinarity ($X_{CIP}$) and citation impact, consistent with the effect size in a different case study of the genomics revolution (Petersen et al 2018), which supports the generalizability of our findings to other convergence frontiers. To be specific, we calculate an 8.6% citation premium for the Broad configuration ($\gamma_{X_{CIP}} = 0.07$; $p < 0.001$), meaning that the average cross-disciplinary publication is more highly cited than the average mono-disciplinary publication. We calculate a smaller 5.9% citation premium associated with $X_{SA}$ ($\gamma_{X_{SA}} = 0.05$; $p < 0.001$). Yet the effect associated with articles featuring $X_{CIP}$ and $X_{SA}$ simultaneously is considerably larger (16% citation premium; $\gamma_{X_{SA\&CIP}} = 0.13$; $p < 0.001$), suggesting an additive effect.
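To make the conversion concrete, here is a small sketch of the premium formula applied to the rounded coefficients quoted above (the quoted 5.9%, 8.6% and 16% premiums correspond to the unrounded regression estimates):

import math

SIGMA = 1.24  # sigma_t is approximately constant over 1970-2018

def citation_premium_pct(gamma, sigma=SIGMA):
    """Percent change in c_p from a 0 -> 1 shift in the X indicator."""
    return 100.0 * (math.exp(sigma * gamma) - 1.0)

for gamma in (0.05, 0.07, 0.13):   # rounded coefficients quoted above
    print(f"gamma = {gamma}: premium = {citation_premium_pct(gamma):.1f}%")
# Output: 6.4%, 9.1%, 17.5% - close to the quoted premiums, with the
# small differences attributable to coefficient rounding.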
Comparing results for the Neighboring configuration to the baseline estimates for Broad, the citation premium is relatively larger for $X_{SA}$ (11% citation premium; $\gamma_{X_{Neighboring,SA}} = 0.088$; $p < 0.001$) and roughly the same for $X_{CIP}$ and $X_{SA\&CIP}$. This result reinforces our findings regarding the convergence "shortcut" (when $X_{CIP}$ is absent), indicating that this approach is more successful when integrating domain knowledge across shorter distances, consistent with innovation theory (Fleming 2001).

The configuration most representative of convergence is Distant, which compared to Broad and Neighboring features a smaller effect size for $X_{SA\&CIP}$ (5.2% citation premium; $\gamma_{X_{Distant,SA\&CIP}} = 0.04$; $p < 0.001$). The reduction in $\gamma_{X_{Distant,SA\&CIP}}$ relative to the values for the Broad and Neighboring configurations likely reflects the challenges of bridging communication, methodological and theoretical gaps across the Distant neuro-psycho-medical ↔ techno-informatic interface. More interestingly, this configuration is distinguished by a negative $X_{SA}$ estimate, indicating that the convergence shortcut yields less impactful research than mono-domain research. Nevertheless, it is notable that for this convergent configuration there is a clear hierarchy indicating the superiority of cross-disciplinary collaboration approaches to integrating research across distant domains.

As in the article-level model, we also tested for shifts in the citation premium attributable to the advent of flagship HBS project funding using a similar difference-in-difference (DiD) approach. Figure 5(D) shows the citation premium $\gamma_{X_{SA\&CIP}}$ for articles published prior to 2014, and the difference $\delta_{X+}$ corresponding to the added effect for articles published after 2013. For Broad and Neighboring we observe $\delta_{X+} < 0$, indicating a reduced citation premium for post-2013 research. By way of example, for the Broad configuration: whereas cross-domain articles published prior to 2014 show a 19% citation premium ($\gamma_{X_{SA\&CIP}} = 0.15$; $p < 0.001$), those published after 2013 have just a 19% - 11% = 8% citation premium ($\delta_{X_{SA\&CIP}+} = -0.09$; $p < 0.001$). The reduction of the citation premium is even larger for Neighboring ($\delta_{Neighboring,X_{SA\&CIP}+} = -0.16$; $p < 0.001$). Yet for Distant we observe a different trend: research combining both $X_{SA}$ and $X_{CIP}$ simultaneously has an advantage over research with just $X_{CIP}$ or $X_{SA}$, in that order ($\delta_{Distant,X_{SA\&CIP}+} = 0.04$; $p = 0.016$; 95% CI = [0.01, 0.08]).

We briefly summarize the coefficient estimates for the other control variables. Consistent with prior research on cross-disciplinarity (Petersen et al 2018), we observe a positive relationship between team size and citation impact ($\beta_k = 0.415$; $p < 0.001$), which translates to a $\sigma\beta_k \approx 0.5\%$ increase in citations associated with a 1% increase in team size (since $k_p$ enters in log in our specification). We also observe a positive relationship for topical breadth ($\beta_w = 0.03$; $p < 0.001$), which translates to a much smaller $\sigma\beta_w \approx 0.04\%$ increase in citations associated with a 1% increase in the number of major MeSH headings. And finally, regarding the career life cycle, we observe a negative relationship with increasing career age ($\beta_\tau = -0.011$; $p < 0.001$), consistent with prior studies (Petersen et al 2018), translating to a $100\,\sigma\beta_\tau \approx -1.3\%$ decrease in $c_p$ associated with each additional career year. See Tables S4-S5 for the full set of model parameter estimates.
Behind the Numbers

Further qualitative inspection of prominent research articles in this category identifies four key convergence themes associated with past or developing breakthroughs.

Magnetic Resonance Imaging (MRI). MRI technology has been instrumental in identifying structure-function relations in brain networks, and has reshaped brain research since the 1990s. As a method that involves both sophisticated technology and core brain expertise, MRI has been a focal point for $X_{Distant,SA\&CIP}$ scholarship. For example, ref. (Van Dijk et al 2012) addresses the problem of motion, a pernicious confounding factor that can invalidate MR brain results. Hence, this research article exemplifies how a fundamental problem threatening an entire line of research acts as an attractor of distant cross-disciplinary collaborations with an all-encompassing theme, including authors from CIP 5 (medical specialists) and CIP 8 (engineers and computer scientists), while thematically spanning four topical domains: SA 2 (Anatomy & Organisms), SA 3 (Phenomena & Processes), SA 5 (Techniques & Equipment), and SA 6 (Technology & Information Science).

Genomics. Following the completion of the Human Genome Project (HGP) in the early 2000s, genomics and biotechnology methods have established a foothold in brain research. This convergent frontier made headway in solving long-standing morbidity riddles and formulating novel therapies, e.g., providing a deeper understanding of the genetic basis of developmental delay (Cooper et al 2011) and developing a treatment for glioblastoma using a recombinant poliovirus (Desjardins et al 2018). Both of these articles include authors from CIP 4 and 5; thematically, these articles cast a wide net, with the former spanning SA 1, 3, 4 and 5, while the latter covers SA 2, 4 and 5.

Robotics. In the early 2010s, neurally controlled robotic prostheses reached fruition by way of collaboration between neuroscientists (CIP 1) and biotechnologists (CIP 4). A prime example of this emerging bio-mechatronics frontier is research on robotic arms for tetraplegics (Hochberg et al 2012), which thematically covers all of SA 1-6.

Artificial Intelligence (AI) and Big Data. Following developments in machine learning (ML) capabilities, deep AI methods were brought to bear on MR data, pushing brain imaging towards more quantitative, accurate, and automated diagnostic methods. Research on brain lesion segmentation using Convolutional Neural Networks (CNN) (Kamnitsas et al 2017) is an apt example, produced by a collaboration between medical specialists (CIP 5) and engineers (CIP 8), and spanning SA 2-4 and SA 6. Simultaneously, massive brain datasets combined with powerful AI engines made their appearance, along with methods to control noise and ensure their validity, as exemplified by ref. (Alfaro-Almagro et al 2018), produced by neuroscientists (CIP 1), health scientists (CIP 6), and engineers (CIP 8), and also featuring a nearly exhaustive topical scope (SA 2-6).

All together, the case analysis indicates that $X_{Distant,SA\&CIP}$ products are typically characterized by significant SA integration, typically including 3-4 non-technical SA plus 1-2 technical SA. This thematic coverage exceeds the disciplinary bounds implied by the CIP set of the authors, which typically includes one non-technical CIP plus one technical CIP.
Discussion

In a highly competitive and open science system with multiple degrees of freedom, more than one operational mode is likely to emerge. To assess the different configurations that exist, we developed an {author discipline × research topic} classification that enables examination of several operational modes and their relative scientific impact.

Competing Convergence Modes: Our key result regards the identification and assessment of a prevalent convergence shortcut, characterized by research combining different SA ($X_{SA}$) but not integrating cross-disciplinary expertise ($M_{CIP}$). Assuming the HBS ecosystem to be representative of other competitive science frontiers, our results suggest that the two operational modes of convergence evolve as substitutes rather than complements. Trends from the last five years indicate an increasing tendency for scholars to shortcut cross-disciplinary approaches, and instead integrate by way of expansive learning. This appears to be in tension with the intended mission of flagship HBS programs. Indeed, our analysis provides strong evidence that the rise of expedient convergence tactics may be an unintended consequence of the race among teams to secure funding.

In order to provide a timely assessment of convergence science, we addressed our fundamental RQ1 - how to measure convergence? - by developing a generalizable framework that differentiates between diversity in team expertise and diversity in research topics. While it is true that a widespread paradigm shift towards increasing team size has transformed the scientific landscape (Milojevic 2014; Wuchty et al 2007), this work challenges the prevalent view that larger teams are innately more adept at prosecuting cross-domain research. Indeed, convergence does not only depend on team size but also on its composition. In reality, however, research teams targeting the class of hard problems calling for convergent approaches are faced with coordination costs and other constraints associated with crossing disciplinary and organizational boundaries (Cummings & Kiesler 2005, 2008; Van Rijnsoever & Hessels 2011). Consequently, teams are likely to economize on disciplinary expertise, and instead integrate cross-domain knowledge in part (or in whole) by way of polymathic generalists comfortable with the expansive learning approach. As a result, a team's composite disciplinary pedigree tends to be a subset of the topical dimensions of the problem under investigation.

As a consistency check, we also find this convergence shortcut to be more widespread in research involving topics that are epistemically close, as represented by the Neighboring configuration we analyzed. Contrariwise, in the neuro-psycho-medical ↔ techno-informatic interface belonging to the Distant configuration, convergent cross-disciplinary collaboration runs strong. Perhaps not by serendipity, mixed analysis further indicates that this is exactly the configuration where transformative science has long been occurring.
Arguably, a certain degree of expansive learning is needed for multidisciplinary teams to operate in harmony. For example, in the case of a psychologist collaborating with a medical specialist, it would be ideal if each one knew a little bit about the other's field, so that they could establish an effective knowledge bridge. After all, this is what transforms a multidisciplinary team into a cross-disciplinary team, such that convergence becomes operative. However, this approach is not the dominant trend in HBS (see the article-level model), and is possibly a response to the broad and longstanding paradigm promoting interdisciplinarity (Nissani 1995) with less emphasis on cross-domain collaboration. Again using our simple example, it may be that the medical specialist prefers not to partner at all with psychologists in the prosecution of bi-domain research, i.e., opting for the streamlined substitutive strategy of total replacement over the strategy of partial redundancy, which comes with the risks associated with cross-disciplinary coordination.

A limitation of our framework is that we do not specify what task (e.g., analysis, conceptualization, writing) a given domain expert performed, and hence do not account for the division of labor in the teams analyzed here. Indeed, recent work provides evidence that larger teams tend to have higher levels of task specialization (Haeussler & Sauermann 2020), which thereby provides a promising avenue for future investigation, i.e., to provide additional clarity on how bureaucratization (Walsh & Lee 2015) offsets the recombinant uncertainty (Fleming 2001) associated with cross-disciplinary exploration. Another limitation regards the nuances of HBS programs that we do not account for, e.g., the different grand objectives, funding levels and disciplinary framings, which vary across flagships. Yet as a truly multidisciplinary ecosystem, we believe HBS provides an ideal testbed for evaluating the prominence, interactions, and impact of the constitutional aspects of convergence (Eyre et al 2017; Grillner et al 2016; Jorgenson et al 2015; Quaglio et al 2017).

Our results also provide clarity regarding recent efforts to evaluate the role of cross-disciplinarity in the domain of genomics (Petersen et al 2018), where we used a similar scholar-oriented framework that did not incorporate the SA dimensions. One could argue that the cross-disciplinary citation premium reported in the genomics revolution arises simply from the genomics domain being primed for success. Indeed, Fig. 3(A) shows that HBS scholars in the Biotech. & Genetics discipline maintained high levels of SA diversity extending back to the 1970s. We do not observe similar patterns for other HBS sub-disciplines. Yet our measurement of a ~16% citation premium for research featuring both modes ($X_{SA\&CIP}$) is remarkably similar in magnitude to the analogous measurement of a ~20% citation premium reported in (Petersen et al 2018).

Econometric Analysis: In order to accurately measure shifts in the prevalence and impact of cross-domain integration, and how they depend on the convergence mode, we employed an econometric regression specification that leverages author fixed effects and accounts for research team size, in addition to a battery of other CIP and SA controls. Regarding the growth rate of HBS convergence science,
Fig. 5(A) indicates that research integrating topics and disciplinary expertise is growing at roughly 2-4% annually, relative to the mono-disciplinary baseline; however, this upward trend reversed after the ramp-up of HBS flagships, as indicated in Fig. 5(B). Our results also indicate that the citation impact of publications from polymathic teams ($X_{Neighboring,SA}$ and $X_{Distant,SA}$) is significantly lower than the impact of publications from more balanced cross-disciplinary teams ($X_{Neighboring,SA\&CIP}$ and $X_{Distant,SA\&CIP}$), see Fig. 5(C). On a positive note, a difference-in-difference strategy provides support that HBS research featuring the $X_{Distant,SA\&CIP}$ configuration has increased in citation impact following the ramp-up of HBS flagships, see Fig. 5(D).

There are various possible explanations to consider, the most prominent of which is that the cognitive and resource demands required to address grand scientific challenges have outgrown the capacity of even monodisciplinary teams, let alone the solo genius (Simonton 2013).

Reflecting upon these results together, it is somewhat troubling that the polymathic trend proliferates and competes with the gold standard, that is, configurations featuring a balance of cross-disciplinary teams and diverse topics ($X_{SA\&CIP}$). Counterproductively, flagship HBS projects appear to have incentivized expansive research strategies, manifest in a relative shift towards $X_{SA}$ since the ramp-up of flagship projects in 2014. This trend may depend upon the particular flagship's objective framing. Take for instance the US BRAIN Initiative, with its expressed aim to support multi-disciplinary mapping and investigation of dynamic brain networks. As such, its corresponding research goals promote the integration of Neighboring topics, where scientists with polymathic tendencies may feel more emboldened to short-circuit expertise. In addition, there are practical pressures associated with proposal calls. One possible explanation regarding team formation is that it may be easier and faster for researchers to find collaborators from their own discipline when faced with the pressure to meet proposal deadlines. Additionally, funding levels are not unlimited, and bringing additional reputable specialists into a team comes with great financial consideration. Hence, a natural avenue for future investigation is to test whether other convergence-oriented funding initiatives also unwittingly amplify such suboptimal teaming strategies.
Theoretical insights - expansive learning: Indeed, the polymathic trends described here pre-existed the flagship HBS projects, and so must have deeper roots. One hypothesis is that this trend represents an emergent scholarly behavior owing to efficient 21st-century means of pursuing new topics by way of expansive learning (Engeström & Sannino 2010), since the learning costs associated with certain tasks characterized by explicit knowledge have markedly decreased with the advent of the internet and other means of rapid, high-fidelity communication. Indeed, many of the activity signals brought to the fore by this study bear the hallmarks of expansive learning. Perhaps the most telling signal is the propensity towards topically diverse publications - Fig. 4(D-F) - which largely stems from horizontal movements in the research focus of individual scientists rather than vertical integration among experts from different disciplines - Fig. 4(A-C). The scientific system is increasingly interconnected, as evident from the densification of collaboration networks and emergent cross-disciplinary interfaces - Fig. 2(B). These interfaces satisfy the conditions that are conducive to boundary crossing, especially with respect to research topics, which can act as structures facilitating "minimum energy" expansion (Toiviainen 2007). To this point, we also assessed whether the relationship between CIP diversity and SA integration depends on whether the configuration represents neighboring or distant domains. Analyzing the set of $X_{SA\&CIP}$ articles, we find that expansive integration is consistently most effective in Distant configurations, e.g., teams with $N_{CIP,p} = 3$ span roughly 32% more SA than their mono-disciplinary counterparts - Fig. S5(B).

Policy Implications: Consistent also with other studies of expansive learning, actions taken by participants do not necessarily correspond to the intentions of the interventionists (Rasmussen & Ludvigsen 2009). The participants are brain scientists in this case, and the interventionists are the funding agencies and the scientific establishment at large. While the latter aim to promote research powered by true multidisciplinary teams, the former appear to prefer to shortcut around this ideal.

Policy makers and other decision-makers within the scientific commons are faced with the persistent challenge of efficient resource allocation, especially in the case of grand scientific challenges that foster aggressive timelines (Stephan 2012). The implicit uncertainty and risk associated with such endeavors is bound to affect reactive scholar strategies, and this interplay between incentives and behavior is just one source of complexity among many that underlie the scientific system (Fealing & eds. 2011).
To begin to address this issue, policies addressing the challenges of historical fragmentation in Europe offer guidance. European Research Council (ERC) funding programs have been powerful vehicles for integrating national innovation systems by way of supporting cross-border collaboration, brain circulation and knowledge diffusion - yet with unintended outcomes that increase the burden of the challenge (Doria Arrieta et al 2017). To address this fragmentation, many major ERC collaborative programs require multinational partnerships as an explicit funding criterion. Motivated by the effectiveness of this straightforward integration strategy, convergence programs can include analogous cross-disciplinary criteria or review assessments to address the convergence shortcut. Such guidelines could help align polymathic vs. cross-disciplinary pathways towards more effective cross-domain integration. Much like the vision for brain science - towards a more complete understanding of the emergent structure-function relation in an adaptive complex system - a better understanding of cross-disciplinary team assembly, among other team science considerations (Börner et al 2010), will be essential in other challenging frontiers calling on convergence.

Methods

Normalization of citation impact. We normalized each Scopus citation count, $c_{p,t}$, by leveraging the well-known log-normal properties of citation distributions (Radicchi et al 2008). To be specific, we grouped articles by publication year $y_p$, and removed the time-dependent trend in the location and scale of the underlying log-normal citation distribution. The normalized citation value is given by

$z_p = \frac{\ln(c_{p,t} + 1) - \mu_t}{\sigma_t},$

where $\mu_t \equiv \langle \ln(c_t + 1) \rangle$ is the mean and $\sigma_t \equiv \sigma[\ln(c_t + 1)]$ is the standard deviation of the citation distribution for a given $t$; we add 1 to $c_{p,t}$ to avoid the divergence of $\ln 0$ associated with uncited publications - a common method which does not alter the interpretation of results. Figure S8(G) shows the probability distribution $P(z_p)$ calculated across all $p$ within five-year non-overlapping time periods. The resulting normalized citation measure is well fit by the Normal $N(0,1)$ distribution, independent of $t$, and thus is a stationary measure across time. Publications with $z_p > 0$ are thus above the average log citation impact $\mu_t$, and since they are measured in units of the standard deviation $\sigma_t$, the standard intuition and statistics of z-scores apply. The annual $\sigma_t$ value is rather stable across time, with average and standard deviation $\sigma \pm SD = 1.24 \pm 0.09$ over the 49-year period 1970-2018.
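The normalization is straightforward to implement; below is a minimal sketch (our illustration, assuming a pandas frame with hypothetical columns 'year' and 'citations'):

import numpy as np
import pandas as pd

def normalize_citations(df):
    """Year-wise z_p = (ln(c + 1) - mu_t) / sigma_t."""
    logc = np.log(df["citations"] + 1)           # +1 avoids ln(0)
    mu = logc.groupby(df["year"]).transform("mean")
    sd = logc.groupby(df["year"]).transform("std")
    return (logc - mu) / sd

# Toy frame; real inputs are Scopus citation counts by publication year.
df = pd.DataFrame({"year":      [2010, 2010, 2010, 2011, 2011, 2011],
                   "citations": [0, 12, 150, 3, 30, 300]})
df["z"] = normalize_citations(df)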
Subject Area classification using MeSH. Each MeSH descriptor has a tree number that identifies its location within one of 16 broad categorical branches. We merged 9 of the science-oriented MeSH branches (A, B, C, E, F, G, J, L, N) into 6 Subject Area (SA) clusters (see Fig. 1). Figure S1 shows the 50 most prominent MeSH descriptors for each SA cluster. Hence, we take the set of MeSH for each $p$, denoted by $W_p$, and map these MeSH to the corresponding MeSH branch (represented by the operator $O_{SA}$), yielding a count vector with six elements, $\vec{SA}_p$. The distribution $P(N_{SA})$ of the number of SA per publication shows that 72% of articles have two or more SA; the mean (median) $N_{SA,p}$ is 2.1 (2), with standard deviation 0.97, and maximum 6.

Disciplinary classification using CIP. We obtained host department information from each scholar's Scopus profile. Based upon the information provided in the profile description, and in some cases using additional web searches and data contained in the Scholar Plot web app (Majeti et al 2020), we manually annotated each scholar's home department name according to National Center for Education Statistics Classification of Instructional Programs (CIP) codes. We then merged these CIP codes into 9 broad clusters and three super-clusters (Neuro/Biology, Health, and Science & Engineering, as indicated in Fig. 1); for a list of the constituent CIP codes for each cluster see Fig. S1(C). Analogous to the notation for assigning $\vec{SA}_p$, we take the set of authors for each $p$, denoted by $A_p$, and map their individual departmental affiliations to the corresponding CIP cluster (represented by the operator $O_{CIP}$), yielding a count vector with nine elements, $\vec{CIP}_p$.

Measuring cross-domain diversity. We developed a measure of cross-domain diversity defined according to categorical co-occurrence within individual research articles. Each article $p$ has a count vector $\vec{v}_p$: for discipline categories $\vec{v}_p \equiv \vec{CIP}_p$ and for topic categories $\vec{v}_p \equiv \vec{SA}_p$. We then measure article co-occurrence levels by way of the normalized outer product

$D_p(\vec{v}_p) = \frac{U(\vec{v}_p \otimes \vec{v}_p)}{||U(\vec{v}_p \otimes \vec{v}_p)||},$

where $\otimes$ is the outer tensor product and $U(G)$ is an operator yielding the upper-diagonal elements of the matrix $G$ (i.e., representing the undirected co-occurrence network among the categorical elements). In essence, $D_p(\vec{v}_p)$ captures a weighted combination of all category pairs. The resulting matrix represents dyadic combinations of categories as opposed to permutations (i.e., capturing the subtle difference between an undirected and a directed network). While we did not explore it further, this matrix formulation may also give rise to higher-order measures of diversity associated with the eigenvalues of the outer-product matrix. The notation $||...||$ indicates the matrix normalization implemented by summing all matrix elements. The objective of this normalization scheme is to control for the variation in $\vec{v}_p$ in a systematic way. As such, this co-occurrence matrix is an article-level measure of diversity which controls for variations in the total number of categories and the different count statistics of elements belonging to $\vec{CIP}_p$ and $\vec{SA}_p$. Consequently, totaling $D_p(\vec{v}_p)$ across articles from a given publication year yields the total number of articles published in that year, $\sum_{p|y_p \in t} ||D_{p,t}|| = N(t)$.

We also define a categorical diversity measure for each article given by $f_{D,p} = 1 - \mathrm{Tr}(D_p) \in [0, 1)$, which corresponds to the sum of the off-diagonal elements of $D_p$. The average article diversity by publication year is denoted by $\langle f_D(t) \rangle$. In simple terms, articles featuring a single category have $f_{D,p} = 0$, whereas articles featuring multiple categories have $f_{D,p} > 0$. While the result of this approach is nearly identical to the standard heterogeneity index $1 - \sum_i f_i^2$ (also referred to as the Gini-Simpson index), $f_{D,p}$ is motivated by way of dyadic co-occurrence rather than the standard formulation motivated around repeated sampling.
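To make the construction concrete, here is a minimal numpy sketch (our illustration, under the assumption stated above that $U(\cdot)$ keeps the upper triangle including the diagonal and that $||\cdot||$ sums all matrix elements):

import numpy as np

def diversity(v):
    """Return (D_p, f_D) for a category count vector v."""
    v = np.asarray(v, dtype=float)
    U = np.triu(np.outer(v, v))   # upper triangle, diagonal included
    D = U / U.sum()               # ||D|| = 1 (sum of all elements)
    return D, 1.0 - np.trace(D)

_, fD_mono  = diversity([0, 3, 0, 0, 0, 0])   # single SA -> f_D = 0
_, fD_cross = diversity([2, 0, 0, 1, 0, 0])   # two SA    -> f_D = 2/7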
Data accessibility: All data analyzed here are openly available from Scopus and PubMed APIs.

Competing Interests: The authors declare that they have no competing financial interests.

Author Contributions: AMP performed the research, participated in the writing of the manuscript, collected, analyzed, and visualized the data; MA developed software to collect, analyze, and visualize the data; and IP designed the research, performed the research, and participated in the writing of the manuscript.

Funding: AMP and IP acknowledge funding from NSF grant 1738163 entitled 'From Genomics to Brain Science'.

Acknowledgements: The authors acknowledge support from the Eckhard-Pfeiffer Distinguished Professorship Fund. AMP acknowledges financial support from a Hellman Fellow award that was critical to this project. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.

MeSH keywords. We collected the set of Major Topic MeSH reported in each article. In total, we encountered 14,212 distinct Major Topic MeSH.

Geographic Regions. We obtained geographic location data from each scholar's Scopus Profile, associating each individual with one of 77 countries; the top five countries represented are the United States with 5030 scholars, Germany with 1192, UK with 1074, China with 1049, and Japan with 894. Coauthor affiliations associate each p with a set of countries, which we cluster into four localized regions indexed by R: North America, corresponding to R = 1 (United States and Canada); Europe, R = 2 (33 European Union and non-European Union countries including Norway, Switzerland, Israel, Iceland, and Serbia); Australasia, R = 3 (People's Republic of China, Japan, South Korea, Australia, Taiwan, New Zealand, Singapore, Malaysia, and Thailand); and World, R = 4 (remaining countries including Brazil, India, Turkey, and South Africa, among others). 88% of the publications are covered by regional clusters R = 1, 2, 3.

S3. Levels and changes in SA and CIP co-occurrence before and after 2014

We also seek to identify which category pairs are frequently combined in articles, and to assess their frequency shifts after 2014. To this end, we introduce a tensor-product method to readily measure SA and CIP co-occurrence statistics for the purpose of identifying particular cross-domain orientations observed in cross-domain HB science.

In order to juxtapose the relative frequencies of mono-category articles separately from multi-category articles, we define a modified outer-product matrix designed purely for visualization purposes:

$$\tilde{D}_p(\vec{v}_p) = \frac{\Upsilon}{\|\Upsilon\|}, \qquad \Upsilon \equiv \vec{v}_p \otimes \vec{v}_p - \mathrm{diag}(\vec{v}_p \circ \vec{v}_p),$$

where ⊗ is the outer tensor product and ∘ indicates the element-wise or Hadamard product. Note that this definition is slightly different than $D_p(\vec{v}_p)$ defined in Eq. (2). The difference occurs in the first case, for the matrix Υ, in which the diagonal elements are eliminated via subtraction, i.e., Tr(Υ) = 0. Simply stated, when $\vec{v}_p$ (representing $\vec{SA}_p$ or $\vec{CIP}_p$) has 2 or more non-zero elements, then we primarily count the off-diagonal elements of the outer-product matrix and are not concerned with the relative frequencies of the on-diagonal elements. Contrariwise, in the case that there is just one category present - e.g., $\vec{v}_p = \{0, 3, 0, 0, 0, 0\}$ - then we track only the diagonal element, which counts the occurrence of the single category. In this second case, the resulting matrix $\tilde{D}_p(\vec{v}_p) = \mathrm{DiagonalMatrix}(\mathrm{Sign}[\vec{v}_p])$ has only one non-zero element, which occurs for the diagonal value $\tilde{D}_{22} = 1$; all other matrix elements equal 0. Note that in either case the total sum of all elements is normalized to unity, $\|\tilde{D}_p(\vec{v}_p)\| = 1$. This normalization implies that totaling $\tilde{D}_p(\vec{v}_p)$ across articles from a given publication year yields the total number of articles, N(t).
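A sketch of this visualization variant, implementing the two cases described above (again with illustrative naming):

```python
import numpy as np

def cooccurrence_matrix_viz(v):
    """D~_p: off-diagonal co-occurrence weights for multi-category articles;
    for mono-category articles, a single unit entry on the diagonal."""
    v = np.asarray(v, dtype=float)  # assumes at least one non-zero count
    if np.count_nonzero(v) >= 2:
        Y = np.outer(v, v) - np.diag(v * v)  # Upsilon: Tr(Y) = 0 by construction
        return Y / Y.sum()                   # total sum normalized to unity
    return np.diag(np.sign(v))               # e.g. {0,3,0,0,0,0} -> D~_22 = 1
```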
We then calculated the aggregate co-occurrence matrix, denoted by $C^< = \sum_{p \,|\, y_p \in [2009\text{-}2013]} \tilde{D}_p$, using all articles published in the pre-period. It then follows from our normalization procedure that the total across all matrix elements is proportional to the total number of articles published in a given period. We denote the aggregate matrices calculated from $\vec{CIP}_p$ and $\vec{SA}_p$ by $C^<_{CIP}$ and $C^<_{SA}$, respectively. To measure relative changes, we then calculated the percent difference in each matrix element between the post- and pre-periods; rescaling by the ratio of article counts $N_{[2014\text{-}2018]}/N_{[2009\text{-}2013]}$ corrects for bias associated with differences in the number of articles published in each of the pre- and post-periods. To illustrate why this correction is important, we randomized the counts contained in $\vec{v}_p = \vec{SA}_p$ and plot the resulting $C^<_{rand,SA}$ and $\Delta C_{rand,SA}$ matrices in Fig. S11. As anticipated, this randomization scheme eliminates the variation among on-diagonal elements and off-diagonal elements in panel (A); moreover, in panel (B) the off-diagonal elements all show percent change values in the range of ±3%, thereby indicative of the threshold for distinguishing statistically significant percent changes in the real data.

Returning to the real data and the calculation of $C^<_{CIP}$, the most notable results of this visualization are the consistently strong couplings between CIP category [1] and all other categories [2,3,4,5,6]; between categories [1,2,3] and [5]; and also between categories [1,2,5] and [6]. Also of note is the higher-order clique among [1,5,6], where each CIP is strongly coupled to the others. Contrariwise, we observe relatively weak coupling between [7,8,9] and most other CIP. Other prominent CIP couplings vary by region: NA shows relatively higher coupling between [1,4], [4,5] and [5,8] compared to other regions; and EU shows relatively higher coupling between [1,9] and [2,9]. Regarding the shifts from the pre- to post-2014 period captured by $\Delta C_{CIP}$, the NA and EU regions show consistent increases in the CIP pairs [4,7], [4,9], [3,8] and [2,7]; and consistent decreases between [1,8], [2,9] and [6,9], and for all combinations between [5] and [7,8,9]. Notably, AA exhibits higher % change levels, following from the fact that several elements in $C^<_{CIP}$ are nearly 0.
Regarding the shifts from the pre- to post-2014 period captured by $\Delta C_{SA}$, the most consistent increases are between SA [1] and each of [2,4,6], and between [4] and both [5,6]. Contrariwise, the most consistent decreasing coupling is between [3] and both [2,6], and between [5,6]. The matrices for NA and EU are rather similar, with the most prominent distinction between [2,6] - showing a -12% change for NA and a +5% change for EU; and also between [3,6] - showing also a -12% decrease for NA but no significant change for EU. This latter disparity is an example of where EU may be taking the lead in in-silico-oriented approaches to HB science, consistent with the framing of the Human Brain Project.

The most notable distinction for AA relative to NA and EU is in the larger magnitude of shifts, representing a period of international convergence for all couplings involving SA [1], and in particular between [1,2] and between [1,6]; contrariwise for AA, there is a prominent decoupling between SA [5,6], which is consistent with the relative shifts away from these two SA to compensate for the prominent redirection towards [1] and [4], as also indicated by Fig. S3(B).

S4. Calculation of cross-domain co-occurrence: an illustrative example of the Tensor Product

Calculating f_D,p begins with the outer product of a categorical count vector with itself, e.g., $\vec{SA}_p \otimes \vec{SA}_p$, where ⊗ is the outer tensor product. The resulting matrix represents dyadic combinations of categories as opposed to permutations (i.e., capturing the subtle difference between an undirected and a directed network). The subtle difference between the Blau index and f_D,p arises from U(G), which is imposed to capture the difference between combinations rather than permutations (or an undirected versus a directed network). Hence, this perspective offers a new pathway to the formulation of the common Blau diversity index by way of co-occurrence rather than repeated sampling.

Take for example an article p with 4 metadata entities belonging to 3 categories, $\vec{v}_p = \{1, 2, 0, 0, 1, 0\}$. Calculation of the co-occurrence matrix $D_p(\vec{v}_p)$ using the normalized outer product defined in Eq. (2) yields

$$D_p = \frac{1}{11}\begin{pmatrix} 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 4 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

The categorical diversity is calculated as the total across off-diagonal elements, $f_{D,p} = 1 - \mathrm{Tr}(D_p) = 5/11$. For completeness, consider the representation of a mono-disciplinary article with the same number of metadata entities that all fall into the second category, $\vec{v}_p = \{0, 4, 0, 0, 0, 0\}$. Then $D_p$ has a single non-zero element, the diagonal value $D_{22} = 1$, and so $f_{D,p} = 0$.

What does this measure capture? Notably, f_D,p accounts for both categorical differences (Shannon-like) and concentration disparity (Gini-like) (Harrison & Klein 2007). On the one hand, articles with more variation in SA categories will correspond to larger f_D,p values, as the number of non-zero off-diagonal elements is proportional to $\binom{M_p}{2} \sim M_p^2$, where M_p is the number of distinct SA present, which contributes to larger f_D,p; on the other hand, the off-diagonal elements will be relatively larger in combination if the count values contained in $\vec{SA}_p$ are more evenly distributed, i.e., not highly concentrated in just one category.
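This worked example can be replayed numerically with a short snippet (pure numpy, nothing assumed beyond Eq. (2)):

```python
import numpy as np

v = np.array([1, 2, 0, 0, 1, 0], dtype=float)
D = np.triu(np.outer(v, v))
D /= D.sum()                      # 11 dyadic counts in total
print(1 - np.trace(D))            # 0.4545... = 5/11

v_mono = np.array([0, 4, 0, 0, 0, 0], dtype=float)
D_mono = np.triu(np.outer(v_mono, v_mono))
D_mono /= D_mono.sum()            # single non-zero element, D_22 = 1
print(1 - np.trace(D_mono))       # 0.0
```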
S5. Bi-partite network between CIP and SA

We quantify the empirical association between CIP and SA categories by aggregating the information contained in $\vec{CIP}_p$ and $\vec{SA}_p$. We first applied this method to the subset of mono-domain articles comprised of p with $O_{CIP}(\vec{F}_p) = O_{SA}(\vec{F}_p) = M$. By definition, each of these articles features just a single CIP, making it possible to identify the SA that are most frequently associated with mono-domain researchers from that CIP category. Formally, this amounts to calculating the bi-partite network between CIP and SA, operationalized by averaging the normalized topic vectors $\vec{SA}_p/N_{SA,p}$ over the mono-domain articles from each CIP category. The bi-partite network labeled $M_{CIP} M_{SA}$ in Figure 4(G) shows only the most prominent CIP-SA links.

For juxtaposition, we also calculated the bi-partite network using the non-overlapping subset of articles with $O_{SA\&CIP}(\vec{F}_p) = X_{SA\&CIP}$. Since these articles by construction have $N_{CIP,p} \geq 2$, we define the average association between CIP and SA as $\vec{SA}_{CIP} = \sum_{p \in CIP} (\vec{SA}_p/N_{SA,p})/N_{CIP,p}$, where the vector $\vec{SA}_p/N_{SA,p}$ contributes to the average for all CIP present in $\vec{CIP}_p$. The bi-partite network labeled $X_{CIP} X_{SA}$ in Figure 4(G) also shows just the most prominent CIP-SA links, applying the same threshold that excludes links with weight less than half of the most prominent weighted CIP-SA link for a given CIP.

Let A (B) represent the matrix representation of $X_{CIP} X_{SA}$ ($M_{CIP} M_{SA}$) - after pruning less prominent CIP-SA links. We then compute the difference between the matrices, $\Delta_{XM} \equiv C = A - B$, such that positive (negative) elements of C indicate prominent links that are relatively over-represented in cross-domain (mono-domain) articles. The Sankey chart labeled $\Delta_{XM}$ in Figure 4(G) shows just the positive elements, which tend to be larger in magnitude than the (relatively few) negative elements.

In Fig. 3(B) we presented the bi-partite network of prominent CIP-SA relations for the Broad configuration. Figure S12 complements those results, showing the bi-partite network for both the Neighboring and Distant configurations, which provides cross-validation for the choice of CIP and SA categories they represent.

S6. Historical trends in SA & CIP diversity: 2000-2018

We investigate historical trends in SA & CIP diversity using the matrix D_p defined in Eq. (2), which simultaneously measures mono-dimensional and multi-dimensional features of each article. More specifically, we define $f_{D,p} = 1 - \mathrm{Tr}(D_p)$ as the fraction of the article's co-occurrence matrix capturing combinatorial diversity. Hence, in the limiting case that the article features just a single category, $f_{D,p} = 0$; and when all categories are present in equal quantities, $f_{D,p} = (d - 1)/(d + 1)$, where d is the dimension of the categorical vector $\vec{v}_p$. As d increases, this upper value approaches 1. Hence, for sufficiently large d, $0 \leq f_{D,p} < 1$. Figures S8(D,E) show the unconditional distributions, P(N_SA) and P(N_CIP), with observed values spanning the full range for d = 6 and d = 9, respectively.
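The limiting value (d − 1)/(d + 1) can be checked directly for the two dimensions used here, d = 6 (SA) and d = 9 (CIP):

```python
import numpy as np

for d in (6, 9):
    v = np.ones(d)                      # all d categories present equally
    D = np.triu(np.outer(v, v))
    D /= D.sum()
    assert np.isclose(1 - np.trace(D), (d - 1) / (d + 1))
# d = 6 gives f_D = 5/7; d = 9 gives f_D = 4/5, approaching 1 as d grows
```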
As a bounded quantity, the average article-level diversity $\langle f_D(t) \rangle = N(t)^{-1} \sum_{p \in N(t)} f_{D,p}$ is an appropriate measure of a characteristic article, where N(t) is the number of articles being considered from year t. However, ⟨f_D(t)⟩ is nevertheless sensitive to bias associated with a systematic increase over time in ⟨N_CIP⟩_t and ⟨N_SA⟩_t, the average number of categories present per article per year. We address this issue by applying a temporal deflator which adjusts the annual averages to account for systematic shifts in the underlying data-generating process. To be specific, we define $\tilde{f}_{D,SA}(t) = \langle f_{D,SA}(t) \rangle \times [\bar{\mu}_{SA}/\mu_{SA}(t)]$, where $\mu_{SA}(t) = \langle N_{SA} \rangle_t / \sigma[N_{SA}]_t$ is the inverse coefficient of variation (also called the signal-to-noise ratio) with respect to the number of SA per article, represented by N_SA,p; and $\bar{\mu}_{SA}$ is the average value calculated across the roughly 3 decades of analysis. Figure S11(C) shows that µ_SA(t) is increasing steadily with time. Hence, adjusting for this secular growth is essential so that observed increases are not simply artifacts of the underlying growth in N_SA,p or N_CIP,p. We apply the same method to adjust for systematic shifts in ⟨N_CIP⟩_t.

To illustrate the utility of this deflator method, we randomized the SA for all articles (by randomly shuffling the counts in each $\vec{SA}_p$): there is no trend in the corresponding $\tilde{f}_{D,SA}(t)$, indicating that this method removes the underlying bias.

Returning to the empirical data, Fig. S6 shows the evolution of disciplinary diversity captured by coauthors' departmental affiliations. Each panel shows ⟨f_D,CIP(t)⟩ calculated for a specified combination of categories contained in each $\vec{CIP}_p$ vector, as indicated by the schematic motif provided alongside each panel. For example, Fig. S6(A) calculates ⟨f_D,CIP(t)⟩ from all 9 CIP categories considered independently, whereas Fig. S6(B) collects the counts associated with the combined categories [1-4] and [5-9] and calculates the diversity based upon the fraction of D_p belonging to the single off-diagonal element D_12,p, which records the disciplinary mixing between these two super-groups.

Figure S6(A) is calculated using the Broad configuration, and exhibits a slow increase in CIP diversity from 1990 to the mid-2000s in the North America (NA) and Europe (EU) regions, which stalled thereafter, and even declined in the last decade for NA and AA, but not for EU. Figure S6(B) shows relatively lower levels and trends in the diversity at the intersection of super-categories [1-4] (representing traditional neuro/biology departments) and [5-9] (representing all other CIP jointly). By way of comparison, this trend indicates that the decline in panel (A) is not derived from the intersection explored in panel (B). Instead, Figs. S6(C,D) indicate that the decline in (A) is attributable to declines at the individual intersections between all permutations of CIP categories 1-7, and to a lesser extent between the three disciplinary subdomains: neuro/biology [1-5], health [5-7] and science and engineering [8-9]. Overall, we also observe higher levels of CIP diversity in NA, followed by EU, and then by AA.

Likewise, Figure S7 shows the evolution of research topic diversity captured by SA counts in each $\vec{SA}_p$. We observe much stronger trends for SA, suggesting that scholars tend to cross disciplines as mono-disciplinary teams rather than via cross-disciplinary collaboration.
Fig. S7(A) shows ⟨f_D,SA(t)⟩ calculated for the Broad configuration, which includes all SA categories. The diversity trend is increasing since 1990 for all regions, but with reduced pace since the early 2010s. Similar to our findings for CIP, we observe AA lagging the other two regions; however, in the case of SA we do observe more similar levels of diversity between EU and NA. Figure S7(B) indicates that much of the increase in SA diversity is attributable to research combining Health [SA 4] and the other categories - in other words, the domain of health science appears to be a persistent driving force behind convergence trends. Supporting evidence for this observation is also captured in the hierarchical clustering of SA represented by the minimum spanning tree (MST) representation of the aggregate SA co-occurrence matrix $\tilde{D}_{SA,p}$ - see Fig. S1(B). By way of comparison, the analog MST representation of $\tilde{D}_{CIP,p}$ in Fig. S1(D) features a less prominent hierarchy across the CIP categories.

We analyzed several additional SA category subsets and super-category combinations to more deeply explore the anatomy of research topic diversity. Figures S7(C,D) show that the increasing diversity associated with Health [4] is largely captured via the incorporation of technology- and informatics-oriented capabilities [5,6] - as opposed to integrating more traditional biological SA representing research domains associated with questions relating to how Anatomy & Organisms (structure) [2] and Phenomena & Processes (function) [3] relate to complex human behavior addressed by Psychiatry & Psychology [1] - as illustrated in Fig. S7(E). Similarly, a significant component of the increasing diversity captured between SA [4,5,6] derives from the increase between research that is centered around Techniques & Equipment [5] and Technology & Information Science [6]; although this contribution, shown in Fig. S7(F), only contributes to increases in diversity until 2010, after which there is a prominent decline. Interestingly, this is a configuration which emphasizes the leading role of AA since 2010 in combining these two areas. To further emphasize the role of Health, we exclude this category [4] from the diversity measures shown in Fig. S7(G), indicating that combinations of SA across the traditional domains of biology and the technology-oriented domains have also saturated around 2010; their contribution to SA diversity primarily appears when considering the biology [1-3] and technology-oriented [5,6] super-clusters illustrated in Fig. S7(H).

S7. Panel regression: model specification

We constructed article-level and author-level panel data to facilitate measuring factors relating to SA and CIP diversity and shifts related to the ramp-up of three regional HB flagship projects circa 2013, and several others thereafter. Figure S8 shows the distribution of various article-level features; and Figure S9 shows the covariation matrix between the principal variables of interest.

We use the following operator notation to specify how we classify articles as being cross-domain (X) or mono-domain (M). Starting with the feature vector $\vec{F}_p$, we obtain a binary diversity classification for each article denoted by X and M. We specify the objective criteria of the feature operator O by its subscript. For example, $O_{SA}(\vec{F}_p) = X_{SA}$ if two or more SA categories are present, otherwise the value is M; and by analogy, $O_{CIP}(\vec{F}_p) = X_{CIP}$ if two or more categories are present, and otherwise $O_{CIP}(\vec{F}_p) = M$. In the case of models oriented around articles featuring X_SA and X_CIP simultaneously (represented by $O_{SA\&CIP}(\vec{F}_p) = X_{SA\&CIP}$), we exclude the set of articles classified as X_SA but not X_CIP and those classified as X_CIP but not X_SA. Hence, in what follows, the counterfactual baseline group for X_SA&CIP articles is also the subset of mono-domain articles, which facilitates comparison of effect sizes across models oriented around X_CIP, X_SA and X_SA&CIP.

Model A: Quantifying factors associated with propensity for CIP and SA diversity

In the first model, we seek to better understand the factors associated with the prevalence of CIP and SA diversity as they evolve over time, and in particular their relation to the launching of the HB flagship programs. In order to model the article-level factors (indicated by p) associated with cross-domain research activity we define the binary indicator variable generically denoted as $I_{X,p}$. By way of example, if we are considering SA diversity, then the indicator variable $I_{X_{SA},p}$ takes the value 1 if $O_{SA}(\vec{F}_p) = X_{SA}$ and 0 if $O_{SA}(\vec{F}_p) = M$. We then model the 2-state odds $Q \equiv P(O_{SA}(\vec{F}_p) = X_{SA})/P(O_{SA}(\vec{F}_p) = M) = P(X_{SA})/P(M)$, which represents the propensity for cross-domain research, where $P(X_{SA}) + P(M) = 1$. Likewise, in the case of CIP diversity we model the odds $Q \equiv P(O_{CIP}(\vec{F}_p) = X_{CIP})/P(O_{CIP}(\vec{F}_p) = M)$; and finally, we also consider the likelihood of research featuring both types of cross-domain activity, for SA & CIP, represented as $Q \equiv P(O_{SA\&CIP}(\vec{F}_p) = X_{SA\&CIP})/P(O_{SA\&CIP}(\vec{F}_p) = M)$.
Because all Scopus scholars map onto a single CIP, and since this model is primarily concerned with identifying factors associated with orientation towards cross-domain research, we exclude solo-authored research papers (i.e., those with k_p = 1) from this analysis, since the likelihood for those articles is predetermined (i.e., P(M) = 1); for the same reason, we also exclude articles with a single Major MeSH category (i.e., those with w_p = 1).

A. Article-level

For each article we also include several covariates of $I_{X,p}$: the article publication year y_p; the mean journal citation impact, calculated as the average z_p for articles from journal j, denoted by $\bar{z}_j = \langle z_p \,|\, \text{journal } j \rangle$; the natural logarithm of the total number of coauthors, ln k_p; and the natural logarithm of the total number of Major MeSH terms, ln w_p. As additional controls, we also include the total number of international regions associated with the authors' affiliations, N_R,p (with minimum value 1 and maximum value 4), and also the total number of categories featured by the article, N_SA,p and N_CIP,p.

We then model the odds Q by way of a Logit regression model, specified in the case of X_SA as $\mathrm{Logit}\, P(X_{SA}) = \log[P(X_{SA})/P(M)]$ regressed on the covariates and controls listed above, and analogously in the case of X_SA&CIP as $\mathrm{Logit}\, P(X_{SA\&CIP}) = \log[P(X_{SA\&CIP})/P(M)]$. To account for errors that are geographically correlated over time, we estimated the model using robust standard errors clustered on a regional categorical variable. The full set of parameter results are tabulated in models (1)-(3) in Tables S1-S3, which report the exponentiated coefficients. To be specific, the exponentiated coefficient exp(β) is the odds ratio, representing the factor by which Q changes for each 1-unit increase in the corresponding independent variable, i.e., $Q_{+1}/Q = \exp(\beta)$. In real terms, $100(\exp(\beta) - 1) \approx 100\beta$ represents the percent change in Q corresponding to a 1-unit increase in the corresponding independent variable (where the approximation holds for small β values). As a result, exp(β) values that are less than (greater than) unity indicate variables that negatively (positively) correlate with the likelihood P(X).
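A sketch of this estimation using statsmodels; all column names (X_SA, z_journal, log_k, log_w, N_R, N_CIP, region, year) are hypothetical stand-ins for the covariates defined above, and df is assumed to be the assembled article-level dataframe.

```python
import numpy as np
import statsmodels.formula.api as smf

# Logit for the odds Q = P(X_SA)/P(M), with robust standard errors
# clustered on the regional categorical variable.
model = smf.logit(
    "X_SA ~ year + z_journal + log_k + log_w + N_R + N_CIP + C(region)",
    data=df,
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
print(np.exp(res.params))  # exponentiated coefficients = odds ratios exp(beta)
```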
Quantifying shifts in propensity for CIP and SA diversity associated with the announcement of global Flagship HB projects circa 2013

In order to identify shifts in the 5-year period after the 2013 ramp-up of HB projects worldwide, we incorporated an interaction between the pre-/post-periods - indicated by $I_{2014+,p}$, which takes the value 1 for y_p ≥ 2014 and 0 otherwise - and a categorical variable specifying the region, represented by $I_{R,p}$. We use the Rest of World region category (indicated by countries colored gray in Fig. 1) as the baseline for regional comparison, since these regions did not feature flagship HB programs on the scale of those announced in Australia, Canada, China, Japan, Europe, South Korea, and the United States. By way of example, in the case of modeling the likelihood P(X_SA), the terms $\gamma_R I_{R,p} + \delta_{R+} (I_{2014+,p} \times I_{R,p})$ are added to the Logit specification above. To differentiate different types of model variables, β is used to identify coefficients associated with continuous variables, γ is used for indicator variables, and δ is used to indicate interactions between indicator variables. In particular, the coefficient $\delta_{R+}$ measures the Difference-in-Difference (DiD) estimate of the effect of HB projects on the propensity for research teams to pursue X_SA approaches. Figures S6 and S7 demonstrate that historical trends in the prevalence of cross-domain diversity satisfy the parallel-trend assumption for both CIP and SA, respectively. The full set of parameter results are tabulated in models (4)-(6) in Tables S1-S3, and the point estimates for the principal test variables are visually summarized in Fig. S10.

B. Author-level

Model B

In the second model, we seek to measure the relation between the two different types of article diversity - CIP and SA - and the article's scientific impact, proxied by c_p. Our approach leverages the hierarchical features of the article-level data grouped into author-specific subgroups representing HB researcher publication portfolios. As a result, model coefficients represent estimates net of author-specific time-independent factors. In other words, this fixed-effect specification yields parameter estimates that are net of the author-specific baseline $\alpha_a = \langle z \rangle_a$, where a is an author index. This specification identifies a clear counterfactual framework for identifying the different outcomes associated with X and M that are relevant to researcher problem identification and team-assembly strategies.

First, in order to measure relative differences in citation impact within and across publication cohorts, we apply a logarithmic transform that facilitates removing the time-dependent trend in the location and scale of the underlying log-normal citation distribution (Radicchi et al 2008).
As such, the normalized citation impact defined in Eq. (1) is $z_p = [\ln(1 + c_{p,t}) - \mu_t]/\sigma_t$, where $\mu_t \equiv \langle \ln(1 + c_t) \rangle$ is the mean and $\sigma_t \equiv \sigma[\ln(1 + c_t)]$ is the standard deviation of the log-citation distribution for articles grouped by publication year. We uniformly add 1 to each c_p,t count to avoid the divergence of ln 0 associated with uncited publications, a common method that does not alter the interpretation of our results. Importantly, the standard deviation $\sigma_t \approx \sigma = 1.24$ is approximately constant over the focal period of our analysis. Consequently, we are able to transform the relation between z_p and a given covariate into a percent change in c_p,t associated with the same covariate. More specifically, building on previous work (Petersen 2018; Petersen et al 2018), we define the citation premium as the percent change $100\Delta c_p/c_p$ associated with a shift in the independent variable v. For the sake of simplicity, consider the basic linear model $Y(c) = z_p = \beta_0 + \beta_v v$ with the decomposition of differentials $\partial Y(c)/\partial v = (\partial Y/\partial c)(\partial c/\partial v) = \beta_v$; it follows from the properties of logarithms that $\partial Y/\partial c = \frac{1}{\sigma_t (1 + c)}$. Calculating the percent change $100\Delta c_p/c_p$ follows from rearranging the differential relations above, yielding $\frac{dc_p}{\sigma_t (1 + c_p)\, dv} = \beta_v$. Hence, when the independent variable is a binary indicator variable, the shift from value 0 to 1 corresponds to dv = 1, and so the percent change $100\Delta c_p/c_p \approx 100\, dc_p/c_p \approx 100 \times \sigma_t \times \beta_v \approx 100 \times \sigma \times \beta_v$. By extension, when the independent variable is a scalar quantity, the percent change in c_p associated with a 1-unit increase dv is also given by $100 \times \sigma \times \beta_v$. And in the case that the scalar quantity enters in logarithm (e.g., ln k_p), a 1% increase in v corresponds to a $\sigma \times \beta_v$ percent increase in c_p.

Quantifying the effect of cross-domain diversity on scientific impact

While previous work aimed to identify the role of X_CIP in the ecosystem of biology and computing researchers that championed the genomics revolution (Petersen et al 2018), here we seek to simultaneously identify the relative impact of X_CIP and X_SA in the emerging ecosystem of HB science. In this way, we are able to compare research strategies that leverage combinations of diverse researcher expertise - i.e., cross-disciplinary collaboration - to those that do not, in the ultimate pursuit of interdisciplinary knowledge and research (Nissani 1995).

To this end, we model the relation between z_p and X_CIP & X_SA by applying ordinary least-squares (OLS) regression to estimate the coefficients of the panel regression model implemented with researcher-profile fixed effects,

$$z_p = \alpha_a + \gamma_X I_{X,p} + \beta_k \ln k_p + \beta_w \ln w_p + \beta_\tau \tau_{a,p} + [\text{year and article-level factor controls}] + \epsilon_p, \qquad (S7)$$

where the model parameters are estimated using Huber-White robust standard errors, which account for heteroscedasticity and serial correlation within the publication set of each scholar, indexed by a. The control variables in Eq. (S7) include ln k_p, measuring the natural logarithm of the total number of coauthors; ln w_p, the natural logarithm of the total number of Major MeSH terms; the career-age variable τ_a,p, measuring the number of years since the researcher's first publication, capturing variation attributable to the career life cycle; and we also include factor variables controlling for publication year and other article-level features measured by $\vec{SA}_p$, $\vec{CIP}_p$, $\vec{R}_p$. We exclude solo-authored research papers (i.e., those with k_p = 1) along with articles with a single Major MeSH category (i.e., those with w_p = 1).
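A sketch of the fixed-effects estimation of Eq. (S7) using statsmodels, absorbing researcher fixed effects via author indicator variables; column names are hypothetical placeholders and df is assumed to be the author-article panel (one row per author-publication pair).

```python
import statsmodels.formula.api as smf

# Researcher fixed effects via C(author_id); Huber-White robust errors
# clustered within each scholar's publication set.
res = smf.ols(
    "z ~ I_X_SA + I_X_CIP + log_k + log_w + tau + C(year) + C(author_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["author_id"]})

# Citation premium: percent change in c_p, using sigma ~= 1.24 (see above).
premium_pct = 100 * 1.24 * res.params["I_X_SA"]
```

With thousands of scholars the dummy-variable approach can be slow; a within (demeaning) transformation yields the same point estimates for the slope coefficients.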
Table S4 shows the full parameter estimates for six similar models that differ primarily in the type of cross-domain diversity included as the principal test variable, represented generically by I_X. In models (1)-(3) we vary the specification of the types of SA and CIP being tested. To be specific, in model (1) we include indicators $I_{X_{SA}}$ and $I_{X_{CIP}}$, where $I_{X_{CIP}}$ takes the value 1 if $O_{CIP}(\vec{F}_p) = X_{CIP}$ and 0 if $O_{CIP}(\vec{F}_p) = M$, and similarly for $I_{X_{SA}}$. These definitions of X correspond to the Broad configuration, calculated using all CIP and SA categories, as indicated in Figures 4(A,D). According to this definition, articles combining SA (CIP) from any two or more categories are classified as X_SA (X_CIP).

In model (2) we use X indicators defined according to the Neighboring configuration, representing shorter-distance cross-domain activity - here capturing the neurobiological-vs-bioengineering interface. In our model specification, X is represented by the binary indicator variables $I_{X_{Neighboring,SA}}$ and $I_{X_{Neighboring,CIP}}$. In the case of X_Neighboring,SA, this interface corresponds to articles combining at least one MeSH mapping onto SA [1] (Psychiatry & Psychology) and at least one MeSH mapping onto SA [2] (Anatomy & Organisms), [3] (Phenomena & Processes) or [4] (Health). In the case of X_Neighboring,CIP, this interface corresponds to articles combining at least one coauthor whose department maps onto CIP [1] (Neurosciences) or [3] (Psychology) and at least one coauthor whose department maps onto CIP [2] (Biology), [4] (Biotechnology & Genetics), [5] (Medical Specialty), [6] (Health Sciences) or [7] (Pathology & Pharmacology). Note that all Scopus scholars map onto a single CIP, and so solo-authored research articles are by definition mono-disciplinary.

In model (3) we use X indicators defined according to the Distant configuration, representing longer-distance or "Convergent" cross-domain activity - here capturing the neuro-psycho-medical-vs-techno-computational interface. In our model specification, X is represented by the binary indicator variables $I_{X_{Distant,SA}}$ and $I_{X_{Distant,CIP}}$. In the case of X_Distant,SA, this interface corresponds to articles combining at least one MeSH mapping onto SAs [1-4] (corresponding to Psychiatry & Psychology (mind), Anatomy & Organisms (structure), Phenomena & Processes (function) and Health, respectively) and at least one MeSH mapping onto SAs [5,6] (Techniques & Equipment and Technology & Information Science, respectively). In the case of X_Distant,CIP, this interface corresponds to articles combining at least one coauthor whose department maps onto CIPs [1,3,5] (Neurosciences, Psychology and Medical Specialty, respectively) and at least one coauthor whose department maps onto CIPs [4,8] (Biotechnology & Genetics and Engineering & Informatics, respectively).

Likewise, Models (4)-(6) instead focus on X_SA&CIP (represented by $I_{X_{SA\&CIP}}$); each model corresponds to either the Broad, Neighboring or Distant configuration defining X_SA and X_CIP. As such, these models test the citation premium associated with articles featuring cross-domain diversity in combination. Because we exclude the confounding subsets of articles featuring X_SA but not X_CIP, or vice versa, the counterfactual to X_SA&CIP comprises articles that are mono-dimensional in both categories. Thus, since the counterfactual groups are similar, the citation premium estimated by $\gamma_{X_{SA\&CIP}}$ is comparable with the $\gamma_{X_{SA}}$ and $\gamma_{X_{CIP}}$ estimated in models (1)-(3). The full set of parameter results are reported in Table S5, and the transformed point estimates measuring the percent increase in c_p associated with each X definition are visually summarized in Fig. 5(C).

Quantifying shifts in the effect of cross-domain diversity associated with the announcement of global Flagship HB projects circa 2013

We test for shifts in the citation premium attributable to the advent of global Flagship HB projects by introducing an interaction between $I_{2014+,p}$ and $I_{X_{SA\&CIP}}$, corresponding to the addition of the two terms $\gamma_{2014+} I_{2014+,p} + \delta_{X_{SA\&CIP}+} (I_{X_{SA\&CIP},p} \times I_{2014+,p})$ to Eq. (S7). As before, this Difference-in-Difference approach is based upon the counterfactual comparison of articles featuring X_SA&CIP to those featuring M, integrating an additional comparison between articles published after 2014 and those published before 2014. As in the previous citation model, the model parameter $\gamma_{X_{SA\&CIP}}$ represents the citation premium attributable to research endeavors simultaneously featuring cross-domain combinations of both SA and CIP. However, in this specification $\gamma_{X_{SA\&CIP}}$ applies to articles published before 2014. The analog estimate of the relative citation premium for articles published after 2014 is $\gamma_{X_{SA\&CIP}} + \delta_{X_{SA\&CIP}+}$. In other words, if all other covariates are held at their average values, then the citation premium difference is given by $\delta_{X_{SA\&CIP}+}$, with positive (negative) values indicating an increase (decrease) in the citation premium after 2014. The principal test variables $\gamma_{X_{SA\&CIP}}$, $\delta_{X_{SA\&CIP}+}$ and their sum are visually summarized in Fig. 5(D).
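The corresponding DiD sketch only needs the interaction added explicitly: in a specification with publication-year fixed effects, the main effect of $I_{2014+,p}$ is collinear with the year dummies and is absorbed. As before, the variable names are hypothetical and df is reused from the previous sketch.

```python
import statsmodels.formula.api as smf

res_did = smf.ols(
    "z ~ I_X_SAandCIP + I_X_SAandCIP:I_post2014"
    " + log_k + log_w + tau + C(year) + C(author_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["author_id"]})

gamma = res_did.params["I_X_SAandCIP"]             # pre-2014 premium
delta = res_did.params["I_X_SAandCIP:I_post2014"]  # post-2014 shift (delta_X+)
post_2014_premium_pct = 100 * 1.24 * (gamma + delta)
```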
(C) Increased frequency of convergent domain combinations between P1 [2009-2013] and P2 [2014-2018]. For example, the most prominent convergent interface is between Neuro/Bio and Health, which was featured in 8.6% of articles in P1 and 10.7% in P2, corresponding to a +24% growth in P2 relative to P1. All percent increases are significant at the p < 0.001 level based on a two-sample two-tailed z-test comparing the proportions for P1 and P2.
FIG. 1: Data collection and classification schemes. The upper part of the figure shows the data generation mechanism along with the resulting topical (SA) and disciplinary (CIP) clusters. The middle part of the figure shows on the world map the regional clusters pertaining to three large HBS funding initiatives - North America (NA), Europe (EU), and Australasia (AA). The lower part of the figure shows an example of how all three categorizations are operationalized for analytic purposes. Circles represent four research articles with authorship from distinct regions. The articles feature different keyword (SA) or disciplinary (CIP) category mixtures assigned one of two diversity measures: mono- (M) and cross-domain (X). Each article features a topic vector $\vec{SA}_p$, with elements corresponding to top-level Medical Subject Heading (MeSH) categories implemented by PubMed, indicated by the letters in brackets: (1) Psychiatry & Psychology [F], (2) Anatomy & Organisms [A,B], (3) Phenomena & Processes [G], (4) Health [C,N], (5) Techniques & Equipment [E], and (6) Technology & Information Science [J,L]; notably, regarding the structure-function problem that is a fundamental focus in much of biomedical science, category (2) represents the domain of structure while (3) represents function.
FIG. 3: Evolution of SA boundary-crossing within and across disciplinary clusters. (A) SA composition of HBS research within disciplinary (CIP) clusters. Each subpanel represents articles published by researchers from a given CIP cluster, showing the fraction of article-level MeSH belonging to each SA, shown over 5-year intervals across the period 1970-2018. The increasing prominence of SA 5 & 6 in nearly all domains, in particular CIP 4 (Biotech. & Genetics), indicates the critical role of informatic capabilities in facilitating biomedical convergence science (Yang et al 2021). (B) Empirical CIP-SA association networks calculated for non-overlapping sets of mono-domain (M_CIP M_SA) and cross-domain (X_CIP X_SA) articles, based upon the Broad configuration. The difference between these two bi-partite networks (∆_XM) indicates the emergent research channels that are facilitated by simultaneous X_CIP and X_SA boundary crossing - in particular integrating SA 2 with 3 (i.e., the structure-function nexus), facilitated by teams combining disciplines 1, 2, 4 and 9. CIP legend: 1: Neurosciences; 2: Biology; 3: Psychology; 4: Biotech. & Genetics; 5: Medical Specialty; 6: Health Sciences; 7: Pathology & Pharmacology; 8: Eng. & Informatics; 9: Chemistry & Physics & Math.

FIG. 4: Evolution of CIP and SA diversity in Human Brain Science research. (A-F) Each ⟨f_D(t)⟩ represents the average article diversity measured as categorical co-occurrence, by geographic region: North America (orange), Europe (blue), and Australasia (red). Each matrix motif indicates the set of CIP or SA categories used to compute D_p in Eq. (2); categories included in brackets are considered in union. For example, panel (A) calculates ⟨f_D,CIP(t)⟩ across all 9 CIP categories; instead, panel (B) is based upon counts for two super-groups, the first consisting of the union of CIP counts for categories 1 and 3, and the second comprised of categories 2, 4, 5, 6 and 7. (A,D) Broad diversity is calculated using all categories considered as separate domains; (B,E) Neighboring represents the shorter-distance boundary across the neuro-psychological ↔ bio-medical interface; (C,F) Distant represents longer-distance convergence across the neuro-psycho-medical ↔ techno-informatic interface.

FIG. 5: Propensity for X and citation impact attributable to cross-domain activity at the article level. (A) Annual growth rate in the likelihood P(X) of research having cross-domain attributes represented generically by X. (B) Decreased likelihood P(X) after 2014. (C) Citation premium estimated as the percent increase in c_p attributable to cross-domain mixture X, measured relative to mono-domain (M) research articles representing the counterfactual baseline; calculated using a researcher fixed-effect model specification which accounts for time-independent individual-specific factors; see Tables S4-S5 for full model estimates. Note that "Broad" corresponds to X_SA, X_CIP, X_SA&CIP; "Neighboring" corresponds to X_Neighboring,SA, X_Neighboring,CIP, X_Neighboring,SA&CIP; and "Distant" corresponds to X_Distant,SA, X_Distant,CIP, X_Distant,SA&CIP. (D) Difference-in-Difference (δ_X+) estimate of the "Flagship project effect" on the citation impact of cross-domain research. Shown are point estimates with 95% confidence intervals. Asterisks above each estimate indicate the associated p-value level: * p < 0.05, ** p < 0.01, *** p < 0.001.

S2.
Figure S2(A) shows the relative frequency $f^<_{R,CIP}$ ($f^>_{R,CIP}$) by region, calculated in the 5-year period before (<) and after (>) the HB flagship project ramp-up year 2014. Each f_R,CIP value represents the average $\vec{CIP}_p$ vector calculated across all articles belonging to a particular region, normalized to unity to facilitate comparison, i.e., $\sum_{CIP=1}^{9} f_{R,CIP} = 1$. In both the pre-2014 period [2009-2013] and post-2014 period [2014-2018], the most prominent disciplines are Neurosciences [CIP 1] and Medical Specialty [5] in the North American (NA) and European (EU) regions. The Australasian (AA) region shows higher levels of scholars from disciplines in Engineering & Informatics [8] and Chemistry & Physics & Math [9] than their NA and EU counterparts in the pre-2014 period. However, after 2014 we observe a realignment of AA with the remarkably similar NA and EU profiles. This realignment is achieved by decreases in Engineering & Informatics [8] and Chemistry & Physics & Math [9], and increases in Neurosciences [1] & Medical Specialty [5]. Fig. S2(B) shows these relative shifts calculated as the difference $\Delta f_{R,CIP} = f^>_{R,CIP} - f^<_{R,CIP}$. Overall, there appears to be a remarkable synchrony in the direction and magnitude of ∆f_R,CIP for the NA and EU regions, primarily associated with decreases in Neurosciences [1] and Pathology & Pharmacology [7] and increases in Psychology [3] and Medical Specialty [5]. NA is the only region showing increases in both Science & Engineering domains [CIP 8 & 9].

Similarly, Fig. S3(A) shows the analog frequencies $f^<_{R,SA}$ ($f^>_{R,SA}$) for each SA by region. In the pre-2014 period, the most prominent SA categories are Anatomy & Organisms [SA 2] and Health [4], with all regions showing similar distribution profiles. The most prominent distinction in AA is a reduced prominence of Psychiatry & Psychology [1]. By and large, the profiles remain consistent in the post-2014 period, with AA and NA experiencing prominent increases in Health [4], and AA showing a modest increase in Psychiatry & Psychology [1], which nevertheless does not fully compensate for the initial deficit in this category with respect to both NA and EU. Figure S3(B) indicates that all regions experienced a consistent decline in research involving the structure-oriented topics associated with Anatomy & Organisms [2], as well as the function-oriented topics associated with Phenomena & Processes [3]. The most prominent distinction between regions is for NA and EU, which both feature increases in Technology & Information Science [6] that are relatively larger than observed for AA and World, likely reflecting the technological capacity related to the tech hubs in these regions; another distinction relates to Psychiatry & Psychology [SA 1], which increases in EU and AA more than for NA and World; and also Health [4], which increases in NA and AA more than for EU and World.
FIG. S1: Subject Area and Disciplinary clusters. (A) Principal MeSH terms comprising 6 Subject Area (SA) clusters. (B) Minimum spanning tree representation of topical hierarchy based upon SA co-occurrence within articles; node size proportional to the total number of articles featuring a particular SA. (C) CIP codes comprising 9 disciplinary clusters. (D) Minimum spanning tree representation of disciplinary hierarchy based upon CIP co-occurrence within articles; node size proportional to the total number of articles featuring a particular CIP.

Notation (recoverable entries from the variable glossary): N(t), the number of articles published in year t; z_p, the normalized citation impact (involving µ_t and σ_t); k_p, the number of coauthors; R_p, the number of regions associated with a publication; D_p, the outer-product matrix used to count co-occurrences at the article level, drawing on the generic categorical vector $\vec{v}_p$; the post- (pre-) period refers to the 5-year period 2014-2018 (2009-2013); W_p, the set of "Major Topic" MeSH keywords for a publication, with the SA operator converting accordingly, $O_{SA}(W_p) = \vec{SA}_p$ (and similarly for the set of Scopus authors, A_p); N_CIP,p, an article-level count variable indicating the total number of CIP represented (independent of concentrations), i.e., min = 1 and max = 9; $\bar{z}_j$, the average z value calculated across all articles published in a specific journal (indexed by j); $\Delta C_{CIP,ij}$, matrices reporting % differences in co-occurrences between the 5-year pre/post periods; f, a generic fraction/frequency variable (i.e., with range [0,1]), with the subscript indicating the variable context, e.g., f_D,SA(t) (f_D,CIP(t)) indicates the co-occurrence per article measured using SA (CIP).

FIG. S2: Temporal and regional distributions of CIP-coded author departments in human brain research. (A) Relative frequency of department CIP clusters in the 5-year period before 2014 ($f^<_{R,CIP}$) and after 2014 ($f^>_{R,CIP}$); f values are normalized to unity within region. (B) Shift in CIP cluster frequencies given by the difference $\Delta f_{R,CIP} = f^>_{R,CIP} - f^<_{R,CIP}$. (C) Disciplinary {CIP, CIP} co-occurrence in human brain science - by region. Each co-occurrence matrix $C^<_{CIP}$ measures the frequency of a given {CIP, CIP} pair over the 5-year pre-period 2009-2013 based upon publications associated with one of three broad geographic regions; see Eqn. (S1) for its definition. By construction, matrix element values $C^<_{CIP,ij}$ are proportional to the net share of publications featuring the indicated pair. Diagonal elements measure the frequency of publications featuring only a single CIP category. Note the use of two legends, one for the mono-disciplinary diagonal elements (gray-scale legend reported in units of 1000 publications) and one for off-diagonal elements (color-scale legend reported in units of 100 publications); as indicated by the legend scales, mono-CIP publications occur with significantly higher frequency than multi-CIP publications. (D) Relative change (post - pre period) in the co-occurrence matrix: $\Delta C_{CIP,ij}$ measures the percent difference in the frequency of publications characterized by each {CIP, CIP} pair.
FIG. S3: Temporal and regional distributions of Subject Areas (SA) in human brain research. (A) Relative frequency of topical SA clusters in the 5-year period before 2014 ($f^<_{R,SA}$) and after 2014 ($f^>_{R,SA}$); f values are normalized to unity within region. (B) Shift in SA cluster frequencies given by the difference $\Delta f_{R,SA} = f^>_{R,SA} - f^<_{R,SA}$. (C) Topical {SA, SA} co-occurrence in human brain science - by region. Each co-occurrence matrix $C^<_{SA}$ measures the frequency of a given {SA, SA} pair over the 5-year pre-period 2009-2013 based upon publications associated with one of three broad geographic regions; see Eqn. (S1) for its definition. By construction, matrix element values $C^<_{SA,ij}$ are proportional to the net share of publications featuring the indicated pair. Diagonal elements measure the frequency of publications featuring only a single SA category. Note the use of two legends, one for the mono-dimensional diagonal elements (gray-scale legend) and one for off-diagonal elements (color-scale legend), both of which are reported in units of 1000 publications. (D) Dynamic co-occurrence matrix, $\Delta C_{SA,ij}$, measuring the percent difference (post - pre) in the frequency of publications characterized by each {SA, SA} pair.

FIG. S5: Expansive topical integration facilitated by CIP diversity. The number N_CIP,p of distinct CIP featured by a given article is a measure of disciplinary diversity. (A) Average number of SA per article, ⟨N_SA⟩, computed for articles with a given N_CIP,p and conditioned on the normalized citation impact. (B) Average number of SA per article, ⟨N_SA⟩, computed for articles featuring X_SA&CIP according to a given configuration (Broad, Neighboring and Distant). The Distant configuration consistently corresponds to the highest levels of SA diversity. Comparing panels (A) and (B), ⟨N_SA⟩ values are also consistently larger for the N_CIP subsets in (B) featuring X_SA&CIP. For both panels, the horizontal dashed red line represents the baseline for comparison, computed as the average number of SA, ⟨N_SA⟩ = 2.2, calculated for mono-disciplinary articles (N_CIP,p = 1).

FIG. S6: Trends in cross-disciplinary (CIP) scholarship in human brain science. Each curve corresponds to ⟨f_D,CIP(t)⟩, representing the average article diversity measured as categorical CIP co-occurrence in the off-diagonal matrix elements of D_CIP,p, see Eq. (2); each curve is calculated for articles belonging to a given geographic region, as determined by the coauthors' regional affiliations: Australasia (red), Europe (blue), and North America (orange). For each panel we provide a matrix motif indicating the set of focal CIP categories; counts for categories included in brackets are considered in union. For example, whereas panel (A) calculates ⟨f_D,CIP(t)⟩ across all 9 CIP categories (each category considered separately), panel (B) calculates each D_p by considering just two super-groups, the first consisting of the union of CIP counts for categories [1-4], and the second comprised of categories [5-9].

FIG. S7: Trends in cross-topical (SA) scholarship in human brain science. Each curve corresponds to ⟨f_D,SA(t)⟩, representing the average article diversity measured as categorical SA co-occurrence in the off-diagonal matrix elements of D_SA,p, see Eq.
(2); each curve is calculated for articles belonging to a given geographic region, as determined by the coauthors' regional affiliations: Australasia (red), Europe (blue), and North America (orange). For each panel we provide a matrix motif indicating the set of focal SA categories; counts for categories included in brackets are considered in union. For example, whereas panel (A) calculates fD,SA(t) across all 6 SA categories (each category considered separately), panel (C) calculates each DSA,p by considering a subset of four SA categories 1-4.

FIG.
S8: Distributions of Article-level variables. (A) N(t) is the number of HB articles by publication year t. (Inset) The log-citation distribution is well described by a log-normal distribution (see panel G). As such, µt and σt corresponding to log-transformed citation counts are appropriate measures of log-normal location and scale; the average and standard deviation are ⟨σt⟩ ± SD = 1.24 ± 0.09 over the 49-year period 1970-2018. (B) P(k) is the probability distribution (PDF) of the number of coauthors per article. (C) P(w) is the PDF of the number of Major Topic MeSH "keywords" per publication, denoted by wp. (D) Each MeSH keyword maps onto one of the 6 SA clusters. Shown is the PDF of the number of distinct SA categories per publication, NSA,p. (E) Each departmental affiliation maps onto one of the 9 CIP clusters. Shown is the PDF of the number of distinct CIP categories per publication, NCIP,p. (F) Each Scopus Author's affiliation maps onto one of 4 regions: Australasia, Europe, North America, and (rest of) World. Shown is the PDF of the number of region categories per publication, NR,p. (G) Probability distribution (PDF) of zp disaggregated by publication cohort {t}; each green curve represents the smoothed kernel density estimate of P(z), calculated with kernel bandwidth = 0.1. Data are split into 5-year periods from 1965-2018, with the first panel including data from 1945-1964. Each PDF shows the baseline Normal distribution N(0, 1), demonstrating the stability of the distribution of normalized citation impact values over time, thereby facilitating robust cross-temporal modeling.

FIG. S9: Cross-correlation and Descriptive statistics for regression model variables. Upper-diagonal elements: bivariate histogram between row and column variables. Diagonal elements: histogram for the variable indicated by the row/column labels. Lower-diagonal elements: bivariate cross-correlation coefficient: light-shaded squares indicate the Pearson's correlation coefficient between two variables that are both continuous measures; dark-shaded squares indicate the Cramér's V association between two variables that are both nominal (categorical).
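The citation normalization behind zp in panel (G) of FIG. S8 reduces to a within-cohort standardization of log-transformed citation counts. The sketch below is a minimal illustration with synthetic data; the specific transform ln(1 + c) is an assumption (the source says only "log-transformed citation counts"), and the parameters of the synthetic citation generator are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def normalize_citations(years, citations):
    """z_p = (ln(1 + c_p) - mu_t) / sigma_t, computed within year cohorts,
    so that z is approximately N(0, 1) in every cohort."""
    years, citations = np.asarray(years), np.asarray(citations)
    logc = np.log(1.0 + citations)
    z = np.empty_like(logc)
    for t in np.unique(years):
        m = years == t
        z[m] = (logc[m] - logc[m].mean()) / logc[m].std()
    return z

# Synthetic corpus: citation inflation over time, log-normal within cohorts
yrs = rng.integers(1970, 2019, 10000)
cites = rng.lognormal(mean=1.2 + 0.01 * (yrs - 1970), sigma=1.24, size=10000)
z = normalize_citations(yrs, cites.astype(int))
print(f"mean(z) = {z.mean():.3f}, sd(z) = {z.std():.3f}")   # close to N(0, 1)
```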
FIG. S10: Summary of Logit model parameter estimates. (A-C) Reported are 100β for the main covariates of interest reported in Tables S1-S3, quantifying the percent increase in the odds Q ≡ P(X)/P(M) associated with a one-unit increase in: (A) mean journal citation impact zj,p; (B) number of coauthors, ln kp; (C) number of major MeSH terms (keywords), wp. (D-F) Difference-in-difference estimates (100δR+) capturing the effect of Flagship project ramp-ups after 2013 on rates of cross-domain research, at three levels of specificity regarding the diversity range captured by X. The Broad configuration corresponds to unconstrained combinations of SA and CIP (represented by XSA, XCIP, X SA&CIP). The Neighboring configuration corresponds to a specific set of category combinations capturing the neurobiological -vs- bioengineering interface, represented by SA [1] × [2-4] and CIP [1,3] × [2,4-7] (and represented by XNeighboring,SA, XNeighboring,CIP, X Neighboring,SA&CIP). The Distant configuration identifies a specific set of category combinations capturing the neuro-psycho-medical -vs- techno-computational interface, represented by SA [1-4] × [5,6] and CIP [1,3,5] × [4,8] (XDistant,SA, XDistant,CIP, X Distant,SA&CIP). Reported are the percent increases in Q, a ratio representing the propensity for cross-domain research relative to mono-domain research, directly associated with the ramp-up of Brain projects in: (D) Australasia; (E) Europe; (F) North America. Shown are point estimates with 95% confidence intervals. Standard errors are clustered by region to account for residuals that are correlated within regions over time. Asterisks above each estimate indicate the associated p-value level: *p < 0.05, **p < 0.01, ***p < 0.001.

Importantly, this definition accounts for variability in NSA by normalizing the sum of the SA counts contained in the vector SAp by NSA,p so that each article contributes equally to the average. Less prominent CIP-SA links are pruned from our Sankey chart visualization in order to emphasize the most meaningful CIP-SA relations. To this end, we remove the weakest links. Tables S1-S3 report odds ratios e^β (≈ 1 + β for small β).

TABLE S1: Modeling the prevalence of cross-domain activity at the article level. Article-level analysis implemented using the logit model. The dependent variable is a binary indicator variable taking the value 1 if the article features cross-domain combinations (represented by XSA,p or XCIP,p or X SA&CIP,p) and 0 otherwise. Publication data included: articles published in period yp ∈ [1970, 2018] with kp ≥ 2 and wp ≥ 2. Robust standard errors are shown in parentheses below each point estimate. Reported are odds ratios, exp(β).

TABLE S2: Conditional definition of Xp - identifying "Neighboring" or shorter-distance cross-domain combinations. Article-level analysis implemented using the logit model. The dependent variable is a binary indicator variable taking the value 1 if the article features cross-domain combinations (represented by XNeighboring,SA,p or XNeighboring,CIP,p or X Neighboring,SA&CIP,p) and 0 otherwise. Publication data included: articles published in period yp ∈ [1970, 2018] with kp ≥ 2 and wp ≥ 2. Robust standard errors are shown in parentheses below each point estimate. Reported are odds ratios, exp(β).

TABLE S3: Conditional definition of Xp - identifying "Distant" or longer-distance cross-domain combinations. Article-level analysis implemented using the logit model. The dependent variable is a binary indicator variable taking the value 1 if the article features cross-domain combinations (represented by XDistant,SA,p or XDistant,CIP,p or X Distant,SA&CIP,p) and 0 otherwise. Publication data included: articles published in period yp ∈ [1970, 2018] with kp ≥ 2 and wp ≥ 2. Robust standard errors are shown in parentheses below each point estimate. Reported are odds ratios, exp(β).
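The logit specification of Tables S1-S3 and the odds-ratio reading e^β ≈ 1 + β can be illustrated with a short sketch. This is a minimal illustration, not the authors' code: the outcome, covariates and coefficients below are synthetic, chosen only to mimic the structure of the article-level model (binary cross-domain indicator regressed on ln kp, wp and zj,p).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
ln_k = np.log(rng.integers(2, 20, n))          # log coauthor count, k_p >= 2
w = rng.integers(2, 15, n)                     # major MeSH keywords, w_p >= 2
z_j = rng.normal(0, 1, n)                      # mean journal citation impact

# Synthetic outcome: cross-domain articles made more likely by each covariate
logits = -2.0 + 0.4 * ln_k + 0.05 * w + 0.3 * z_j
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = sm.add_constant(np.column_stack([ln_k, w, z_j]))
res = sm.Logit(y, X).fit(disp=0)

# exp(beta) ~ 1 + beta for small beta: change in odds per one-unit increase
for name, b in zip(["const", "ln k", "w", "z_j"], res.params):
    print(f"{name:6s} OR = {np.exp(b):.3f}")
```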
TABLE S4: Career-level analysis using panel model with individual researcher fixed effects. Publication data included: articles published in period yp ∈ [1970, 2018] with kp ≥ 2 and wp ≥ 2; only includes researchers with Na ≥ 10 articles satisfying these criteria. Robust standard errors are shown in parentheses below each point estimate. Y indicates additional fixed effects included in the regression model.

TABLE S5: Flagship Project effect: Career-level analysis using panel model with researcher fixed effects. Publication data included: articles published in period yp ∈ [1970, 2018] with kp ≥ 2 and wp ≥ 2; only includes researchers with Na ≥ 10 articles satisfying these criteria. Robust standard errors are shown in parentheses below each point estimate. Y indicates additional fixed effects included in the regression model. *p < 0.05, **p < 0.01, ***p < 0.001.
2021-03-23T05:08:41.570Z
2020-10-10T00:00:00.000
{ "year": 2021, "sha1": "419422c402b1aa46fac3ced53342239bf498341a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41599-021-00869-9.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "abae5393d53c47e15679d470d061a56b07e6ed61", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Economics", "Physics" ] }
202125887
pes2o/s2orc
v3-fos-license
Description of coupled-channel effects in the semiclassical treatment of heavy ion fusion reactions

Fusion cross sections were measured for the systems 40Ar+144Sm, 40Ar+148Sm and 40Ar+154Sm above and below the Coulomb barrier to understand the role of coupled-channel effects in fusion near the barrier. The fusion barrier distributions and fusion probabilities were analyzed using the semiclassical code called Sequential Complete Fusion (SCF) as well as the full coupled-channel code CCFULL. These calculations show that the observed fusion cross sections, fusion barrier distributions and fusion probabilities for these systems are clearly reproduced by the semiclassical treatment, with all excitation states included, both above and below the Coulomb barrier.

Introduction

Fusion reactions between two nuclei have been studied for many years to understand fundamental features of the fusion process and to establish optimum conditions for the synthesis of particular compound nuclei, such as exotic nuclei far from β stability or super-heavy nuclei. An effective central potential results from the complex interactions between the composite nuclei [1,2]. The influence of internal degrees of freedom on this potential is of fundamental importance in describing the phenomena that may occur during a collision, a general problem in many areas of physics and chemistry. The fusion of heavy nuclei presents a good way to study the quantum tunnelling of many-body systems below the Coulomb barrier: when the relative motion is coupled to internal degrees of freedom, a splitting in energy of the uncoupled fusion barrier of the single-barrier penetration model (BPM), treated within the WKB approximation, takes place, modifying the single Coulomb barrier and producing a distribution of barriers [3,4]. However, by using the semiclassical method of Alder-Winther (AW) for heavier systems, dramatic enhancements beyond the BPM were observed in the fusion probability below the barrier [5]. The Coulomb excitation of collective states was studied by this method, which was originally proposed for that purpose and was later generalized to include the excitation of the breakup channel in other nuclear reactions [6,7]. More recently, sophisticated quantum coupled-channels calculations approximate the continuum by a discrete set of states, according to the Continuum Discretized Coupled-Channels method (CDCC) [8]. In Refs. [9,10], studies proved that the semiclassical method, including the coupling of the entrance channel to the bound channels, establishes clear enhancements in describing the total fusion cross section and the fusion barrier distribution below and above the Coulomb barrier. Fusion excitation functions have been measured to study the role of coupled channels, involving rotational and vibrational excitations, in influencing the fusion probabilities [11,12]. The goal of this paper is to adopt the Continuum Discretized Coupled-Channels (CDCC) method within the semiclassical approach to study the effect of the coupled channels and to show how it can be used to evaluate the total fusion cross section σ_F, the fusion barrier distribution D_fus(E) and the fusion probability P_fus for the systems 40Ar+144,148,154Sm. The semiclassical calculations have been implemented using the Fortran code SCF developed by H. D. Marta et al. [13] and are compared with full quantum mechanical calculations performed using CCFULL, developed by K. Hagino et al. [14]; the results are compared with the empirical data.
Penetration probability

Recently, the validity of the semiclassical description of direct reactions among heavy ions has been established for bombarding energies below the Coulomb barrier. Since the cross sections increase strongly for bombarding energies above the Coulomb barrier, it is of interest to study the extension of the semiclassical theory to such energies [15,16]. The nucleus-nucleus interaction potential is formed from the confluence of the nuclear attraction, the Coulomb repulsion, and the centrifugal force [17,18]:

V(r) = V_N(r) + V_C(r) + ħ²l(l + 1)/(2µr²),    (1)

where r is the distance between the centers of the target and projectile, V_N(r) is the nuclear potential, V_C(r) is the Coulomb potential, and the last term is the effective centrifugal potential. Descriptions of fusion reactions based on quantum mechanics in a single channel frequently use a complex potential (the optical potential), whose real part is given by Eq. (1). The sum of the three potentials gives rise to a Coulomb potential barrier, with height V_B. The potential also has a negative imaginary part W_F, very intense and with a short range, that accounts for the incident flux lost to the fusion channel [18,19]. Owing to the short wavelengths involved in heavy ion collisions, the transmission coefficients are frequently evaluated by semiclassical approximations, like the WKB [18,19,20], which corresponds to using the classical action as the phase of the wave function. In the development of the tunnelling theory, several authors introduced concepts and recipes in order to get the tunnelling probability as close as possible to the exact result obtained from the solution of the Schrödinger equation (SE) in one dimension [19,21], as given by

T_l(E) = exp[−2S_l(E)],    (2)

where S_l(E) is the integral

S_l(E) = ∫ from r1 to r2 of k_l(r) dr,   with k_l(r) = (1/ħ)√(2µ[V(r) − E]),

and r1 and r2 are the two classical turning points (CTP), which are the solutions of V(r) = E. In spite of several shortcomings, Eq. (2) gives fairly accurate results for smooth barriers and for energies not too close to the top of the barrier [22]. It becomes progressively worse as the energy approaches V_B. Furthermore, at the barrier the WKB penetration coefficient takes the constant value T_l(V_B) = 1, but its quantum mechanical equivalent is equal to 1/2 at the barrier and rises up to one as the energy increases above the barrier. In 1935, Kemble [23] showed that the WKB approximation can be enhanced if one uses a better connection formula; he obtained the expression [19,22]

T_l(E) = 1/{1 + exp[2S_l(E)]}.    (3)

Kemble's approximation remains correct as the energy approaches the barrier, leading to the valid result at E = V_B, namely T_l(V_B) = 1/2.
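A small numerical illustration may help fix ideas. The sketch below is illustrative only: the Gaussian barrier shape and the units ħ = 2µ = 1 are assumptions, not the Akyüz-Winther potential used later in the paper. It evaluates the action integral S(E) numerically and compares the WKB transmission of Eq. (2) with Kemble's formula, Eq. (3): as E approaches the barrier height, T_WKB tends to 1 while T_Kemble tends to the correct limit 1/2.

```python
import numpy as np
from scipy.integrate import quad

VB, R0, width = 60.0, 10.0, 2.0                   # barrier height/position/width

def V(r):
    return VB * np.exp(-((r - R0) / width) ** 2)  # smooth model barrier

def action(E):
    """WKB action S(E): integral of k(r) between the classical turning points."""
    if E >= VB:
        return 0.0
    # turning points of V(r) = E (analytic for the Gaussian barrier)
    d = width * np.sqrt(np.log(VB / E))
    r1, r2 = R0 - d, R0 + d
    k = lambda r: np.sqrt(max(V(r) - E, 0.0))     # units with hbar = 2*mu = 1
    return quad(k, r1, r2)[0]

for E in (40.0, 50.0, 58.0, 59.9):
    S = action(E)
    T_wkb = np.exp(-2.0 * S)                      # Eq. (2): -> 1 at E = VB
    T_kemble = 1.0 / (1.0 + np.exp(2.0 * S))      # Eq. (3): -> 1/2 at E = VB
    print(f"E = {E:5.1f}  S = {S:6.3f}  T_WKB = {T_wkb:.3e}  T_Kemble = {T_kemble:.3e}")
```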
The semiclassical coupled-channels theory

The semiclassical coupled-channels calculations employ the Continuum Discretized Coupled-Channel (CDCC) method, based on the semiclassical theory of Alder and Winther (AW), which consists of classical mechanics to handle the relative motion, whereas the intrinsic motion ξ is treated as a time-dependent quantum mechanics problem; the method was originally devised for the study of Coulomb excitation [5]. The Hamiltonian of the system in the coupled-channel calculations can be written as [24]

H = h(ξ) + V(r, ξ),    (4)

where the relative motion between the centers of mass of the two nuclei involved in the collision is described by r, h(ξ) is the intrinsic Hamiltonian, and V(r, ξ) is the interaction between them for all degrees of freedom. The eigenfunction set {φ_α} satisfies [24,25]

h φ_α(ξ) = ε_α φ_α(ξ).    (5)

Solving the classical equations of motion with the Hamiltonian for a given variable r and incident energy E_c.m., the classical trajectory r_l(t) is determined. Then, the internal wave function for the excitable nucleus is found by solving the time-dependent Schrödinger equation for the Hamiltonian h(ξ) + V(r_l(t), ξ) [24,26], i.e.,

iħ ∂ψ(ξ, t)/∂t = [h(ξ) + V(r_l(t), ξ)] ψ(ξ, t).    (6)

Expanding ψ(ξ, t) in terms of a properly truncated set of eigenfunctions of h,

ψ(ξ, t) = Σ_α a_α(t) φ_α(ξ) exp(−iε_α t/ħ),    (7)

Eq. (6) leads to the Alder-Winther equations

iħ da_α(t)/dt = Σ_β ⟨φ_α|V(r_l(t), ξ)|φ_β⟩ exp[i(ε_α − ε_β)t/ħ] a_β(t).    (8)

The initial conditions a_α(t → −∞) = δ_α0 are used to solve these equations, which means that before the collision the projectile was in its ground state. The final population of channel α in a collision with angular momentum l is P_l(α) = |a_α(t → +∞)|², and the cross section is [25,26,27]

σ_α = (π/k²) Σ_l (2l + 1) P_l(α).    (9)

To extend this method to fusion reactions, we deal with the quantum mechanical problem of the fusion cross sections in coupled channels. For simplicity, we assume that all channels are bound and have spin zero. The fusion cross section is a sum of contributions from each channel. Carrying out partial-wave expansions, we get

σ_F = Σ_α (π/k²) Σ_l (2l + 1) P_l^F(α),   with P_l^F(α) = (4/ħv) ⟨u_αl|W_F(α)|u_αl⟩,    (10)

where u_αl(r) indicates the radial wave function for the l-th partial wave in channel α and W_F(α) is the absolute value of the imaginary part of the optical potential associated with fusion in that channel [27].

Barrier distribution

The barrier distribution is extracted from the fusion excitation function as the second derivative of Eσ_F with respect to the energy [2,28],

D_fus(E) = d²(Eσ_F)/dE².    (11)

The point-difference method is used to evaluate the second derivative, as given below. The barrier distribution at energy E is defined as [28]

D_fus(E) ≈ [(Eσ_F)_1 − 2(Eσ_F)_2 + (Eσ_F)_3]/(∆E)²,    (12)

where (Eσ_F)_1, (Eσ_F)_2 and (Eσ_F)_3 are evaluated at the energies E − ∆E, E and E + ∆E, respectively. Here ∆E is the energy step taken for extracting the second derivative. The statistical error associated with the second derivative at energy E was calculated using the equation

δD_fus(E) ≈ [E/(∆E)²] [(δσ_1)² + 4(δσ_2)² + (δσ_3)²]^(1/2),    (13)

where δσ_i denote the absolute errors in the measured cross sections. Since δD_fus is proportional to the value of σ_F, for cross sections measured with a fixed percentage error the barrier distribution becomes less well determined at higher energies, where the cross sections are large.
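The point-difference extraction of Eqs. (12)-(13) is straightforward to implement. The sketch below is illustrative only: it generates a synthetic single-barrier excitation function of Wong-like form with assumed parameters (V_B = 110 MeV, ħω = 4 MeV, a fixed 2% cross-section error), then evaluates the second difference of Eσ_F and its propagated error.

```python
import numpy as np

# Synthetic excitation function around a single barrier (Wong-like shape)
VB, hw = 110.0, 4.0                       # MeV; illustrative parameters
E = np.arange(95.0, 130.0, 2.0)           # energy grid, step dE = 2 MeV
sigma = 10.0 * hw / (2 * E) * np.log(1 + np.exp(2 * np.pi * (E - VB) / hw))
dsigma = 0.02 * sigma                     # assume a fixed 2% relative error

dE = E[1] - E[0]
Es = E * sigma                            # the quantity E*sigma_F

# Eq. (12): three-point second difference, defined at the interior points
D = (Es[2:] - 2 * Es[1:-1] + Es[:-2]) / dE**2

# Eq. (13): propagated statistical error of the second difference
dD = (E[1:-1] / dE**2) * np.sqrt(
    dsigma[:-2] ** 2 + 4 * dsigma[1:-1] ** 2 + dsigma[2:] ** 2
)

for e, d, err in zip(E[1:-1], D, dD):
    print(f"E = {e:6.1f} MeV   D_fus = {d:8.3f} +/- {err:.3f}")
```

Note how the error term grows with the cross section itself, reproducing the statement above that the barrier distribution is least well determined at the highest energies.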
Results and Discussions

The data have been analyzed by employing the coupled-channel formalism. The coupled-channel calculations for the fusion cross sections σ_F, the fusion barrier distributions D_fus(E) and the fusion probabilities P_fus, with and without channel coupling, were performed using the SCF code in the semiclassical approach for the systems 40Ar+144,148,154Sm. Our calculated results for σ_F, D_fus(E) and P_fus are compared with the corresponding experimental data and with full quantum mechanical calculations using the CCFULL code. The Akyüz-Winther potential parameters used in the present calculations are tabulated in Table 1. We study the effect of channel coupling, and of the single channel, on heavy-ion fusion reactions, represented by solid and dashed (red and blue) curves for the semiclassical and the full quantum mechanical results, respectively. The chi-square method is used to distinguish between the cases of enhancement and suppression in the results compared with the experimental data.

In figure 1, panels (a, b and c) show the results for the fusion cross section σ_F, the fusion barrier distribution D_fus(E) and the fusion probability P_fus for the fusion of 40Ar + 144Sm in the semiclassical and the full quantum mechanical treatments. Experimental cross sections for this fusion were published in 1985 [29]. With respect to the expectations of barrier penetration model calculations, they presented a big enhancement for energies below the Coulomb barrier, which in this case is due to the coupling of the fusion cross sections. The χ² results were surprising: ( ) in the coupling case compared with the single-channel case ( ).

From Fig. 1 and the chi-square values we found that the full quantum mechanical calculations are the best, owing to the vibrational motion of the two nuclei up to one- and two-phonon states, with deformation parameters [30] for the argon and samarium nuclei, respectively. The experimental fusion barrier distribution and fusion probability results, with respect to the calculations, show enhancement above the barrier, with the least values given by ( ) and ( ). Below the barrier the least values are given by ( ) and ( ) for coupled channels in the semiclassical and the full quantum mechanical treatments, respectively. It seems that the measured values above the barrier are in best agreement with both of our calculations.

We employed the coupled-channel calculations to analyze the data [29] for the reaction 40Ar + 148Sm for the fusion cross section σ_F, the fusion barrier distribution D_fus(E) and the fusion probability P_fus. The potential used in the calculations was a Woods-Saxon parametrization of the Akyüz-Winther potential, as shown in Table 1. The results are shown in Fig. 2, panels (a, b and c), where the vibrational states of the projectile and target are included up to one-phonon states. It can be seen from Fig. 2, panels (a, b and c), and the chi-square values that the coupling to the excitation channels for this reaction brings an additional enhancement below the Coulomb barrier, larger in the semiclassical than in the full quantum mechanical calculations. The chi-square values were given by ( ) for the fusion cross section σ_F, ( ) for the fusion barrier distribution D_fus(E) and ( ) for the fusion probability P_fus in the semiclassical and full quantum mechanical calculations, respectively. Below the Coulomb barrier the semiclassical calculations are in good agreement when all excitation states are taken into consideration. Above the barrier, the semiclassical and the full quantum mechanical calculations, with and without coupling effects, converge to the experimental data, as shown in Fig. 2.

Exact coupled-channel calculations were performed in the semiclassical treatment together with the full quantum mechanical one. The results for σ_F, D_fus(E) and P_fus are shown in Fig. 3, panels (a, b and c). The target 154Sm is given rotational motion in the quantum mechanical calculations, including coupling to the 2+ state, which has an experimental excitation energy of 0.081973 MeV [31], with the deformation parameter taken from Ref. [32], while the projectile 40Ar is taken to be inert. The best (lowest) chi-square values obtained, ( ), correspond to the semiclassical calculations including channel coupling, which are in best agreement with the experimental data [33] below the Coulomb barrier; these compare with the minimum values, ( ), obtained for the full quantum calculations including coupling. Above the barrier, the best calculated value, ( ), corresponds to the semiclassical calculations both without and with channel coupling, which means that they are able to reproduce the experimental data better than the other calculations. The chi-square values for D_fus(E) and P_fus of the semiclassical calculations including coupling show that they match the corresponding experimental data very well, while for the quantum mechanical calculations including coupling the chi-square values are found to be close to those of the semiclassical calculations including coupling. This means that the semiclassical calculations match the corresponding experimental data very well.
Conclusion

We studied an extension of the semiclassical theory of Alder and Winther to calculate the fusion cross sections σ_F, the fusion barrier distributions D_fus(E) and the fusion probabilities P_fus. The semiclassical results were shown to be in very good agreement with the experimental data. As these calculations converge much faster than full quantum calculations, they provide a powerful tool for the analysis of heavy ion fusion reactions.
2019-09-10T09:09:29.324Z
2019-08-08T00:00:00.000
{ "year": 2019, "sha1": "2eb835eabb58d0e95669a0a64e064e8ea78cd7b7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/571/1/012113", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6d42446fcc350cbecb11ca0cd01ce6c98bf2a18a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221258020
pes2o/s2orc
v3-fos-license
Complete protection for mice conferred by a DNA vaccine based on the Japanese encephalitis virus P3 strain used to prepare the inactivated vaccine in China

Background
The incidence of Japanese encephalitis (JE) has been dramatically reduced in China after sufficient vaccine coverage. The live-attenuated Japanese encephalitis virus (JEV) vaccine SA14-14-2 is believed to have strongly contributed to this decrease. Another vaccine that seems to have decreased in importance is an inactivated vaccine based on the JEV P3 strain, which is considered to be modifiable, such as being transformed into a DNA vaccine to improve its immunogenicity.

Methods
In this study, the protective efficacy induced by the Japanese encephalitis DNA vaccine candidate pV-JP3ME, encoding the premembrane (prM) and envelope (E) proteins of the P3 strain, was assessed in BALB/c mice. The prM/E genes of the JEV P3 strain were subcloned into the vector pVAX1 (pV) to construct pV-JP3ME.

Results
BALB/c mice were immunized with the plasmid DNA, and high titers of IgG antibody and neutralizing antibody (nAb) against JEV were detected. The key cytokines in splenocytes were secreted upon stimulation with JEV antigens. Finally, complete protective efficacy was generated after challenge with the JEV P3 strain in the mice.

Conclusions
The DNA vaccine pV-JP3ME based on the JEV P3 strain in this study can induce specific humoral immune and cytokine responses and provide complete protection against JEV in mice.

Background

Japanese encephalitis (JE) virus (JEV) is a single positive-strand RNA virus belonging to the family Flaviviridae, genus Flavivirus [1]. JEV infection can cause severe encephalitis, neurological sequelae, and even death in children and adults [2]. JEV was first isolated in China in 1940 and is transmitted mainly by Culex mosquito bites, with swine and wintering waterfowl as amplifying hosts [3]. By the end of the 1980s, JEV infection had long been a serious threat to the health of many Asian children, and nearly half of all JE cases have occurred in China. Since the development of two types of vaccines in China, an inactivated vaccine (strain P3) and the live-attenuated vaccine strain SA14-14-2 [4,5], the incidence of JE has decreased from 20.92 cases/100,000 individuals in 1971 to 0.12 cases/100,000 individuals in 2011 [6,7]. China is still using the two vaccines mentioned above, and live-attenuated vaccines were included in the national Expanded Programme on Immunization (EPI) at the end of 2007 [8]. Given the more convenient schedule, reduced toxicity, and better immunogenicity of live-attenuated vaccines, SA14-14-2 has now largely replaced inactivated vaccines [9]. Populations in some Asian countries and regions, such as Japan, South Korea and Taiwan, and certain groups, including immunocompromised people and those concerned about live vaccination, still use inactivated vaccines [10]. The wild-type P3 strain has strong virulence and contains many key immunogenic epitopes, and its use can be improved by biological modification. In this study, by combining the biological characteristics of the virus and the advantages of the DNA vaccine, the prM/E genes of the P3 strain were subcloned into the DNA vaccine vector pVAX1 (pV) to obtain the JEV DNA vaccine pV-JP3ME. The comprehensive immune response and protection induced by this vaccine were evaluated, and this study will provide important data for its further application.
Materials and methods

Virus, cells, plasmids, and animals

JEV (strain P3) was stored at −80°C. It was used as the coating antigen and as the stimulus for in vitro experiments, and for the challenge experiments. Vero cells were used for plasmid transfection and for plaque assays to detect viral titers, and the plaque reduction neutralization test (PRNT) was used to detect the nAb titers. C6/36 cells were used for virus proliferation. The pV-JP3ME plasmid was constructed by introducing the BamHI enzyme digestion site, Kozak sequence and signal sequence upstream of the prM/E sequence of the P3 strain and introducing the XhoI digestion site downstream, in the eukaryotic expression vector pV. Specific-pathogen-free 6- to 8-week-old female BALB/c mice were used for immunization, sera and splenocyte collection, and challenge tests. The results presented are from a single experiment or from three independent experiments.

Reagents and instruments

The restriction enzymes BamHI and XhoI, the eukaryotic expression vector pV, the nuclear staining agent 4′,6-diamidino-2-phenylindole (DAPI), and the transfection reagent Lipofectamine 3000 were purchased from Thermo Scientific (USA). Minimal essential medium (MEM) and RPMI-1640 medium were purchased from Gibco (USA). Methylcellulose was purchased from Sigma (USA). Goat anti-mouse fluorescein isothiocyanate (FITC)-IgG antibody was purchased from Beijing TransGen Biotech (China). Goat anti-mouse horseradish peroxidase (HRP)-IgG antibody was purchased from Abcam (USA). Tetramethylbenzidine (TMB) substrate color solution was purchased from MabTech Company (USA). The enzyme-linked immunospot (ELISPOT) kit, streptavidin and AEC color development kit were purchased from BD Company (USA). The gene introduction instrument was purchased from Shanghai Teresa Corporation (China). The enzyme-linked immunosorbent assay (ELISA) plate reader and cell culture incubator were purchased from Thermo (USA). The ELISPOT plate reader was purchased from CTL (USA).

Transfection and immunofluorescence experiments

pV-JP3ME or pV was transfected into Vero cells. After 5 h, the transfection plasmid/reagent mixture was discarded and replaced with complete culture medium. After 40 h, the medium was discarded, and the two groups of cells were fixed simultaneously. Then, the cells were incubated with JEV antiserum (1:1000) as the primary antibody and goat anti-mouse FITC-IgG as the secondary antibody. Observation of specific green fluorescence under a fluorescence microscope indicates that the plasmid was successfully transfected and expressed in mammalian cells in vitro. Fluorescence microscope imaging was performed at 200× magnification; the microscope was from Nikon, Japan.

Animal experiments

The mice were randomly divided into two groups. The vaccine group was vaccinated with the DNA vaccine pV-JP3ME, and the control group was vaccinated with the empty vector plasmid pV. Immunization was performed three times, and each immunization dose was 50 μg, given by intramuscular injection (i.m.) with electroporation (EP). One and 3 weeks after the last immunization, the splenocytes and sera of the two groups of mice were collected, respectively. The next day, mice in the vaccine and the control group were challenged with JEV. The body weight changes and the survival rate were measured daily after the challenge, and the observation continued for 12 consecutive days. The animal experiment schedule is shown in Fig. 1.
Plaque assay

Vero cells were cultured in 24-well plates, and the cell density was more than 95% before virus infection. The virus was serially diluted 10-fold from the original solution (1:1) for a total of seven dilutions, namely 1:1 to 1:10^6. Two hundred microliters of each virus dilution was added per well and incubated at 37°C for 1 h. The plate was gently shaken once every 15 min. After the dilution was discarded, 5 mL of MEM medium containing 1.2% methylcellulose was added to each well. After 4 d of culture, the medium was discarded, and 1 mL of crystal violet staining solution was added to each well and stained for 30 min at room temperature. The number of plaques per well was counted, and the virus titer was calculated. The titer of the virus solution was measured three times, and the average value is expressed in plaque-forming units (PFU)/mL.

ELISA

Heat-inactivated JEV was coated onto a 96-well plate at 10^5 PFU per well at 4°C overnight. The coating antigen was discarded, and the plate was blocked with 1% bovine serum albumin (BSA) at 37°C for 2 h. The blocking solution was discarded. The sera of each group of mice, starting from 1:100 and serially diluted at a twofold ratio for a total of 12 dilutions (namely, from 1:100 to 1:204,800), were added to the wells in turn as the primary antibody at 4°C overnight. The next day, the primary antibody was discarded. After the plate was washed five times, HRP-labeled goat anti-mouse IgG antibody (1:4000) was added as the secondary antibody. After incubation at 37°C for 1 h, the antibody solution was discarded. The substrate solution was allowed to develop color for 20 min, and the reaction was stopped with H2SO4. We used 1/2 of the A450 nm value at the 1:100 dilution of the control group as the cut-off value; the maximum dilution with a reading greater than this cut-off value is the serum IgG antibody titer.

PRNT

Vero cells were cultured in 24-well plates as described previously. The serum was diluted from 1:10 and serially diluted at a 2-fold ratio, for seven consecutive dilutions, that is, 1:10 to 1:640. Each dilution of serum was mixed with an equal volume of virus solution (containing 100 PFU) and incubated at 37°C for 1 h; serum-free virus samples were set at 4°C and 37°C to exclude temperature factors and to provide the reference positive virus counts. Then, the serum/virus mixtures were added to the wells in order and incubated at 37°C for 1 h. During this period, the plate was shaken gently every 15 min. The subsequent steps were the same as described for the plaque assay. The serum dilution corresponding to a 50% reduction in the number of plaques relative to the positive wells incubated at 37°C was recorded as the PRNT50 value, which is the nAb titer.

ELISPOT assay

IL-2 and IFN-γ capture antibodies diluted 1:200 were coated onto a 96-well plate at 4°C overnight. We discarded the coating solution and blocked with RPMI-1640 medium containing 10% FBS at room temperature for 2 h. Then, the splenocytes from the two groups were added at 2 × 10^5 cells per well, with 10^5 PFU heat-inactivated JEV added as the stimulus, and cultured at 37°C for 72 h. After the cultured splenocytes were discarded, IL-2 and IFN-γ detection antibodies were added, and the spot-forming units (SFUs) were then determined by adding streptavidin and the AEC color development solution.
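The two titer definitions above reduce to simple computations. The sketch below is a minimal illustration, not the authors' analysis code, and all readings are synthetic: the ELISA endpoint titer is the highest dilution whose A450 exceeds half the control reading at 1:100, and the PRNT50 is obtained here by log-linear interpolation of the plaque-reduction curve around the 50% point (the interpolation scheme is an assumption; the source only defines the 50% reduction criterion).

```python
import numpy as np

def elisa_endpoint_titer(dilutions, a450, control_a450_at_100):
    """Highest dilution whose A450 exceeds the cut-off
    (1/2 of the control group reading at 1:100)."""
    cutoff = control_a450_at_100 / 2.0
    positive = [d for d, a in zip(dilutions, a450) if a > cutoff]
    return max(positive) if positive else None

def prnt50(dilutions, plaque_counts, virus_only_count):
    """Serum dilution giving a 50% reduction in plaques relative to the
    virus-only control, by log-linear interpolation."""
    frac = np.asarray(plaque_counts, dtype=float) / virus_only_count
    logd = np.log10(np.asarray(dilutions, dtype=float))
    for i in range(len(frac) - 1):
        if frac[i] <= 0.5 <= frac[i + 1]:
            t = (0.5 - frac[i]) / (frac[i + 1] - frac[i])
            return round(10 ** (logd[i] + t * (logd[i + 1] - logd[i])))
    return None

# ELISA: twofold series 1:100 ... 1:204,800 with synthetic A450 readings
dil = [100 * 2 ** i for i in range(12)]
a450 = [1.9, 1.8, 1.6, 1.3, 0.9, 0.6, 0.4, 0.25, 0.15, 0.10, 0.08, 0.07]
print("IgG titer 1:", elisa_endpoint_titer(dil, a450, control_a450_at_100=0.20))

# PRNT: twofold series 1:10 ... 1:640, 100 PFU in the virus-only control
print("PRNT50 1:", prnt50([10, 20, 40, 80, 160, 320, 640],
                          [2, 8, 20, 38, 62, 80, 92], virus_only_count=100))
```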
Statistical analysis

All experimental data were recorded using Excel 2016 software, and statistical analysis was performed using SPSS 17.0 software (USA). Body weight changes were analyzed by repeated-measures multivariate analysis of variance, survival rates were compared using the log-rank test, and the differences between the groups were compared using one-way ANOVA. Quantitative data are expressed as the mean ± standard deviation. P < 0.05 was considered statistically significant.

Fig. 1 Mouse experimental workflow. Groups of mice were immunized by intramuscular electroporation with 50 μg of either the pV-JP3ME DNA vaccine or pV in each limb individually and were boosted twice at three-week intervals. Splenocytes were obtained 1 week after the final immunization, and sera were collected 3 weeks after the final immunization. Subsequently, the vaccinated mice were challenged with 1 × 10^5 PFU of the JEV P3 strain. The body weight changes and the survival rates were observed for 12 consecutive days after the challenge.

Results

The target protein prM/E was successfully expressed in eukaryotic cells

As shown in Fig. 2, after Vero cells were transfected with pV-JP3ME, the expressed target protein could bind to the JEV antisera and showed specific green fluorescence, while the Vero cells transfected with the empty vector plasmid pV showed no specific fluorescence. These results indicate that the target protein prM/E can successfully be expressed in eukaryotic cells and has reactogenicity, and can therefore be used for subsequent experimental research.

Vaccination of mice with pV-JP3ME induces high levels of JEV-specific IgG antibodies in immune sera

As shown in Fig. 3a, 3 weeks after the last immunization, the sera of the vaccine group and the control group were collected, and the titer of the anti-JEV-specific IgG antibodies was measured by ELISA. Higher IgG antibody levels against JEV were detected in the sera of mice in the vaccine group than in the control group, with a titer of 1:4200 compared with 1:163 (P < 0.001). The significant difference between the groups, shown in Fig. 3a, indicates that immunization with the JEV DNA vaccine pV-JP3ME induces high levels of IgG antibodies against JEV in mice.

Robust neutralizing activity was found in the immune sera

The titer of anti-JEV-specific nAb was measured by PRNT. The results showed that the nAb titer against JEV in the sera of the vaccine group was 1:380 and that in the control group was 1:11 (P < 0.001). There was a significant difference between the groups, as shown in Fig. 3b, suggesting that after pV-JP3ME immunization, mouse sera had neutralizing activity against JEV.

Inflammatory cytokines were produced upon stimulation with the JEV antigen

One week after the last immunization, splenocytes of the vaccine group and the control group were obtained, and ELISPOT was used to determine the number of IL-2- and IFN-γ-spot-forming cells per 10^5 splenocytes upon stimulation with the JEV antigen. The results showed that the spots of IL-2 and IFN-γ were significantly greater in the splenocytes of the vaccine group than in those of the control group in vitro (P < 0.01), as shown in Fig. 3c and d. After the mice were immunized with pV-JP3ME, the splenocytes secreted higher levels of IL-2 and IFN-γ after stimulation with the JEV antigen, suggesting a better antiviral cytokine response.
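As a concrete illustration of the log-rank comparison named above, the sketch below applies the test to the challenge outcome reported in the next subsection (8/8 vaccinated mice surviving versus 0/8 controls). It uses the lifelines package rather than SPSS, and the exact control death days are assumptions for illustration, since only the first (day 6) and last (day 12) deaths are reported.

```python
from lifelines.statistics import logrank_test

# Vaccine group: all 8 mice alive (censored) at the end of observation, day 12
t_vax = [12] * 8
e_vax = [0] * 8          # 0 = censored (survived observation)

# Control group: assumed death days spread over days 6-12 (illustrative)
t_ctl = [6, 7, 8, 9, 10, 11, 11, 12]
e_ctl = [1] * 8          # 1 = event (death) observed

res = logrank_test(t_vax, t_ctl, event_observed_A=e_vax, event_observed_B=e_ctl)
print(f"log-rank statistic = {res.test_statistic:.2f}, p = {res.p_value:.4f}")
```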
Vaccinated mice were fully resistant to lethal doses of JEV

To evaluate the protective efficacy of the JEV DNA vaccine pV-JP3ME in mice, we used 1 × 10^5 PFU of JEV to intraperitoneally (i.p.) challenge the mice in the vaccine and the control group, and the changes in body weight and the survival rate were recorded and compared for 12 consecutive days. After the challenge with JEV, the body weight of the vaccinated mice remained steady during the observation period, with limited differences between individuals, while the body weight of the control mice continued to decline. A significant difference was shown between the two groups (P < 0.01, Fig. 4a). In terms of the survival rate, the mice in the vaccine group were fully protected, with a survival rate of 100% (8/8), while the mice in the control group began to die on the sixth day, and all had died by the twelfth day (0/8). The difference between the groups was statistically significant (P < 0.001, Fig. 4b).

Fig. 2 Representative images of immunofluorescence after Vero cells were transfected with plasmid DNA. After Vero cells were transfected with pV-JP3ME or pV, JEV antiserum was used as the primary antibody, and goat anti-mouse FITC-IgG was used as the secondary antibody for staining. The left image (a) shows specific green fluorescence, but the right image (b) does not (× 200).

Discussion

Currently, the JEV vaccine widely used in China is the live attenuated vaccine strain SA14-14-2. This vaccine has good efficacy and, in addition to China, many Asian countries have approved it for use. Furthermore, another self-developed inactivated vaccine, the P3 strain, is still in use in China. The P3 strain is used for special populations (based on concerns about live vaccines or unsafe vaccine incidents in society). However, the P3 inactivated vaccine, unlike the SA14-14-2 vaccine, is not included in the EPI; the former requires full payment by the patient, but it still has a solid market share as described above.

Fig. 4 Protective effect generated in mice after immunization. In the third week after the mice were immunized with 3 pV-JP3ME or pV doses, the challenge test was performed. The mice in the vaccine group had no significant body weight changes within 12 days, while the control group showed a continual decline. Similarly, the mice in the vaccine group were completely protected, and the survival rate was up to 100%, while the mice in the control group all died. **P < 0.01, ***P < 0.001, n = 8 per group.

Fig. 3 Specific humoral immune response and cytokine secretion produced in mice after immunization. In the third week after the mice were immunized three times with pV-JP3ME or pV, (a) IgG antibodies and (b) nAbs were produced with higher titers in the vaccine group than in the control group. One week after immunization, the mouse splenocytes produced high levels of (c) IL-2 and (d) IFN-γ upon JEV antigen stimulation. **P < 0.01, ***P < 0.001, n = 8 per group.

In some Asian countries and regions, such as South Korea, Japan, and Taiwan, inactivated vaccines are customary and considered safe. Most of the inactivated vaccines used are the Nakayama strain [11], which originated from Japan and still predominates in the JEV vaccine market, although different types of live attenuated vaccines have been introduced into these regions.
Inactivated vaccines do have some disadvantages relative to live-attenuated vaccines, such as lower immunogenicity, a pronounced Th2-type immune bias, and a requirement for multiple immunizations [12]. However, in terms of safety, inactivated vaccines show no virulence recovery or reproduction, which still makes them highly desired in many JE-affected countries. The vaccinated population is mainly children around the age of 1 year, and safety cannot be ignored. The SA14-14-2 live-attenuated vaccine requires only two injections, while the inactivated P3 vaccine requires four or more vaccinations to achieve sufficient protection. This strongly limits the vaccine as the first choice for emergency vaccination of travelers. Even in JE-endemic countries such as China, this vaccine does not come out ahead in comparisons. It is difficult to develop a live attenuated vaccine based on the P3 strain, and even if it were successful, it would be difficult to compete with the existing SA14-14-2 strain, because the latter has been certified and recommended by the World Health Organization. In comparison, DNA vaccines have been used in vaccine design in recent years to prevent and treat multiple pathogens and diseases, including infectious diseases such as dengue virus, human immunodeficiency virus, and malaria, as well as tumors [13,14]. DNA vaccines have been approved for veterinary use, but there are no approved vaccines for human use [15]. Due to the balanced immune response and the long-lasting effect induced by DNA vaccines, they have become a popular alternative. Therefore, in this study, the most antigenic structural proteins, prM/E, of the JEV strain P3 were used as the target to construct a DNA vaccine. First, we verified the expression of the target antigen in transfection experiments in mammalian cells in vitro. The vaccine expresses the target protein, which can stably bind to JEV-specific antibodies, suggesting that it has good reactivity. Regarding the route, time and dose of immunizations, we have already fully studied other flavivirus vaccines, such as dengue virus and Zika virus vaccines, and will not discuss them here. After three immunizations with the vaccine, mice showed high-titer IgG antibodies and nAb in vivo. Furthermore, the splenocytes of the immunized mice produced high levels of the cytokines IL-2 and IFN-γ upon stimulation with JEV-specific antigens. IL-2 plays an important role in the maturation, proliferation, and activation of T cells. IFN-γ is one of the most important innate and acquired antiviral cytokines and can play an antiviral role in both innate and adaptive immunity [16]. We selected these two representative cytokines for testing, and the results suggest that the vaccine is immunogenic. Furthermore, in the challenge test in vivo, pV-JP3ME provided complete protection for the mice, resulting in resistance to the lethal dose of JEV, while the mice in the control group all died. The above results indicate that the prM/E protein of the P3 strain is sufficient as a target protein candidate for a JEV vaccine to induce effective immunoprotection [17]. In recent years, there have been many studies on SA14-14-2, and the results show the scalability of the vaccine. Erra et al. prepared an inactivated vaccine by using the live attenuated vaccine SA14-14-2 strain [18], and Appaiahgari et al. replaced the prM/E of the yellow fever virus attenuated strain 17D with that of SA14-14-2 by chimeric recombination [19]; they also obtained a good immune effect.
One of the reasons for selecting the P3 strain in this study is that the virulence of the P3 strain is closer to that of the wild-type strain than SA14-14-2, and it is presented in the form of a DNA vaccine to ensure that there is no possibility of virulence recovery. The expressed target protein is similar to the natural conformation of the original strain's prM/E protein, containing key epitopes that are not displayed in the attenuated strain. Second, P3 is an inactivated vaccine strain widely used in China, and the recombinant construct can be used in combination with the P3 inactivated vaccine in heterologous immunization in subsequent studies. Previous studies have reported that a DNA vaccine constructed using a specific target protein can be used as the prime immunization, a protein component vaccine (subunit vaccine) prepared with the same target protein can be used as the boost, and the immune response is then more robust than that of homologous immunization, namely, DNA alone or subunit immunization alone [20]. The level of the immune response induced by heterologous immunization is higher and more balanced than that of single-modality immunization, and the protective effect is also improved. This strategy is expected to reduce the number of immunizations compared with immunization with an inactivated vaccine alone, and theoretically, it can also improve the immunogenicity and the long-term immune response induced by the vaccine [21,22], which warrants further experiments.

Conclusions

In summary, this study used the P3 strain to construct a JEV DNA vaccine candidate, evaluated its immunogenicity and protective effect in mice, and confirmed that the vaccine can induce JEV-specific humoral and cytokine responses and provide complete protection against JEV in mice. Our data will provide a basis for the subsequent promotion and use of the vaccine and lay the foundation for its combined use with inactivated vaccines of the same strain in a heterologous regimen.

Availability of data and materials

All data and materials described in the manuscript are available.
2020-01-09T09:16:03.062Z
2020-08-24T00:00:00.000
{ "year": 2020, "sha1": "8a6cdc4ab1c06b97c89310ada628d24529d2ea91", "oa_license": "CCBY", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-020-01400-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b1e53cb43e1095c450520c3b458187a9a45d926", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
56422723
pes2o/s2orc
v3-fos-license
Excessive Debt or Excess Savings -- Transition Countries' Sovereign Bond Spread Assessment

We study the determinants of sovereign yield spreads in transition - Central and Eastern Europe (CEE) and Caucasus and Central Asia (CCA) - countries and try to provide an answer to the key question: was the narrowing of the spreads and their compression a result of the improvement of CEE-CCA countries' sovereign macroeconomic policy (implemented in the early to mid 2000s), or was it due to global excess liquidity provision? If better domestic macroeconomic policy efforts and solid reforms implemented in this period have led to: i) improvement in sovereign debt management, e.g., by increasing the average debt portfolio duration and reducing the stock of FOREX debt; ii) development of domestic financial markets with enlargement of the investor base and enhancement of risk management techniques; iii) continuing financial liberalization; iv) sustainable fiscal adjustment, reserve accumulation and price stability; and v) adoption of the institutional structure most conducive to prosperity, then it would be expected that any tighter monetary policy environment in the developed economies should have only a tiny effect on spreads. The models are estimated on an individual basis - country by country - using a framework allowing for fractionally integrated variables (ARDL), as well as by utilising panel data (cross-sectional time-series) estimation whenever data availability allows. We utilise daily data over the period 2006-2012 and quarterly data over the period 2002-2011. These are the periods for which meaningful comparable data are available for Bulgaria, Croatia, Hungary, Kazakhstan, Poland, Russia, Serbia, and Ukraine (in various combinations). We are careful not to attempt to split the sample into (say two) potential segments for comparison of "normal" versus "crisis" period estimates (as customary), since from 2002/2003 the transition economies started to experience the powerful financial effect generated by the excess global liquidity; i.e., the entire period under consideration is constituted by two phases characterised by: i) excess liquidity (2002-2008); and ii) the Great Depression Mark II (2008 to present).

Introduction

"Half-knowledge is more victorious than whole knowledge: it understands things as being more simple than they are and this renders its opinions more easily intelligible and more convincing." Nietzsche: Human, All Too Human: A Book for Free Spirits

A range of academic studies have analysed the determinants of the difference between the yields of sovereign emerging market debt securities and US Treasury bonds and/or German bunds of similar maturities. Still, while there have been a number of papers dealing with yield spreads on Eurozone government bonds (e.g., Codogno, Favero and Missale (2003), Pagano and Von Thadden (2004), Mody (2009), and Klepsch and Wollmershauser (2011)), there have not been many methodical studies on the price determination of sovereign bonds in emerging markets, particularly in the group of Central and Eastern Europe (CEE) and Caucasus and Central Asia (CCA) countries.
One early (partial) exception is the paper of Eichengreen and Mody (1998), examining launch spreads based on data for a mixed group of 55 emerging market countries over the period 1991 to 1996. They collect information on altogether 1,033 bonds split as follows: 670 from Latin America; 233 from East Asia; and 81 from Eastern Europe. Regressing spreads on various potential determinants they detect: "But the same explanatory variables have different effects in the principal debt issuing regions (Latin America, East Asia, and Eastern Europe)." It is interesting to compare the coefficients of the regression on the variables Debt/GNP and GDP growth between the combined group of Latin America and East Asia countries and the Eastern Europe bond issues. While for the former group the coefficient on Debt/GNP is relatively small, has a positive sign (0.437) and is significant (t-stat 2.054), for Eastern Europe its value is big, negative (-1.255) and insignificant (t-stat -1.367). In the same vein, the coefficient on GDP growth for Latin America and East Asia is positive and sizable (2.253) though insignificant (t-stat 0.616), while the equivalent coefficient for Eastern Europe is negative, vast (-14.250) and significant (t-stat -1.954). Furthermore, the coefficient of mutual determination corrected for degrees of freedom for the Latin America and East Asia estimated model is close to 0.6, while it is only about 0.09 for Eastern Europe. These OLS results suggest that about 60 per cent of the variation in spreads is explained for Latin America and East Asia and just about 9 per cent for Eastern Europe, anticipating the authors' statement: "And when it comes to changes in spreads over time, we find that these are explained mainly by shifts in market sentiment rather than by shifts in fundamentals."

Hence, the established state of knowledge in this area is as yet by no means sufficient to resolve the question of what the major determinants of sovereign bond spreads are. Our research paper aims to help to reveal definite empirical regularities, plausible interconnections, and credible causalities in this area, providing an answer to the question: was the general narrowing of the spreads and their compression a result of an improvement of CEE-CCA countries' macroeconomic policy, implemented after 2002, or was it due to global excess liquidity provision?

Literature Review

The empirical research on the determinants of government bond spreads in advanced economies is vast, whilst the number of similar analytical papers dealing with the emerging market economies is more restricted. Still, both have recently enlarged, in particular since the beginning of the financial and economic crisis - the Great Depression Mark II - from 2008. The main focus is: macroeconomic fundamentals determining sovereign risk; external shocks related to global liquidity; risk aversion/appetite; state of development of domestic financial markets; and quality of governance indicators.
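To make the kind of launch-spread regression discussed above concrete, the sketch below estimates log spreads on Debt/GNP and GDP growth with region interactions, so that the coefficients can differ between regions as in the Eichengreen-Mody comparison. The data-generating process is entirely synthetic (fundamentals matter outside "Eastern Europe", noise dominates within it), chosen only to mimic the qualitative pattern reported; it is not their dataset or specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 800
east_eu = rng.random(n) < 0.1                 # Eastern Europe dummy
debt_gnp = rng.uniform(0.1, 0.9, n)           # Debt/GNP
growth = rng.normal(3.0, 2.5, n)              # GDP growth, %

# Synthetic DGP: fundamentals priced outside Eastern Europe, noise inside
log_spread = (5.0 + 0.5 * debt_gnp * ~east_eu + 0.02 * growth * ~east_eu
              + rng.normal(0, 0.3 + 0.5 * east_eu, n))

X = sm.add_constant(np.column_stack([
    debt_gnp, growth, east_eu.astype(float),
    debt_gnp * east_eu, growth * east_eu,      # region-specific slopes
]))
res = sm.OLS(log_spread, X).fit()
print(res.summary(xname=["const", "debt", "growth", "EE",
                         "debt:EE", "growth:EE"]))
```

In such a setup the interaction terms capture exactly the region contrast described above, and the fit (R²) deteriorates sharply for the noisy region, mirroring the 0.6 versus 0.09 comparison.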
Contributions about the influence of macroeconomic variables on sovereign spreads include Min (1998), Eichengreen and Mody (1998), Kamin and von Kleist (1999), and Hilscher and Nosbusch (2010). In general, these studies find considerable association with macroeconomic fundamentals, and evidence that sovereign spreads in the 1990s declined more than changes in country fundamentals could account for. Baek et al. (2005), among others, offer a possible explanation: they "[p]ostulate that the market-assessed country risk premium is determined not only by economic fundamentals of a sovereign but also by non-country-specific factors, especially the market's attitude towards risk." In their analysis they find that the yield spreads, although reacting to alterations in economic aggregates, are in principle driven by changes in the market perception of risk. This finding is supported by the conclusions of studies by various authors, including: McGuire and Schrijvers (2003), Jaramillo and Weber (2013), Arora and Cerisola (2001), Ferrucci (2003), and Baldacci and Kumar (2010).

Arezki and Bruckner (2010) construct an individual international commodity price index per country that allows them to capture revenue windfalls from rising prices of exported commodities, and in addition exploit two measures of political institutions. Their main findings are: i) "[p]ositive international commodity price shocks lead on average to a significant reduction in commodity exporting countries' spread on sovereign bonds."; ii) allowing for cross-country differences in political institutions entails that for democracies "[a] positive commodity price shock of size 1 standard deviation significantly reduced the spread on sovereign bonds by over 0.4 standard deviation. On the other hand [...] autocracies a shock of similar magnitude was associated with a significant increase in the spread on sovereign bonds by 0.3 standard deviations."; and iii) "[i]n democracies [...] windfalls from international commodity price shocks were significantly positively associated with real per capita GDP growth, in autocracies they were associated with a significant decrease in real per capita GDP."

Hartelius, Kashiwase and Kodres (2008), and Gonzalez-Rozada and Levy-Yeyati (2008), find that macroeconomic fundamentals, global market liquidity and risk sensitivity jointly comprise the key causes of sovereign spread changes. Similar conclusions are established by Favero, Pagano and Von Thadden (2008), who analysed the sovereign spreads of European Union countries. Mody (2009) examines the interrelations linking sovereign bond spreads in the euro area countries and financial exposure, and finds that financial exposure (calculated as the ratio of an equity index for the relevant country's financial sector to the equity index taken as a whole) is strongly correlated with spread changes.
Dell"Erba and Sola (2011)estimate the effect of the monetary and fiscal policy stance on both long-term interest rates and sovereign spreads by constructing a semi-annual dataset of macroeconomic and fiscal forecasts for 17 OECD countries over the period 1989-2009.They find that more than 60% of the variance in the data can be accounted for by monetary and fiscal policy positions.Kaminsky, Reinhart and Vegh (2005) examine the important question of procyclical versus countercyclical capital flows and monetary and fiscal policies depending on the country"s level of economic development.Their major findings are: "While macroeconomic policies in OECD countries seem to be aimed mostly at stabilizing the business cycle (or, at the very least, remaining neutral), macroeconomic policies in developing countries seem mostly to reinforce the business cycle, turning sunny days into scorching infernos and rainy days into torrential downpours." What's more, fiscal policies are incorporated as powerful forces of sovereign spread determination in European Union countries by Bernoth, Von Hagen and Schuknecht (2004); Afonso and Strauch (2004);and, Hallerberg and Wolff (2006).Hallerberg and Wolff (2006) after controlling for institutional changes, conclude that fiscal policy remains a significant determinant of the risk premium.According to them deficits and surpluses matter less for the risk premium in countries with better institutions.Apparently this reflects the market view that proper institutions will be able to deal with fiscal problems and make the monitoring of annual developments less important.The results are robust to controlling for country fixed effects and different estimation methodologies.Maltriz (2012), embark upon the subject-matter with Bayesian Model Averaging (BMA).In his study the author applies BMA "[t]o identify the best models and assess the quality of potential regressors."They "[f]ind that the most important drivers of default risk in the Eurozone are government debt to GDP, budget balance to GDP and terms of trade.For economic growth, export growth, import growth and the US interest rate the likelihood is between 10 and 50%, whereas for some variables found to be significant in the literature, as interest rate costs, capital formation and inflation, this likelihood is below 10%."Gibson, Hall, and Tavlas (2011), concentrate on a single country -Greeceand macroeconomic variables shaping spreads, providing evidence that "both undershooting and overshooting of spreads have occurred."This analysis is confirmed and extended additionally in space, time, and causality by De Grauwe and Ji (2012) who "[f]ind evidence that a significant part of the surge in the spreads of the PIGS countries (Portugal, Ireland, Greece and Spain) in the eurozone during 2010-11 was disconnected from underlying increases in the debt-to-GDP ratios and fiscal space variables, but rather was the result of negative self-fulfilling market sentiments [...]."They suppose that given the state of affairs: liquidity crisis, imposed austerity measures (presumably leading the country to recession), plus high interest rates on government securities could result in a solvency crisis.According to their model investors try to factor in the costs and benefits to the government from defaulting."A major insight of the model is that the benefit of a default depends on whether this default is expected or not."If investors expect a default, a default would occur, if they do not, no such would take place.Furthermore, they consider 
that if a country is not a member of the Eurozone, "This makes it possible for the country to always avoid outright default because the central bank can be forced to provide all the liquidity that is necessary to avoid such an outcome." While this argument may add up within its settings, one should not forget that investors may lose their confidence in the ability of the government of the "stand-alone country" to sustain its currency, and take flight to safety by promptly exchanging the domestic currency denominated debt for cash -- Euro and/or USD. Thus the self-fulfilling prophecy (or speculative crisis) may well come true: the country would rapidly lose foreign reserves; in time it would have no choice but to devalue its currency; the level of the external debt would increase in local currency units; this would lead eventually to monetisation of the debt; and this state of affairs would bring forth new speculative attacks. Hence, just being a "stand-alone country" is not likely to be sufficient to insulate you from self-fulfilling expectations or speculative attacks.

Akitoby and Stratmann (2006) emphasise the importance of sustainable fiscal policy and high fiscal adjustment, where reduction in current expenditures proves to be more effective for spread reduction than tax increases. The shaping power of liberalisation of the capital account, the currency convertibility risk premium, and the rule of law are investigated by Bacha, Holland and Goncalves (2008) as determinants of the local interest rates of emerging economies. Meanwhile Edwards (2005), by means of the bidirectional interrelation between interest rates and capital account liberalisation, shows that the degree of convergence of domestic and international interest rates can be used to assess the real degree of openness of the capital account.

A connected subject matter that has received considerable attention is the relationship between sovereign spreads and default risk. Favero and Missale (2011) "[f]ind that default risk is the main driver of yield spreads, suggesting small gains from greater liquidity. Fiscal fundamentals matter in the pricing of default risk but only as they interact with other countries' yield spreads; that is, with the global risk that the market perceives. More importantly, the impact of this global risk variable is not constant over time, a clear sign of contagion driven by shifts in market." Hilscher and Nosbusch (2010) investigate spread determinants by focusing on the volatility of fundamentals. They observe "[t]hat the volatility of the terms of trade is both statistically and economically significant in explaining spread variation. A one standard deviation increase in the volatility of terms of trade is associated with an increase of 164 basis points in spreads, which corresponds to around half of the standard deviation of observed spreads." The authors assert as well that terms of trade volatility is a significant predictor of country default. However, an important restriction on their conclusions is the regional and economic divergence of the countries included in their sample (Latin America 12, Africa 5, Eastern Europe 6, and Middle East and Asia 9), for which (time-invariant factors) no controls are provided.
Another important area of research is the detection of short-term and long-term factors determining sovereign bond spreads. The results of Bellas, Papaioannou, and Petrova (2010) indicate that in the long run fundamentals are considerable determinants of emerging market sovereign bond spreads, while in the short run financial volatility is the more substantial determinant of spreads. Furthermore, researchers have also distinguished between the determinants of sovereign bond spreads during normal and crisis periods. Ebner (2009) highlights a noteworthy distinction in government bond spreads in Central and Eastern Europe between crisis and non-crisis periods. He provides evidence that market volatility, political instability and global causes gain in importance and predominantly explain the increase in spreads during crisis periods, while macroeconomic aggregates become less important.

Belhochine and Dell'Erba (2013), applying spread regressions to a panel of 26 emerging economies (including 7 transition economies: Bulgaria, Hungary, Kazakhstan, Poland, Russia, Serbia, and Ukraine) and bringing in the difference between the debt-stabilising primary balance and the actual primary balance as a measure of debt sustainability, find "[t]hat debt sustainability is a major determinant of spreads with an elasticity of about 25 basis points for each 1 percentage point departure of the primary balance from its debt stabilizing level." Furthermore they claim "[t]hat the sensitivity of spreads to debt sustainability doubles as public debt increases above 45 percent of GDP."

In addition, another related approach in the literature deals with the interrelations between debt levels and their impact on economic growth (through implicit transmission mechanisms) within the framework of a threshold model, where the behaviour of the variables is expected to change distinctly when certain threshold levels are reached. The most influential paper in this respect has been (until very recently) the one published by Reinhart and Rogoff in 2010 (Growth in a Time of Debt). There the authors claim to have identified a key stylized fact: a burden of public debt larger than ninety per cent of GDP notably and consistently reduces GDP growth. Examining public debt and GDP growth among twenty advanced economies in the period after the Second World War, they determine that the average real GDP growth rate for countries having a public-debt-to-GDP ratio of over ninety per cent is, in fact, negative, amounting to -0.1 per cent.

However, Herndon, Ash and Pollin (2013) have replicated Reinhart and Rogoff (2010) and were able to establish that coding errors, biased exclusion of available data, and unconventional weighting of summary statistics had led to miscalculations that provide a misleading picture of the relationship between public debt and GDP growth. They reveal that, when accurately calculated, the annual average real GDP growth for national economies with a public-debt-to-GDP ratio of over ninety per cent is actually 2.2 per cent, not -0.1 per cent as stated in Reinhart and Rogoff. That is to say, average GDP growth when public debt/GDP ratios are in excess of ninety per cent is not significantly different from average GDP growth when debt/GDP ratios are lower.
Consequently, the conventional state of knowledge in this area is not adequate to resolve the question: was the general narrowing of the spreads and their compression in the CEECCA countries a result of these countries' enhanced macroeconomic policies, implemented after 2002, or was it due to global excess liquidity provision (excess savings / underinvestment in real capital)?

Methodology

In aiming to provide an answer (and illustrative evidence) to the above question we estimate various models: i) an individual-basis model -- country by country -- using a framework allowing for fractionally integrated variables (ARDL); and ii) a panel data model (cross-sectional-time-series). We utilise daily data over the period 2006-2012 and quarterly data over the period 2002-2011. These are the periods for which meaningful comparable data -- for Bulgaria, Croatia, Hungary, Kazakhstan, Poland, Russia, Serbia, and Ukraine -- are available.

We start with the following equation with daily sampling frequencies, in generic ARDL(p, q, r) form in the levels of the spread and its financial-market determinants:

SSEMBI(t) = a0 + a1 SSEMBI(t-1) + ... + ap SSEMBI(t-p) + b0 CDS(t) + ... + bq CDS(t-q) + c0 VIX(t) + ... + cr VIX(t-r) + e(t)

Our motivation for using a framework allowing for fractionally integrated variables (ARDL) is based on various important factors, including:
- The conventional (dichotomous) choice between unit root I(1) and level stationarity I(0) is overly restrictive: many economic time series show signs of being neither I(0) nor I(1);
- It is a much more general and flexible apparatus than the traditional approach;
- It is important for modelling a wide range of macroeconomic relationships;
- The standard practice of taking first differences may still leave series with a component of long-memory behaviour.

Many researchers are accustomed to think in terms of the stationarity of any time series used in the construction of whichever econometric model is being developed. As the assumption of stationarity is an important one, non-stationary time series are commonly transformed to stationary ones by differencing. This would suggest that a model specified in differences of economic time series should be favoured for finding estimates of parameters. But one of the important notions in macroeconomics is the concept of the existence of a long-run equilibrium relationship. Theoretically, in steady-state equilibrium economic variables remain unchanged until the system is shocked. Therefore, if such an equilibrium relationship is specified in first differences, the steady-state differences would be zero and there is no solution. Hence, in what follows we apply the Autoregressive Distributed Lag (ARDL) procedure developed by Pesaran and Shin (1995).

Data Availability and Data Integrity

Using data from transition economies necessitates careful discussion of their quality and consistency. These data may range from meaningless, through distorted, to completely inaccurate. Statistical and book-keeping standards under the socialist economic system were very different from those commonly accepted in Western Europe. It has taken time to learn and understand them and to switch to the accepted international statistical standards. Much of the necessary fundamental data is still to be compiled and/or disclosed and made easily available to the public. We hope to provide an impetus to serious data collection and complete disclosure for all transition economies, enabling deep economic analysis and informing consistent policy-making. The situation on the statistical front is made even more complex by the practice of the supranational economic institutions (e.g., the IMF and the World Bank) of not distributing all the data they hold (see Annex 1) and of avoiding publication of the data they hand out at high frequencies (quarterly and monthly). Moreover, the data published in the International Financial Statistics (IFS) and the World Economic Outlook (WEO) formats may and do differ, with access to the full database available only to internal IMF staff.
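The paper reports its estimates from Microfit 4.1 and EViews 6. As a rough illustration of the same ARDL lag-selection and bounds-testing workflow, here is a minimal sketch in Python's statsmodels; the file name, column names (SSEMBI, CDS, VIX) and maximum lag orders are assumptions for illustration, not the authors' actual settings.

```python
# Sketch of an ARDL bounds-testing workflow in the spirit of Pesaran-Shin.
# Assumes a DataFrame with daily columns SSEMBI (spread), CDS and VIX;
# names, file and lag caps are illustrative.
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order, UECM

df = pd.read_csv("spreads_daily.csv", parse_dates=["date"], index_col="date")

# Pick lag orders by an information criterion (SBC/BIC, as in the paper).
sel = ardl_select_order(df["SSEMBI"], maxlag=4,
                        exog=df[["CDS", "VIX"]], maxorder=4, ic="bic")
res = sel.model.fit()
print(res.summary())

# Unrestricted error-correction form; its bounds test plays the role of
# the F-test on the lagged levels discussed in the text.
uecm = UECM.from_ardl(sel.model).fit()
print(uecm.bounds_test(case=3))  # compare with the Pesaran-Shin-Smith bounds
```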
Tables 1 to 3 inclusive (below) illustrate the data availability for the group of countries we examine.

The dataset

We use daily data obtained directly from Bloomberg and Thomson Reuters. In general the data set for each country starts approximately mid-2006 and ends at mid-2012, comprising on average about 1,600 observations per country. Technically, the estimation is executed in Microfit 4.1 and EViews 6.

Sovereign Bond Spreads, Financial Markets Determinants -- Spread Regressions by Country

A potential default is most often associated with an increase in yield spreads. To examine the determinants of sovereign bond spreads we estimate an equation for the sovereign bond spread (as dependent variable) determined by a range of exogenous variables. Furthermore, we assess the long-term determinants and short-run dynamics (error-correction model) of the sovereign bond spreads of Bulgaria, Croatia, Hungary, Kazakhstan, Poland, Russia, Serbia, and Ukraine -- the relevant countries for which we have managed to obtain meaningful data, both statistically and economically. Likewise, we gain some additional understanding of the convergence process. Based on this specification we may be able to illustrate quantitatively the impact that improved investors' confidence may have upon financing conditions, as depicted by government bond spreads.

Figure 1 (below) depicts the developments in sovereign stripped spreads for selected CEE and Caucasus and Central Asia (CCA) countries over the period 1994 to 2012. Over the period starting from the end of 2005 to around the first quarter of 2007, sovereign spreads clustered closely together, reaching their historically lowest point of below 200 basis points. Given that, undoubtedly, there were significant differences in the creditworthiness of the borrowers in the index, this state of affairs at that time might suggest that investors did not differentiate adequately among borrowers. This situation was followed eventually by the Bear Stearns alarm in March 2008, which led to increased discrimination in spreads across countries. Furthermore, the spreads widened extensively after September 2008, following the bankruptcy of Lehman Brothers.

Emerging Markets Bond Indices

Hence, the key question is: was the narrowing of the spreads and their compression a result of an improvement of CEECCA country sovereigns' macroeconomic policy, implemented after 2002, or was it due to global excess liquidity provision?
Figure 1. The Emerging Markets Bond Indices (EMBI) Sovereign Stripped Spread, Daily

Credit default swaps (CDS)

The spreads in Figure 2 (below) are for five-year CDS contracts, with the spreads measured in basis points -- each basis point is equal to USD 1,000 per USD 10 million of protected debt. Seemingly comparable to an insurance contract, purchasers of a CDS pay for insurance against a credit event on the public debt. Hence, CDSs can be used as a convenient, standard risk measure on government debt quality. As an illustration, take the Ukraine five-year CDS: the insurance premium is the annual insurance payment relative to the amount of debt. In March 2009, these CDSs reached a spread of more than 3,800 basis points (with even more extreme values on a daily basis, as can be seen in chart 3, below), meaning that the buyer pays an insurance premium of about 38 per cent per year of the value of the securities (i.e., USD 3,800,000 on USD 10,000,000 worth of debt). The credit default swap seller collects the premiums and pays out (the face value) if a credit event occurs. Thus CDS spreads can be interpreted as a measure of the perceived risk that a government will restructure or default on its debt. CDS spreads in April 2012 imply that the perceived probability of the Ukraine government defaulting is substantially higher than it was one year earlier, but lower than in 2009. The pros and cons of credit default swaps are debatable, to say the least, and the question of whether they are an instrument providing a type of insurance, or rather a device providing an unobstructed way to take part in speculation, is yet to be answered.

In May 2010 the German Federal Financial Supervisory Authority (BaFin) put into operation a complete ban on taking naked sovereign CDS positions. On March 14, 2012, the European Commission adopted a proposal for regulating short selling and certain aspects of credit default swaps, de facto permitting the use of CDS only for the purpose of hedging long positions already held by investors. As the Commission points out, there are resemblances between short selling stocks that one does not own and buying CDSs on assets that one does not have. These positions are such that speculators profit from adverse developments in the underlying security, and the positions could contribute to a decline in prices of the underlying assets, e.g., prices of government debt.

Economic theory is yet to provide an unambiguous answer to the long-standing question of whether speculation in general, and in derivative markets in particular, proves predominantly stabilizing or rather destabilizing to any given economic system. For example, Portes (2010) concludes: "Banning naked CDS will require common action in the US and in the EU, but the political environment is right. We should not lose this opportunity." At the same time, Duffie (2010) argues that "Regulations that severely restrict speculation in credit default swap markets could have the unintended consequences of reducing market liquidity, which raises trading execution costs for investors who are not speculating, and lowering the quality of information provided by credit default swap rates regarding the credit qualities of bond issuers. Regulations that severely restrict speculation in credit default swap markets could, as a result, increase sovereign borrowing costs somewhat."
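The premium arithmetic in the Ukraine illustration above is straightforward to reproduce; a minimal worked example, using the values quoted in the text:

```python
# CDS premium arithmetic from the Ukraine illustration: each basis point on
# a five-year CDS costs USD 1,000 per year per USD 10,000,000 of notional.
notional = 10_000_000   # USD of debt insured
spread_bp = 3_800       # CDS spread in basis points (March 2009 peak)

annual_premium = notional * spread_bp / 10_000
print(f"annual premium: USD {annual_premium:,.0f}")       # USD 3,800,000
print(f"as share of notional: {spread_bp / 10_000:.0%}")  # 38%
```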
More obviously, sovereign CDS spreads can have a potentially important functional role in the process of price discovery. Still, empirical results concerning which market leads price discovery -- the sovereign CDS market or the government bond market -- are mixed and imprecise. These divergences may be partly related to the different time periods, sampling frequencies, methodologies and choices of data. The empirical studies have revealed the following mixed conclusions so far: a number of papers provide support for the dominance of the government bond market, while others claim to have verified the primacy of the CDS market. Gyntelberg et al. (2013) find that CDS prices have a tendency to shift first in reaction to news, followed by alteration of bond prices in the same direction and eventual convergence. Palladini and Portes (2011) conclude as well that CDS market spreads in general lead bond markets, but the adjustment towards equilibrium is sluggish. Fontana and Scheicher (2010) examine ten euro sovereigns (January 2006 - June 2010) and find that price discovery is evenly divided between CDS and bond markets. O'Kane (2012) presents comparable results. Aktug et al. (2012) study thirty emerging markets and find that bond markets lead CDS markets largely, but not always. Support for the bond markets' leading role is also found in Ammer and Cai (2011). They find a long-term relationship between CDS and bond markets for the majority of countries. Overall, they tentatively conclude that the bond market leads the CDS market more often. Giannikos et al. (2013) inspect the links of price discovery via daily CDS spreads, bond spreads and stock prices over the period 2005-2008 for ten US financial firms. They find that throughout the sample period CDS and bond spreads are evidently cointegrated, with the CDS market dominating in price discovery. Examining 18 industrial and emerging economies from January 2007 to March 2010, Coudert and Gex (2013) conclude that bonds appear to lead for "low-yield" (developed) European economies, while the derivative market tends to be the direction-finder for "high-yield" emerging economies.

Thus the evidence on price discovery presented above is, at any rate, adequate to challenge the conviction that the relatively small CDS market cannot influence bond spreads in sovereign debt markets because its net exposure is just a few per cent of the total government bond stock. Typically the proponents' justification of this view may go like this: "Profitable manipulation through price impact is difficult. [...] [a]chieving a sizable price impact would require CDS manipulators to take positions that are large relative to the amount of debt outstanding. In the case of the financially weaker Eurozone sovereigns, the aggregate net CDS positions [...] represent small fractions of their respective amounts of debt outstanding. With Greece, for example, the aggregate of the net CDS positions held in the entire market has remained well under 3% of the total amount of Greek debt outstanding. [...] That is, even if all CDS protection buyers in the market were manipulators, and had conspired to drive up CDS rates, they would have had only a marginal impact on the total amount of sovereign credit risk borne by bond owners and sellers of protection. Supply and demand for the sovereign's credit would cross at a new price that is relatively close to the 'fair-market' (unmanipulated) price (Duffie, 2010)."

A crisp, competent answer -- with which we completely concur -- is provided by Portes (2010). "We are told [...]
that because net CDS exposures are only a few per cent of the stock of outstanding government bonds, 'the tail can't wag the dog', so the CDS market can't be responsible for the rising spreads on the bonds. This of course contradicts the argument that the CDS market leads in price discovery because of its superior liquidity. More important, it is nonsense. Over a period of several days in September 1992, George Soros bet around $10 billion against sterling, and most observers believe that significantly affected the market -- and the outcome. But daily foreign exchange trading in sterling then, before serious speculation began, was somewhat over $100 billion. The issue is how CDS prices affect market sentiment, whether they serve as a coordinating device for speculation."

Furthermore, strong empirical support is provided by Shim and Zhu (2010). The authors analyse the period January 2003 to June 2009 and conclude "[t]hat at the peak of the financial crisis the CDS market contributed to higher spreads in the bond market." Based on the evidence presented above we identify CDS as an explanatory variable.

Chicago Board Options Exchange Volatility Index (VIX) -- Global Risk Aversion Proxy

VIX was first introduced by the CBOE in 1993 (with the data series commencing in January 1986) as a weighted measure of the implied volatility of eight S&P 100 at-the-money options (both puts and calls). Ten years later it was extended to exploit options based on the broader S&P 500 index, offering more precise scrutiny of investors' expectations of future market volatility. Thus VIX is a commonly used measure of market risk and is often referred to as the "investor fear gauge". VIX values above 30 are normally associated with a large amount of volatility due to investors' fear or insecurity, whereas values under 20 generally correspond to tranquil periods in the markets. When VIX reaches excessively high levels, this tends to imply that economic agents have bought puts as insurance against a falling market (the explanation follows Investopedia.com, "VIX -- CBOE Volatility Index"). We take VIX as an appropriate index for our analysis due to its broad acceptance as a representation of investors' expectations of S&P 500 market volatility, plus its high-frequency, long-period time-series availability.

Bulgaria

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Bulgaria reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 2 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 2 above. All the explanatory variables are strongly significant (t-ratios shown in parentheses) and with the expected sign. A one point increase in Bulgaria's risk (approximated by the CDS) would lead to an increase of about 0.38 basis points in the dependent variable SSEMBI (Bulgaria's bond spread), ceteris paribus. If global risk aversion (proxied by VIX) goes up by one point, an increase of about 9.9 basis points in SSEMBI would be induced, everything else remaining the same. The error correction coefficient of about -0.045 implies a half-life to equilibrium of the Bulgarian bond spread of about 15 working days. The coefficient for mutual determination corrected for degrees of freedom equals 0.2137, suggesting that about 21 per cent of the variability in the dependent variable is explained.
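The half-life figures reported for each country follow mechanically from the error-correction coefficient. A small sketch of the computation, using coefficient values reported in the text:

```python
# Half-life implied by an error-correction coefficient: each period a
# fraction |ecm| of the remaining disequilibrium is closed, so the gap
# after t periods is (1 + ecm)**t, and the half-life solves
# (1 + ecm)**t = 0.5.
import math

def half_life(ecm_coef: float) -> float:
    """Working days until half of the disequilibrium is eliminated."""
    return math.log(0.5) / math.log(1.0 + ecm_coef)

for country, ecm in [("Bulgaria", -0.045), ("Hungary", -0.042),
                     ("Poland", -0.015), ("Serbia", -0.020)]:
    print(f"{country:10s} ecm={ecm:+.3f} -> {half_life(ecm):5.1f} days")
# Roughly 15, 16-17, 46 and 34 days -- matching the figures in the text.
```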
Croatia

The joint test for zero restrictions on the coefficients of the lagged level variables does not reject the null hypothesis. Given that the unit root tests suggest that the underlying data series are non-stationary, they would have to be modelled in an appropriate -- cointegration -- econometric framework to avoid making inferences based on spurious regression results. However, as the variables are not cointegrated, this option is precluded.

Hungary

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Hungary reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 3 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 3 above. All the explanatory variables are strongly significant (t-ratios shown in parentheses) and with the expected sign. A one point increase in Hungary's risk (approximated by the CDS) would lead to an increase of about 0.82 basis points in the dependent variable SSEMBI (Hungary's bond spread), ceteris paribus. If global risk aversion (proxied by VIX) goes up by one point, an increase of about 3.9 basis points in SSEMBI would be induced, everything else remaining the same. The error correction coefficient of about -0.042 implies a half-life to equilibrium of the Hungarian bond spread of just under 17 working days. The coefficient for mutual determination corrected for degrees of freedom equals 0.0675, suggesting that just under 7 per cent of the variability in the dependent variable is explained.

Poland

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Poland reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 4 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 4 above. The explanatory variable VIX and the ECM term are strongly significant (t-ratios shown in parentheses) and with the expected sign. However, the effect of an increase in Poland's risk is too small and not statistically significantly different from zero. If global risk aversion (proxied by VIX) goes up by one point, an increase of about 8.4 basis points in SSEMBI would be induced, everything else remaining the same. The error correction coefficient of about -0.015 implies a half-life to equilibrium of the Polish bond spread of about 45 working days. The coefficient for mutual determination corrected for degrees of freedom equals 0.0329, suggesting that just over 3 per cent of the variability in the dependent variable is explained.

Russia

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Russia reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 5 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 5 above. All the explanatory variables are strongly significant (t-ratios shown in parentheses) and with the expected sign. A one point increase in Russia's risk (approximated by the CDS) would lead to an increase of about 0.63 basis points in the dependent variable SSEMBI (Russia's bond spread), ceteris paribus. If the global risk
aversion (proxied by VIX) goes up by one point, an increase of about 6.0 basis points in SSEMBI would be induced, everything else remaining the same. The error correction coefficient of about -0.042 implies a half-life to equilibrium of the Russian bond spread of about 17 working days. The coefficient for mutual determination corrected for degrees of freedom equals 0.4946, suggesting that almost exactly 50 per cent of the variability in the dependent variable is explained.

Ukraine

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Ukraine reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 6 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 6 above. All the explanatory variables turn out to be statistically insignificant (t-ratios shown in parentheses), and VIX has the "wrong" sign. The error correction coefficient of about -0.0008 implies a half-life to equilibrium of the Ukrainian bond spread of about 866 working days, but is statistically insignificant. The coefficient for mutual determination corrected for degrees of freedom equals 0.2348, suggesting that about 23 per cent of the variability in the dependent variable is explained. That all the explanatory variables are insignificant only in the specific case of Ukraine tends to suggest that the bond spread of the country is driven by other forces, possibly including low quality of governance, corruption and heavy speculation.

Serbia

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Serbia reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 7 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 7 above. All the explanatory variables are strongly significant (t-ratios shown in parentheses) and with the expected sign. A one point increase in Serbia's risk (approximated by the CDS) would lead to an increase of about 0.48 basis points in the dependent variable SSEMBI (Serbia's bond spread), ceteris paribus. If global risk aversion (proxied by VIX) goes up by one point, an increase of about 21 basis points in SSEMBI would be induced, everything else remaining the same. The error correction coefficient of about -0.020 implies a half-life to equilibrium of the Serbian bond spread of about 34 working days. The coefficient for mutual determination corrected for degrees of freedom equals 0.1965, suggesting that just around 20 per cent of the variability in the dependent variable is explained.

Kazakhstan

The results of the F-statistic for the joint test of zero restrictions on the coefficients of the additional variables for Kazakhstan reject the null hypothesis in favour of the existence of a long-run relationship between SSEMBI, CDS and VIX. We estimate eq. 8 and get the long-run coefficients; then we obtain the estimates of the error correction model associated with these long-run estimates and report the outcome as eq. 8 above. All the explanatory variables are strongly significant (t-ratios shown in parentheses) and with the expected sign. A one point increase in Kazakhstan's risk (approximated by the CDS) would lead to an increase of about 0.33 basis points in the dependent variable SSEMBI (Kazakhstan's bond spread),
ceteris paribus. If global risk aversion (proxied by VIX) goes up by one point, an increase of about 21 basis points in SSEMBI would be induced, everything else remaining the same. The error correction coefficient of about -0.042 implies a half-life to equilibrium of the Kazakhstan bond spread of about 16 working days. The coefficient for mutual determination corrected for degrees of freedom equals 0.2903, suggesting that about 29 per cent of the variability in the dependent variable is explained.

Sovereign Bond Spreads, Financial Markets Determinants: Cross-Sectional Time Series Estimate -- Pooled Least Squares

The cross-sectional-time-series (CSTS) data contain valuable information about both: i) changes between the subjects (cross-sectional information); and ii) changes within the subjects (time-series information). Turning to the panel data model, first we perform a series of unit-root tests (checking both for individual and common unit root processes), on the basis of which we are not able to reject the presence of unit roots in the data (detailed results of the tests are presented in Annex 1).

Next we perform panel cointegration tests (see Annex 2), all of which reject the null hypothesis of no cointegration. Hence, given that our variables are cointegrated, we proceed with estimating both fixed and random effects (cointegrated panels) models. In general, the fixed effects model assumes that each country differs in its intercept term, while the random effects model assumes that each country differs in its error term. We perform two tests: i) the Pedroni residual cointegration test; and ii) the Kao residual cointegration test. All of the eleven statistics reported in the Pedroni test reject the null hypothesis of no cointegration at a very high level of significance. The same strong result is obtained from the Kao test (Annex 2).

It should be noted that the literature on panel cointegration is still in a process of development and fine-tuning. In particular, cointegration tests allowing for cross-sectional dependence, when improved further, should replace / be used together with the Pedroni and Kao (first-generation) tests, which assume cross-sectional independence.

As a next step we proceed with estimating a fixed effects (FE) model. The results are shown in Table 4 below.

Table 4. Pooled Least Squares Fixed Effects Model, Estimation Results

The fixed effects coefficients differ in sign and size. Consequently, we test for (unobserved) heterogeneity. The test applied is the standard (in EViews) Redundant Fixed Effects Test, where the null hypothesis is that the fixed effects are all equal to each other.

Table 5. Redundant fixed effects test

The p-values related to the F-statistic and the Chi-square statistic are both very small (see Table 5, above), providing strong evidence against the null hypothesis and suggesting the existence of heterogeneity.
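The panel estimation above is done in EViews. As a rough analogue of the same fixed-effects step, here is a minimal sketch using Python's linearmodels package; the long-format file and column names are assumptions for illustration.

```python
# Fixed-effects (entity) panel regression of spreads on CDS and VIX,
# sketching the pooled-panel step; assumes a long-format DataFrame
# indexed by (country, date) with columns SSEMBI, CDS, VIX.
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

panel = pd.read_csv("spreads_panel.csv", parse_dates=["date"])
panel = panel.set_index(["country", "date"])

fe = PanelOLS.from_formula("SSEMBI ~ 1 + CDS + VIX + EntityEffects",
                           data=panel).fit(cov_type="clustered",
                                           cluster_entity=True)
print(fe)

# The poolability F-test reported with the fit plays the role of the
# redundant-fixed-effects test: are the country intercepts all equal?
print(fe.f_pooled)

# Random-effects counterpart, for the FE-vs-RE comparison in the text.
re = RandomEffects.from_formula("SSEMBI ~ 1 + CDS + VIX", data=panel).fit()
print(re)
```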
Next we plot and examine both the residual correlation and residual covariance matrices. The correlation matrix indicates that there certainly is correlation among cross-sections. Interestingly, Ukraine displays negative correlations with Poland and Hungary, and a similar effect obtains between Serbia and Poland: an "anti-contagion" effect. The diagonal shows the variances of the residuals for each cross-section in bold; the remaining numbers of the matrix show the covariances of the residuals across cross-sectional units. Based on the results from Tables 6 and 7 (above), we explore the opportunity to obtain a more efficient estimator (using EGLS with SUR weights) by utilising the correlations between the residuals. The results of the re-estimated model are presented below.

Table 8. Fixed effects model using estimated generalized least squares (EGLS) with seemingly unrelated regression (SUR) weights

The estimates of CDS and VIX are somewhat smaller, but as EGLS is more efficient than the OLS estimator under heteroscedasticity, the standard errors of CDS and VIX are smaller.

Next we experiment with estimating a random effects (RE) model (Table 9, below).

Table 9. Random Effects Model, Estimation Results

While the regression coefficients obtained are practically identical to those of the fixed effects model, the random effects model presumes that the random effects are uncorrelated with the explanatory variables -- if not, the estimators would be rendered inconsistent (an endogeneity problem). We apply the Hausman test (correlated random effects) to test this hypothesis.

Table 10. Correlated random effects -- Hausman test

The test (Table 10, above) rejects the null hypothesis at all conventional levels of confidence. Hence, the assumption that the random effects are uncorrelated with the explanatory variables is not acceptable, and does not allow us to continue further with this approach.

Sovereign Bond Spreads, Macroeconomic Determinants -- Spread Regressions by Country

In what follows we move to quarterly data frequency and try to assess the effect of the macroeconomic variables listed below as determinants of the sovereign bond spreads. We continue by applying the ARDL procedure.

Bulgaria

Comparing the F-statistic (2.0662) obtained (below) with the critical value bounds determined by Pesaran, Shin and Smith (1996), the critical values at the 90 per cent level are specified as 2.425 to 3.574. Since the F-statistic is below the lower bound of the critical range, we cannot reject the null of no long-run relationship, independent of the order of integration of the respective variables.

Hungary

We compare the F-statistic (14.1462) with the critical value bounds determined by Pesaran, Shin and Smith (1996). The critical values at the 99 per cent level are specified as 3.516 to 4.781. Since the F-statistic is above the upper bound of the critical range, we reject the null of no long-run relationship, independent of the order of integration of the respective variables. Then, based on the Schwartz Bayesian information criterion (SBC), we select the ARDL(1,0,1,0,0,0) model specification and estimate the long-run coefficients; subsequently we estimate the error correction model related to these long-run coefficients. Excluding RGDPG and INFL, all other coefficients are statistically significant and with the expected sign. It is interesting to observe that for Hungary the CHTOT exercises the most substantial effect on SSEMBI, i.e., a one unit increase in the terms of trade would lead to an almost 25 basis points reduction in the spread (SSEMBI). The error correction coefficient is strongly significant, has the correct sign and implies a half-life to convergence of about 50 working days.
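The decision rule applied in these bounds tests is mechanical; a minimal helper, using the 90 and 99 per cent bounds quoted in the text from Pesaran, Shin and Smith (1996):

```python
# Pesaran-Shin-Smith bounds-test decision rule, with the critical bounds
# quoted in the text (90%: 2.425-3.574; 99%: 3.516-4.781).
def bounds_decision(f_stat: float, lower: float, upper: float) -> str:
    if f_stat > upper:
        return "reject H0: evidence of a long-run relationship"
    if f_stat < lower:
        return "cannot reject H0: no long-run relationship"
    return "inconclusive: F lies between the bounds"

print(bounds_decision(2.0662, 2.425, 3.574))   # Bulgaria, quarterly, 90%
print(bounds_decision(14.1462, 3.516, 4.781))  # Hungary, quarterly, 99%
```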
Poland

The value of the F-statistic (3.6998) attained (below) is just above the upper critical value bound (at the 90 per cent level), specified as 2.425 to 3.574. Hence, at this level, we can reject the null hypothesis of no long-run relationship. With the exception of CHTOT, all coefficients are statistically significant and with the expected sign. We observe that for Poland the RGDPG has the most important effect on SSEMBI, i.e., a one unit increase in real GDP growth would lead to about a seven basis point reduction in the spread (SSEMBI). The error correction coefficient is strongly significant, has the correct sign and implies a half-life to convergence of about 38 working days.

Russia

Finally, we amend the model somewhat to include the change in oil prices variable (CHOILP) and remove PDGDP (public debt as a per cent of GDP, for which we do not have data), and apply it to Russia. The value of the F-statistic (3.8821) attained (below) is above the upper critical value bound (at the 90 per cent level), specified as 2.425 to 3.574. Hence, at this level, we can reject the null hypothesis of no long-run relationship. With the exception of VIX, all coefficients are statistically insignificant and with the "wrong" sign. Interestingly, one of these coefficients is CHOILP. The error correction coefficient is significant and has the correct sign. However, it implies quite a long half-life to convergence, of about 215 working days.

Concluding Remarks and Policy Implications

First we analyse the explanatory power of the financial market variables (using proxies for changes in market sentiment (VIX) and for adjustments in a country's risk (CDS)) over the emerging market bond index spread on a country-by-country basis. Using the F-statistic test for the joint significance of zero restrictions on the lagged levels of the additional variables (Pesaran, Shin and Smith, 1996), we cannot reject at conventional significance levels the null hypothesis that sovereign bond spreads are cointegrated with the VIX and the country-specific CDS.

On examination, most of the explanatory variables are strongly significant (t-ratios are presented in parentheses) and have the expected signs. The underlying ARDL equations also pass the diagnostic tests in the majority of cases. Studying the range of the estimated values, we observe that a one point increase in a country's risk (as measured by the CDS) would induce an increase in the region of about half a basis point (ranging from about 0.33 to 0.82) in the dependent variable SSEMBI (the bond spread), everything else remaining the same. If VIX (the proxy for global risk aversion) goes up by one point, this will induce on average about an 11 basis points increase (displaying values from about 3.9 to just above 21) in the country's spread.

The error correction coefficient estimates are within the cluster of -0.015 to -0.044, suggesting a reasonable speed of convergence to equilibrium, with half-lives ranging from fewer than 15 working days to about 45 working days. Hence, in just about two-thirds of a quarter the spread (SSEMBI) should return to its equilibrium. Interestingly, the error correction coefficients, and hence the speed of convergence, for most of the countries (Bulgaria, Hungary, Russia, and Kazakhstan) are almost one and the same (in the vicinity of -0.042 to -0.044). Therefore it is evident that hypothetically they would converge back to their respective equilibrium values for the SSEMBI more than three times as fast as Serbia and Poland.
The coefficients for mutual determination corrected for degrees of freedom are generally between 0.2 and 0.5, suggesting that about 20 to 50 per cent of the variability in the dependent variable (SSEMBI) has been explained. The exceptions are Hungary and Poland, where just about five per cent (on average) of the variability of the respective dependent variable is explained.

Furthermore, for Serbia the tests (for joint significance) suggest that the variables CDS and VIX can be treated as the long-run forcing variables for the dependent variable SSEMBI. Interestingly, while this is valid for Serbia, for Poland, Russia and Ukraine our results suggest a bidirectional relationship between CDS (as a potential dependent variable) and SSEMBI and VIX, and non-rejection of the null hypothesis that the lagged level variables CDS and SSEMBI do not enter significantly in the potential determination (potential equation) of VIX. In the case of Kazakhstan, the null hypothesis that the lagged values of SSEMBI and VIX do not enter significantly in the determination of CDS cannot be rejected, but there is an apparent relationship between VIX and CDS and SSEMBI. Regarding Bulgaria and Hungary, we observe complete bidirectional interrelations among all three variables.

In our analysis we estimate separate equations / data generation processes for the various (former centrally planned) economies and find statistically significant and economically perceivable coefficients. The data shortage precluded any potential experimentation with different specifications or another dataset. Hence, since the coefficients tend to be homogeneous, pooled panel estimation would be useful and suitable to exploit. For this reason we estimate cointegrated pooled panel models. The results from the fixed effects and random effects pooled panel data models are practically identical and are consistent with our previous findings from the individual equation estimates. Concretely, a one point increase in CDS (the proxy for country risk) would add about 0.42 basis points to the variable SSEMBI, ceteris paribus; whereas a one unit increase in VIX (the stand-in for global risk aversion) would bring about an 8.3 basis points increase in SSEMBI. The coefficient of mutual determination corrected for degrees of freedom is very high, suggesting that about 84 per cent of the variability of the dependent variable (sovereign bond spreads) is explained.

Next we examine the effect of changes in macroeconomic fundamentals on changes in spreads. A relatively noteworthy proportion of fluctuations in transition economies' market spreads may be attributed to country-specific fundamentals. The results imply that improved macroeconomic fundamentals, such as lower ratios of debt to GDP, higher rates of real GDP growth, and low inflation, help in reducing sovereign spreads. For example, reduced indebtedness seems to contribute positively to sovereign spreads in Hungary; one might expect the same to be valid for Poland, but in the case of Poland the model did not include any measure of indebtedness, due to the lack of a time series from (at least) 2001Q1.

It is interesting that in the cases of Bulgaria and Russia we find four insignificant independent variables, whereas these are significant for some of the other countries. This seems to be a possible indication of institutional weakness, limiting the effect of the stance of the macroeconomic aggregates and making their impact trivial. This result is in agreement with Hallerberg and Wolff (2006).
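To make the pooled-panel elasticities just quoted concrete, a tiny back-of-the-envelope calculation; the shock sizes are hypothetical inputs, not figures from the paper:

```python
# Back-of-the-envelope spread impact from the pooled-panel point estimates:
# +1 point of CDS adds ~0.42 bp to SSEMBI; +1 point of VIX adds ~8.3 bp.
beta_cds, beta_vix = 0.42, 8.3

def spread_change(d_cds: float, d_vix: float) -> float:
    """Predicted change in SSEMBI (basis points), ceteris paribus."""
    return beta_cds * d_cds + beta_vix * d_vix

# Hypothetical shock: a 100-point CDS widening with VIX up 10 points.
print(f"{spread_change(100, 10):.0f} bp")  # 42 + 83 = 125 bp
```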
Still, macroeconomic aggregates play a certain role in determining bond spreads, but mostly through the channel of global risk aversion / appetite, corroborating Favero and Missale (2011) for our specific set of (CEE and CCA) countries. Evidently, the only variable which appears in both the financial market reaction and macroeconomic fundamentals equations, and works strongly and consistently in the same direction, is VIX. This suggests that the levels of spreads can be subject to significant alteration from the impact of financial market volatility (as measured by VIX) and could potentially be pushed up or down in ways that have little to do with the respective macroeconomic fundamentals.

The error correction coefficients suggest a return to equilibrium (with half-lives) in the range of about 38 to 50 working days -- a very similar order of magnitude to that derived in the financial market high-frequency data sample equations.

This sheds light and provides clear evidence on the critical factors that have a significant influence on the variation in spreads in the transition countries' environment -- in reality, worldwide factors are principally responsible for the changes in spreads. Hence, any kind of government intervention aiming to bring down spreads may prove ineffective, unless strongly determined and unfalteringly pre-coordinated.

Now we may ask: has the transition ended? It is debatable, and an agreement on the appraisal of the results of transition is impractical, as there are expectations, attitudes and beliefs involved. What would be the appropriate criteria? Obvious cases to look at for constructive suggestions would be Japan, South Korea and China. In their cases it seemed to be self-evident: supreme economic success guided by the respective government (the developmental state). Considering the transition economies, whatever their pros and cons, none of them matches the remarkable economic growth achieved by the previous group. Why might that be? The answer is closely linked to the quality of governance, human capital development and corruption, and as a result the level of development of social knowledge and its practical implementation; this is generally manifested by the stage of development of manufacturing.

Transition would then end when the transition economies find their place in the global production process and become equal partners with the industrialised world economies -- becoming integrated into the international economic framework rather than subordinated to it. This will depend on their abilities in developing and exploiting knowledge in the contemporary, exceptionally competitive world economy. If government maintains strong incentives to provide public goods and retains motivation for wealth creation through the efficient use of capital and labour, the economy will remain connected to its comparative advantage, which (for a low-rent country) lies initially in labour-intensive manufactured goods. The brief initial dependence on primary product exports (of low-rent economies) encourages industrialization at a relatively low per capita income, which is therefore labour-intensive and competitive and triggers beneficial economic advancement. Moreover, competitive diversification increases the capacity of the economy to cope with economic shocks and reinforces the resilience that arises from sustained high rates of investment.

There is a relationship between sustainable macroeconomic growth and financial sector development.
Adequate attention needs to be paid to institutional development and regulatory structure. The financial sector / banking sector features that are critical for successful intermediation and indispensable for growth include: i) transparency (e.g., independence of commercial bank governance from detrimental oligarchic "clients"); ii) sufficient central bank independence from government control; iii) macro-prudential policy oriented towards the resilience of the entire system, with careful judgement (rather than just fixed rules) exercised when applying macro-prudential instruments (in dealing with market failures, e.g., moral hazard, information frictions, risk illusion, herd behaviour, etc.); and iv) enhanced efficiency of international cooperation in this area.

Potential major future research areas include: the dynamic interaction of local and international developments; the absorbing capacity of transition economies; markets in transition economies; and the importance of modern manufacturing for transition economies.

Variables: Spread -- JPM EMBI Global sovereign stripped spread; VIX -- Volatility Index (proxy for global risk aversion); CDS -- Credit Default Swap (perceived individual country risk).

Figure 1. The Emerging Markets Bond Indices (EMBI) Sovereign Stripped Spread for selected CEE and CCA countries, 1994 to 2012, daily.

Figure 4. Chicago Board Options Exchange Volatility Index (VIX) -- Global Risk Aversion Proxy, daily.

eq. 8: SSEMBI = -173.21 INPT + 0.3261 CDS + 20.9384 VIX - 0.0417 ECM

Figure 5. Sovereign bond yield spreads, potential macroeconomic determinants: Bulgaria, Croatia, Hungary, Poland, and Russia.

Table 1. Data Availability. Note: While we have only been able to use data at the intersection of Tables 2 and 3 for the daily-frequency empirical analysis, and no more than the data which overlap among Tables 1, 2 and 3 for the quarterly estimates, we have been careful not to push our analysis beyond what both available and reliable data permit.
Table 3. Credit Default Swaps (CDS USD 5Y), Daily -- Data Availability

Yield Spreads, Financial Markets Determinants, June 2006 - June 2012, Daily: Estimated Equations and Results

Table 6. Residual Correlation Matrix

Table 7. Residual Covariance Matrix

Joint test of zero restrictions on the coefficients of the additional variables (obs: 39). We test the null hypothesis of the non-existence of a long-run relationship, i.e., H0: b1 = b2 = b3 = b4 = b5 = b6 = 0.
2018-12-15T03:50:21.227Z
2017-02-10T00:00:00.000
{ "year": 2017, "sha1": "81dfb769af1497d7c5120183e8eb080c9b9cbfc9", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/ibr/article/download/66306/35871", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "81dfb769af1497d7c5120183e8eb080c9b9cbfc9", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
1358728
pes2o/s2orc
v3-fos-license
Hyperthyroidism-Associated Insulin Resistance Is Not Mediated by Adiponectin Levels

To evaluate the relationship between circulating adiponectin and insulin sensitivity in patients with hyperthyroid Graves' disease, we studied 19 adult patients with this disease and 19 age- and sex-matched euthyroid controls. All hyperthyroid patients were treated with antithyroid drugs and were re-evaluated after thyroid function normalized. Before antithyroid treatment, plasma adiponectin concentrations did not differ from those in the control group, and the adiponectin levels remained unchanged after treatment. The homeostasis model assessment of insulin resistance (HOMA-IR) in the hyperthyroid group was higher before treatment than after treatment. There was no significant difference in serum glucose and insulin levels between the hyperthyroid and control groups, or in the hyperthyroid group before and after treatment. BMI-adjusted adiponectin levels were not different among the three groups. On the other hand, BMI-adjusted insulin levels and HOMA-IR values were significantly decreased after management of hyperthyroidism. Pearson's correlation revealed that insulin and HOMA-IR values correlated positively with triiodothyronine (T3) and free thyroxine (FT4) levels. However, adiponectin did not correlate with T3, FT4, insulin, HOMA-IR or thyrotropin receptor autoantibody (TRAb) levels. In conclusion, the insulin resistance associated with hyperthyroidism is not mediated by plasma adiponectin levels.

Introduction

Hyperthyroidism has been linked to reduced lean and fat body mass, resulting in lower-than-normal body weight. Patients with hyperthyroidism often have disrupted intermediary metabolism, and thyrotoxicosis has been associated with insulin resistance [1-3]. The mechanism of insulin resistance induced by thyrotoxicosis has not been completely elucidated. Adipocytokines, which are regulated by thyroid hormones, may play a role in the development of insulin resistance in hyperthyroidism. Adipocytokines play crucial roles in the regulation of energy homeostasis, insulin sensitivity, lipid/carbohydrate metabolism, and even inflammatory/atherogenic reactions [4-11]. Among the adipocytokines, adiponectin is the only one with anti-inflammatory and antiatherogenic properties [6-9], and it is paradoxically decreased in insulin-resistant states [4]. We therefore measured plasma adiponectin concentrations in subjects with hyperthyroidism due to Graves' disease, before and after antithyroid treatment, and in euthyroid control subjects. The relationships among adiponectin concentrations, insulin concentrations, and the homeostatic model assessment insulin resistance index (HOMA-IR) were evaluated. We aimed to evaluate whether adiponectin is associated with indicators of insulin resistance in patients with hyperthyroid Graves' disease.

Subjects. The study enrolled 19 patients with hyperthyroidism due to Graves' disease (14 women and 5 men) and 19 age- and sex-matched euthyroid controls (17 women and 2 men), as described in a previous study [12]. All hyperthyroid patients were treated with antithyroid drugs, either propylthiouracil (Procil, Nysco Co., Ltd., Taiwan) or carbimazole (Neo-Thyreostat, Dr. Herbrand Co., Ltd., Germany). Thyroid function normalized after three to seven months of antithyroid treatment (mean = 5.4 ± 0.4 months). The patients were evaluated at the time of diagnosis and after thyroid function normalized.
The plasma and serum samples were tested, all after overnight fasting, to measure serum concentrations of free thyroxine (FT4), total triiodothyronine (T3), thyrotropin (TSH), thyrotropin receptor autoantibody (TRAb), glucose, and insulin, and plasma concentrations of adiponectin (the plasma adiponectin assays had been performed simultaneously with the earlier protocol but were not previously reported). Body mass index (BMI) was calculated as weight in kilograms divided by the square of the height in meters. Insulin resistance was estimated using the HOMA-IR index, calculated as serum glucose concentration (mmol/L) × serum insulin concentration (mU/L)/22.5 (a short numeric sketch of these indices is given below).

Biochemistry and Hormone Analyses. Serum glucose, FT4, T3, TSH, and insulin concentrations were measured as reported in the former protocol [12]. Plasma adiponectin concentrations were measured by a quantitative sandwich enzyme immunoassay technique (R&D Systems, Inc., Minneapolis, MN, USA). The intra-assay and inter-assay CVs were 3.4% and 5.8%, respectively. The sensitivity of the assay was 0.246 ng/mL. Serum TRAb was analyzed by a radioreceptor assay method (RSR Ltd., Cardiff CF23 8HE, UK); levels greater than 10% were considered positive.

Statistical Analyses. Data are reported as mean ± standard error of the mean (SEM). Comparisons of hyperthyroid subjects before treatment, hyperthyroid subjects after treatment, and euthyroid control subjects were made using one-way analysis of variance (ANOVA) with the Bonferroni test for post-hoc multiple comparisons. The Mann-Whitney test was used for nonparametric data. Correlations between parameters were assessed using Pearson's correlation analysis. A value of P < .05 was considered statistically significant.

Results

The demographic and clinical characteristics of the study population are shown in Table 1. The mean ± SEM age was 32.6 ± 1.8 years for hyperthyroid subjects and 36.7 ± 2.7 years for matching control subjects. As expected, subjects in the hyperthyroid group before treatment had lower TSH and higher T3 and FT4 serum concentrations than they did after treatment or than the control group. In addition, TRAb also decreased after antithyroid drug treatment (44.9 ± 5.7 versus 29.1 ± 5.0%). HOMA-IR values were higher in the hyperthyroid group before treatment (2.06 ± 0.26 mM mU/L) than after treatment (1.21 ± 0.16 mM mU/L, P = .027). Before antithyroid treatment, plasma adiponectin concentrations did not differ from those in the control group (5.57 ± 0.97 versus 6.55 ± 0.71 ng/L, ns), and the adiponectin levels remained unchanged after treatment (6.62 ± 0.80 versus 5.57 ± 0.97 ng/L, ns). There was no significant difference in serum glucose and insulin levels between the hyperthyroid and control groups, or in the hyperthyroid group before and after treatment. BMI-adjusted adiponectin levels were not different among the three groups (Table 2). On the other hand, BMI-adjusted insulin levels and HOMA-IR values were significantly decreased after management of hyperthyroidism. Pearson's correlation (Table 3) revealed that insulin and HOMA-IR values were positively correlated with T3 and FT4 levels. However, adiponectin did not correlate with T3, FT4, insulin, HOMA-IR, or TRAb levels.

Discussion

Patients with hyperthyroidism are known to have elevated fasting serum glucose levels [13,14], which may be explained by increased endogenous glucose production through more rapid glycogenolysis and gluconeogenesis [13,15,16].
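As a concrete illustration of the two indices defined in the Methods above, the snippet below computes BMI and HOMA-IR and runs a Pearson correlation of the kind reported in Table 3. All numeric values are hypothetical examples for demonstration, not patient data from this study.

```python
from scipy.stats import pearsonr

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def homa_ir(glucose_mmol_l: float, insulin_mu_l: float) -> float:
    """HOMA-IR index: glucose (mmol/L) x insulin (mU/L) / 22.5."""
    return glucose_mmol_l * insulin_mu_l / 22.5

# Hypothetical example values (not from this study):
print(bmi(62.0, 1.60))       # ~24.2 kg/m^2
print(homa_ir(5.0, 9.0))     # ~2.0, in the range reported pre-treatment

# Pearson correlation on hypothetical paired T3 and HOMA-IR values:
t3 = [2.1, 3.5, 4.2, 2.8, 3.9]
ir = [1.4, 2.0, 2.6, 1.7, 2.3]
r, p = pearsonr(t3, ir)
print(f"r = {r:.2f}, p = {p:.3f}")
```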
The speed of insulin-stimulated glucose disposal in peripheral tissues is variable in hyperthyroidism and may be normal, increased, or decreased [3]. Adipose tissue, an endocrine-active tissue, releases adipocytokines that modulate several peripheral tissue functions. Among these adipocytokines, adiponectin is a 244-amino-acid protein also known as adipose most abundant gene transcript 1. Reduced adiponectin secretion has been found to be associated with the development of insulin resistance in animal models of obesity and lipoatrophy [4]. Injection of adiponectin into nonobese diabetic mice may lead to an insulin-independent decrease in glucose levels [17]. Plasma adiponectin concentrations in patients with both obesity and type 2 diabetes mellitus have been found to be reduced [18,19].

Adiponectin and thyroid hormones share some physiological actions, such as reduction of body fat by increasing thermogenesis and lipid oxidation. One animal study showed a potential inhibitory effect of T3 on adiponectin mRNA expression in adipose tissue [20]. However, adiponectin concentrations have also been found to be associated with FT4 levels in patients with hypothyroidism [21]. Studies of the effect of thyroid hormones on adiponectin levels have thus produced contradictory results, and more work is needed to clarify this relationship. The limited studies on the relationship between circulating adiponectin levels and thyroid function status have likewise been inconsistent. In human studies, hyperthyroidism has been associated with similar [22,23] or elevated [24,25] circulating adiponectin levels compared with euthyroid subjects. Normalization of hyperthyroidism with appropriate therapy was not accompanied by significant changes in circulating adiponectin levels in one study [22]; conversely, in another study a significant reduction was found [26]. In addition, the latter study also demonstrated that hyperadiponectinemia was associated with levels of TRAb, pointing to an immunological influence on adiponectin metabolism. However, we did not find a correlation between TRAb and adiponectin levels in hyperthyroid patients.

In the present study, HOMA-IR values and BMI-adjusted insulin concentrations were decreased after hyperthyroidism management. In addition, HOMA-IR and insulin levels were positively correlated with T3 and FT4 levels in patients with Graves' disease. These findings are consistent with the assumption that hyperthyroidism is associated with higher insulin resistance. However, we did not observe differences in adiponectin concentrations among hyperthyroid patients, control subjects, and post-treatment patients, and there was no correlation between serum insulin or HOMA-IR levels and plasma adiponectin concentrations. These observations suggest that the insulin resistance associated with hyperthyroidism is not mediated by a reduction in plasma adiponectin levels. However, because of the small sample size, these results must be interpreted with caution.

Hyperthyroidism could alter the neurophysiology of food intake regulation, which may drive marked hyperphagia and craving for carbohydrates [27]. A carbohydrate-rich diet with a high glycemic load appears to be associated with lower adiponectin concentrations [28]. Therefore, carbohydrate intake in hyperthyroid patients may influence adiponectin levels. Hyperthyroidism is associated with insulin resistance; however, hyperthyroidism also reduces BMI, which may increase insulin sensitivity. Plasma adiponectin concentrations are decreased in patients with obesity.
Moreover, adiponectin levels are also paradoxically decreased in insulin-resistant states. The concentrations of adiponectin in hyperthyroid subjects may therefore be influenced by many conflicting factors. Nevertheless, no study has reported a decreased adiponectin concentration in hyperthyroidism, so the insulin resistance associated with thyrotoxicosis is unlikely to be mediated by altered adiponectin secretion.

Conclusions

In conclusion, the insulin resistance associated with hyperthyroidism is not mediated by plasma adiponectin levels. Additional studies are needed to clarify the contribution of these adipocytokines to the development of insulin resistance in patients with hyperthyroidism.
2014-10-01T00:00:00.000Z
2011-01-18T00:00:00.000
{ "year": 2011, "sha1": "9d10e42a950557776e5215320455845be847dc78", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jtr/2011/194721.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3805ffb8c49c1d9aa07ffcc6907b0dcfd6807940", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270766963
pes2o/s2orc
v3-fos-license
A panoptic segmentation dataset and deep-learning approach for explainable scoring of tumor-infiltrating lymphocytes

Tumor-Infiltrating Lymphocytes (TILs) have strong prognostic and predictive value in breast cancer, but their visual assessment is subjective. To improve reproducibility, the International Immuno-oncology Working Group recently released recommendations for the computational assessment of TILs that build on visual scoring guidelines. However, existing resources do not adequately address these recommendations due to the lack of annotation datasets that enable joint, panoptic segmentation of tissue regions and cells. Moreover, existing deep-learning methods focus entirely on either tissue segmentation or cell nuclei detection, which complicates the process of TILs assessment by necessitating the use of multiple models and reconciling inconsistent predictions. We introduce PanopTILs, a region and cell-level annotation dataset containing 814,886 nuclei from 151 patients, openly accessible at: sites.google.com/view/panoptils. Using PanopTILs we developed MuTILs, a neural network optimized for assessing TILs in accordance with clinical recommendations. MuTILs is a concept bottleneck model designed to be interpretable and to encourage sensible predictions at multiple resolutions. Using a rigorous internal-external cross-validation procedure, MuTILs achieves an AUROC of 0.93 for lymphocyte detection and a DICE coefficient of 0.81 for tumor-associated stroma segmentation. Our computational score closely matched visual scores from 2 pathologists (Spearman R = 0.58–0.61, p < 0.001). Moreover, computational TILs scores had a higher prognostic value than visual scores, independent of TNM stage and patient age. In conclusion, we introduce a comprehensive open data resource and a modeling approach for detailed mapping of the breast tumor microenvironment.

Shangke Liu 1,4, Mohamed Amgad 1,4, Deeptej More 1, Muhammad A. Rathore 1, Roberto Salgado 2,3 & Lee A. D. Cooper 1
Advances in digital imaging of glass slides and machine learning have increased interest in histology as a source of data in cancer studies 1,2. Tissue morphology contains important prognostic and diagnostic information and reflects underlying molecular and biological processes. This work presents approaches for the computational discovery of interpretable predictive histologic biomarkers, focusing on invasive breast carcinomas and immune response. Histopathology is a medical field where medical experts (i.e., pathologists) examine stained microscopic tissue sections to make diagnostic decisions, most often from tumor biopsies. While much of medicine relies on the clinical examination of patients, histopathology is a visually focused field, like radiology, where much of the emphasis is on visual pattern recognition.

The term biomarker refers to a biological feature that can be used to indicate a clinical outcome. For example, prognostic biomarkers are biological features associated with good (or bad) prognosis, while predictive biomarkers predict response to therapy in randomized controlled trials 3. Typically, when a histologic trait is related to outcomes in cancer, it is incorporated into the grading criteria, though this is not always the case. For example, there has been a strong focus on tumor-infiltrating lymphocytes (TILs) as a prognostic and predictive biomarker in breast cancer and other solid tumors in recent years 4. This is because TILs infiltration offers a fairly direct visualization of how well the host (patient) immune system can respond to the growing tumor.
The majority of breast cancers are carcinomas. Based on morphology, breast carcinomas include many variants; the most common are infiltrating ductal carcinoma (which originates from breast duct epithelium) and infiltrating lobular carcinoma (from breast acini/glands) 5,6. There are numerous morphological elements within a single breast cancer slide. Integrative genomic analysis of breast cancer identified four main subtypes: Luminal-A, Luminal-B, Her2-Enriched/Her2+, and Basal 7. These subtypes have distinct alterations and are associated with distinct patient survival prospects 8. TILs are particularly prognostic and predictive of therapeutic response in basal and Her2+ breast carcinomas 4.

The stromal TILs score is the fraction of stroma within the tumor bed occupied by lymphoplasmacytic infiltrates (Fig. 1). TILs are assessed visually by pathologists through examination of formalin-fixed, paraffin-embedded, hematoxylin and eosin (FFPE H&E) stained slides from tumor biopsies or resections. They are subject to considerable inter- and intra-observer variability, and hence a set of standardized recommendations was developed by the International Immuno-Oncology Working Group 9,10. Nevertheless, observer variability remains a critical limiting factor in the widespread clinical adoption of TILs scoring in research and clinical settings. Therefore, a set of recommendations was published for developing computational tools for TILs assessment 11. Several existing computational algorithms have been developed to score TILs; however, most diverge from clinical scoring recommendations, as summarized by Amgad et al. 11. This report describes the PanopTILs dataset and MuTILs, an interpretable deep-learning model for breast cancer WSIs, with a special emphasis on evaluating TILs (Fig. 2, Supplementary Table 1). MuTILs jointly classifies both tissue regions and individual cell nuclei to produce a panoptic segmentation for TIL scoring and other applications.

Results

Accurate panoptic segmentation of the breast cancer tumor microenvironment

MuTILs has a strong emphasis on explainability; it segments individual regions and nuclei, which are then used to calculate the computational scores (Supplementary Fig. 1). Table 1 shows the region segmentation and nucleus classification accuracy on the testing sets. MuTILs achieves high classification performance for the components of the computational TILs score, including stromal region segmentation (DICE = 80.8 ± 0.4) as well as the classification of fibroblasts (AUROC = 91.0 ± 3.6), lymphocytes (AUROC = 93.0 ± 1.1), and plasma cells (AUROC = 81.6 ± 6.6). Region segmentation performance is variable and class-dependent, with the predominant classes (cancer, stroma, and empty) being the most accurate. The region constraint improves nuclear classification AUROC by ~2-3% overall, mainly by reducing the misclassification of immature fibroblasts and large TILs/plasma cells as cancer. A detailed performance analysis of the impact of the region constraint is presented in Supplementary Fig. 2 and Supplementary Tables 2-7. The generalization accuracy of MuTILs predictions is also supported by a qualitative examination of model predictions on the ROIs from the BCSS and NuCLS datasets (Fig. 3) and on full WSIs (Fig. 4). Note that in Fig. 4, the predictions show full WSI inference for illustration.
We compared the performance of MuTILs to previously published models for tissue region segmentation 12 and nuclei instance segmentation 13. The region segmentation performance of MuTILs was compared to the fully convolutional network (VGG-FCN8) of ref. 12 on common testing slides from both papers (see Supplementary Table 8). We note that while MuTILs segments tissue regions at 10× objective magnification, the VGG-FCN8 model performs segmentation at 40× objective magnification. MuTILs improves segmentation of stromal regions while sacrificing some performance on epithelial and TIL regions (see Supplementary Table 8). A per-slide performance comparison of tissue region segmentation is presented in Supplementary Data 1. In nuclear classification, MuTILs performs better than the mask-RCNN model of ref. 13 on all nuclei types, including by 2% for TIL nuclei (see Supplementary Table 9). As discussed earlier, for TILs scoring, the most clinically relevant classes are stromal regions and TILs nuclei.

Computational TIL scores are moderately concordant with pathologist TIL scores

Computational TILs score variants had a modest to high correlation with the visual scores, with Spearman correlations ranging from 0.55 to 0.61 (all p-values < 0.001) (Fig. 5). Points in red are outliers that contributed to the correlation metric but were not used in calibration. Some slides were outliers with discrepant visual and computational scores; the causes of this discrepancy are discussed below. Both global and saliency-weighted scores were significantly correlated with the visual scores (p < 0.001). We further analyzed pathologist-pathologist concordance using Bland-Altman analysis. For the pathologist-pathologist comparison, most points fall within the ±two standard deviations interval, with the strongest differences seen in the moderate scores ranging from 20 to 60%, with no evidence of proportional bias (Supplementary Fig. 3). Score-score concordance was evaluated to measure agreement between scoring methods composed of score variants (nTSa, nTnS, nTnA) and score aggregation methods (global, saliency-weighted). Correlations are high when comparing aggregation methods for the same score variant (Spearman, 0.89-0.92) and across score variants (0.72-0.86) (Supplementary Fig. 4).

Computational TIL scoring improves prognostic accuracy for infiltrating ductal carcinomas and Her2+ carcinomas

We examined the prognostic value of MuTILs on infiltrating ductal carcinomas and Her2+ carcinomas (Fig. 6). While we had access to visual scores from the basal cohort, the number of outcomes was limited, and neither visual nor computational scores had prognostic value there. All metrics were obtained by saliency-weighted averaging of computational scores from 300 ROIs. Both visual and computational scores had good separation within the infiltrating ductal cohort, although only the nTnS and nTnA computational scores had significant log-rank p-values (p = 0.009 and p = 0.006, respectively). Within the Her2+ cohort, all metrics had good separation on the Kaplan-Meier curves, although the visual score had a borderline p-value. All computational scores were significant within this cohort (p = 0.018 for nTSa, p = 0.002 for nTnS, and p = 0.006 for nTnA).
We also examined the prognostic value of the continuous (non-thresholded) TILs scores using Cox proportional hazards regression, with and without controlling for clinically relevant covariates, including patient age, AJCC pathologic stage, histologic subtype, and basal status (Table 2). The analysis was restricted to slides where visual TILs scores were available, for a fair comparison. In the multivariable setting, a model was built for each metric combined with clinically salient covariates. We controlled all multivariable models for patient age and AJCC pathologic stage I and II status. Additionally, we controlled models using the infiltrating ductal carcinoma subset for basal genomic subtype status, and we controlled models using the Her2+ subset for infiltrating ductal histologic subtype status. Within the infiltrating ductal cohort, the only metric with significant independent prognostic value on multivariable analysis was the nTnS computational score. Within the Her2+ cohort, the visual score was not independently prognostic (p = 0.158), while the computational scores all had independent prognostic value, the most prognostic being the nTnS variant (p = 0.003, HR < 0.001). Saliency-weighted ROI scores almost always had better prognostic value than global computational scores.

Discussion

We present PanopTILs, a segmentation dataset that enables the joint segmentation of tissue regions and cell nuclei. This dataset enabled us to train a panoptic segmentation model, MuTILs, a lightweight deep-learning model for the reliable assessment of TILs in breast carcinomas in accordance with clinical scoring recommendations. It jointly classifies tissue regions and cell nuclei at different resolutions and uses these predictions to derive patient-level scores. We show that MuTILs produces predictions that generalize well for the predominant tissue and cell classes relevant to TILs scoring. Furthermore, the computational scores correlate significantly with visual assessment and have strong independent prognostic value in infiltrating ductal carcinoma and Her2+ cancer.

One of the difficulties facing widespread adoption of state-of-the-art DL in medical domains is the opacity of these models. There is a broad consensus that explainability is critical to trustworthiness, especially in clinical applications 1,13-15. The standard application of DL models in histopathology involves the direct prediction of targets from the raw images; for example, we may predict patient survival given a WSI scan 16. However, an alternative paradigm is beginning to emerge that combines the strong predictive power of opaque DL models with the interpretable nature of handcrafted features, a technique called concept bottleneck modeling 17. The fundamental idea is simple: 1. use DL to delineate the various tissue compartments and cells; 2. extract handcrafted features that make sense to a pathologist; 3. learn to predict the target variable, say patient survival, using an interpretable ML model that takes the handcrafted features as its input (a schematic sketch of this recipe follows below). Hence, the most challenging task is handled using powerful DL models, while the terminal prediction task uses highly interpretable models.
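The three-step recipe above can be summarized in a few lines of code. The sketch below is a schematic illustration only, assuming that step 1 (deep-learning segmentation) has already produced per-slide handcrafted features; the feature names, the toy data, and the choice of a Cox model as the interpretable terminal model are illustrative assumptions, not the MuTILs pipeline itself.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Step 1 (not shown): a DL model segments regions and nuclei per slide.
# Step 2: handcrafted, pathologist-interpretable features per slide
# (hypothetical toy values; a real cohort is needed for stable fits).
features = pd.DataFrame({
    "tils_fraction": [0.05, 0.22, 0.31, 0.08, 0.15, 0.02],  # e.g., nTnS-style score
    "age": [61, 48, 55, 70, 52, 66],
    "duration": [820, 1500, 1710, 400, 1200, 300],  # follow-up time (days)
    "event": [1, 0, 0, 1, 0, 1],                    # progression observed
})

# Step 3: an interpretable terminal model on the handcrafted features.
cph = CoxPHFitter()
cph.fit(features, duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratios are directly interpretable per feature
```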
MuTILs is a concept bottleneck model; it learns to predict the individual components that contribute to the TILs score (i.e., peritumoral stroma and TILs cells) and uses those to make the final predictions. This setup makes its predictions explainable and helps identify sources of error. The region constraint provides context for the nuclear predictions at high resolution, which reduces the misclassification of immature fibroblasts and plasma cells as cancer (Fig. 7). To improve the reliability of tissue and cell classifications, we grouped model predictions into a simplified set of labels necessary for the core task of TIL quantification. For example, normal breast acini are not well represented in the training data, and so MuTILs model predictions are not reliable for distinguishing normal and cancer acini (Fig. 7, bottom row). Hence, we assessed performance at the level of grouped classes with reliable ground truth (epithelium, stroma, TILs) at evaluation. A richer set of predicted labels could be achieved by expanding the training set or by downstream modeling of architectural patterns, which is beyond the scope of this work. The MuTILs model analyzes a typical slide in approximately 15 min on a system equipped with a single NVIDIA A4000 graphics processor. This time can be further reduced by parallelizing tiles over multiple graphics processors on a server system.

A qualitative examination of slides with discrepant visual and computational TILs scores shows three major contributors to discrepancies: 1. misclassification of some benign or low-grade tumor nuclei as TILs; 2. variation in TILs density across different areas within the slide, which causes inconsistencies in visual scoring (this phenomenon is also a well-known contributor to inter-observer variability in visual TILs scoring 10); and 3. the variable influence of tertiary lymphoid structures on the WSI-level score.

Our results show that the most prognostic TILs score variant (nTnS) is derived by dividing the number of TILs cells by the total number of cells within the stromal region. The visual scoring guidelines rely on the nTSa, which is reflected in the slightly higher correlation of the nTSa variant with the visual scores compared to nTnS 9. So why is nTnS more prognostic than nTSa? There are two potential explanations. First, it may be that nTnS is better controlled for stromal cellularity, since it would be the same in low- versus high-cellularity stromal regions if the proportion of stromal cells that are TILs is the same. Second, nTnS may be less noisy, since it relies entirely on nuclear assessment at 20× objective, while stromal regions are segmented at half that resolution.

Finally, we note that this validation was done only using the TCGA cohort, and future work will include validation on more breast cancer cohorts. In addition, we note that MuTILs cannot distinguish cancer from normal breast tissue at low resolution, which may necessitate manual curation of the analysis region, especially for low-grade cases.

MuTILs model design

MuTILs jointly classifies tissue regions and cell nuclei and extends our earlier work on this topic (Fig. 2) 18.
It acts as a panoptic segmentation algorithm 19; that is, it uses semantic segmentation to delineate tissue regions and instance segmentation to segment and classify individual cell nuclei, enabling a holistic, context-aware assessment of TILs. MuTILs comprises two parallel U-Net models 20 (each with a depth of 5) for segmenting tissue regions and nuclei at 10× and 20× objective magnifications, respectively. Inspired by the HookNet method, information is shared from the tissue region segmentation branch to inform nucleus segmentation by providing low-power context 21. Additionally, region predictions from the low-resolution branch are upsampled to 20× magnification and used to constrain the predicted nucleus classes. Tissue region predictions are used to infer attention maps that define the likelihood of different nucleus types occurring, based on learned prior probabilities. These attention maps also incorporate user-defined compatibility kernels that prohibit biologically implausible predictions, for example, a fibroblast nucleus in a tumor tissue region (a sketch of this masking step is given below). The MuTILs model was trained using a multi-task loss that gives equal weight to region-of-interest (ROI) and high-power field (HPF) region predictions, unconstrained HPF nuclear predictions, and region-constrained nuclear predictions.

Notes to Table 1: Results are on testing sets from the internal-external 5-fold cross-validation scheme (separation by hospital). Fold 1 contributed to hyperparameter tuning, so it is not included in the mean and standard deviation calculation. MuTILs achieves high classification performance for the components of the computational TILs score. Region segmentation performance is variable and class-dependent, with the predominant classes (cancer, stroma, and empty) being the most accurate. The region constraint improves nuclear classification AUROC by ~2-3% overall, mainly by reducing the misclassification of immature fibroblasts and large TILs/plasma cells as cancer (see the qualitative examination in Fig. 7). Classes and the simplified merged classes are indicated in the first and second columns, respectively. a Classes that contribute to the computational TILs score. b Performance for Necrosis/Debris and TILs-dense regions is modest, primarily because of the inherent subjectivity of the task and variability in the ground truth: infiltrated stromal regions do not have clear boundaries, and necrotic regions often have TILs infiltrates at the margin or adjacent areas of fibrosis, which are inconsistently labeled as necrosis, stroma, or TILs-dense in the ground truth. Nonetheless, classifying the cells/material that comprise necrotic regions (neutrophils, apoptotic bodies, debris, etc.) is reasonable at higher magnification. The model fails to distinguish normal and neoplastic breast epithelium at 10× magnification. This failure is likely caused by: 1. the low representation of normal breast tissue in the validation data from the NuCLS and BCSS datasets; and 2. inconsistency in defining "normal," which is sometimes used in the sense of "non-cancer" (including benign proliferation) and sometimes refers only to terminal ductal and lobular units (TDLUs). At high resolution, the distinction between cancer and normal/benign epithelial nuclei is reasonable.
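To make the region-constraint step concrete, the sketch below shows one way the upsampled region predictions could be combined with a compatibility kernel to mask nucleus-class scores. It is a minimal PyTorch illustration of the idea under assumed label sets, tensor shapes, and a hand-written compatibility matrix; it is not the MuTILs implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical label sets (illustrative only).
REGIONS = ["tumor", "stroma", "necrosis"]          # low-resolution region classes
NUCLEI = ["cancer", "fibroblast", "lymphocyte"]    # high-resolution nucleus classes

# Compatibility kernel: rows = regions, cols = nucleus classes.
# A zero prohibits a biologically implausible pairing, e.g. a fibroblast in tumor.
COMPAT = torch.tensor([
    [1.0, 0.0, 1.0],   # tumor:    cancer yes, fibroblast no, lymphocyte yes
    [0.0, 1.0, 1.0],   # stroma:   fibroblasts and lymphocytes only
    [0.0, 0.0, 0.0],   # necrosis: no viable nucleus classes
])

def constrain_nucleus_logits(nucleus_logits: torch.Tensor,
                             region_logits: torch.Tensor) -> torch.Tensor:
    """Mask nucleus class probabilities with region-derived attention maps.

    nucleus_logits: (B, len(NUCLEI), H, W) at high resolution.
    region_logits:  (B, len(REGIONS), h, w) at low resolution.
    """
    # Upsample region predictions to the nucleus branch's resolution.
    region_prob = F.interpolate(region_logits, size=nucleus_logits.shape[-2:],
                                mode="bilinear", align_corners=False).softmax(1)
    # Attention map per nucleus class: expected compatibility under the
    # predicted region distribution, shape (B, len(NUCLEI), H, W).
    attention = torch.einsum("brhw,rn->bnhw", region_prob, COMPAT)
    # Constrained prediction: per-pixel nucleus probabilities re-weighted
    # and re-normalized so each pixel's class probabilities sum to one.
    constrained = nucleus_logits.softmax(1) * attention
    return constrained / constrained.sum(1, keepdim=True).clamp_min(1e-8)
```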
PanopTILs dataset

We created a panoptic segmentation dataset that fuses the annotations from two public datasets: the Breast Cancer Semantic Segmentation dataset (BCSS) 12 and the Nucleus Classification, Localization, and Segmentation dataset (NuCLS) 22. These datasets were produced through a crowdsourcing process that engaged an international network of medical students, pathology residents, and pathologists using a web-based platform, as described in refs. 12,22. These datasets annotated regions selected in WSIs from 125 infiltrating ductal breast carcinoma patients from The Cancer Genome Atlas. We call this combined dataset PanopTILs, since it enables the panoptic segmentation of tissue regions and cell nuclei necessary for TIL assessment (Fig. 1). The PanopTILs dataset contains manual annotations comprising 16,322 cancer cells, 9596 lymphocytes, 6945 fibroblasts, 5943 debris objects, and 4641 plasma cell nuclei (see Supplementary Table 1), along with the corresponding region-level annotations.

MuTILs model training

For the purposes of training MuTILs models, nuclei annotations were extrapolated to the full 1024 × 1024 ROI using models for nuclear instance classification from ref. 13 to infer nuclear boundaries and classes in the periphery. The extrapolation models were trained using the manual nuclear annotations from the central 256 × 256 regions of training images and then applied to the margins of the 1024 × 1024 ROI to infer nuclei annotations there (see Fig. 1d). During MuTILs model training, we also supplemented PanopTILs with annotations from 85 slides from the Cancer Prevention Study II cohort to enrich the training data with lower-grade and normal tissue examples (these annotations are not included in the PanopTILs release) 23.

Analytical validation of MuTILs panoptic segmentation

Slides were separated into training and testing sets using 5-fold internal-external cross-validation 24, using the same folds described in ref. 13. This ensures that slides from a single hospital never appear in both training and testing, to better estimate generalizability. In all experiments, fold 1 contributed to hyperparameter tuning and so is not included in the reported mean and standard deviation of performance metrics. Metrics calculated include the Sørensen-Dice (DICE) coefficient for tissue segmentation and accuracy, area under the ROC curve (AUROC), sensitivity and specificity, precision and recall, F1 score, and Matthews correlation coefficient (MCC) for nucleus classification. Model performance was assessed entirely on manual cell annotations (Fig. 3); extrapolated nuclei annotations were not used for validation. The fields depicted in Fig. 3 are from the application testing sets. The classes in the PanopTILs training set are more granular than what is required for TIL scoring (for example, discriminating lymphocyte from plasma cell nuclei). Some of these granular classifications have lower inter-rater agreement ("unreliable") or are not abundant enough for a model to learn. Therefore, we assessed performance by grouping several classes to form a more reliable and practical ground truth (epithelium, stroma, TILs).
Predictions for normal and cancer epithelium are combined into a single "epithelium" class.

Pathologist visual TIL scoring

For whole-slide image (WSI) inference, we relied on data from 305 breast carcinoma patients for validation, 269 of whom were infiltrating ductal carcinomas and 156 of whom were Her2+. Visual scores were assessed by two pathologists and used as the baseline. Scores were performed in accordance with the recommendations of the International TILs Working Group, which recommends scoring stromal TILs as a percentage of the stromal areas between nests of carcinoma cells 9.

Fig. 6 | Kaplan-Meier analysis of visual and computational TILs assessment in predicting breast cancer progression. A threshold of 10% was used to define low-score and high-score patients for scores estimated in stromal regions, which includes the visual, nTnS, and nTSa scores. For comparison, the nTnA score was included, where the denominator includes all cells, not just those in the stromal compartment. For the nTnA score, a 3% threshold was used to account for the larger denominator; an analysis of the nTnA score distributions is presented in Supplementary Fig. 1 to justify this choice. We utilized visual scores from pathologist 1, as pathologist 1 is a breast cancer subspecialist. Both visual and computational scores effectively stratify outcomes in the (a) infiltrating ductal and (b) Her2+ carcinoma (N = 156) cohorts. Stratification for visual scores is clear but not statistically significant at the p = 0.05 level. Computational scores generally improve stratification over visual scores and are statistically significant, except for the nTSa in infiltrating ductal carcinoma.
Fig. 5 | Correlation between visual and computational TILs scores. Visual scores were obtained from two pathologists using scoring recommendations from the TILs Working Group. Each point in the scatter plots represents a single patient, and each plot illustrates the correlation between the visual scores of one pathologist and either nTSa or nTnS scoring (using either global or saliency-weighted aggregation). Computational TIL scores were calibrated, for the sake of interpretation, to map them to a similar value range as the visual scores. Points in red are outliers that contributed to the correlation calculations but were not used during calibration. a Scores obtained globally by aggregating data from all ROIs. b Scores obtained by saliency-weighted averaging, using estimated peritumoral stroma to weight each ROI.

Scoring was performed within the border of the invasive cancer, but areas occupied by malignant cells are not included in the total assessed area. All mononuclear cells are scored (including both plasma cells and lymphocytes). Pathologists were blinded to each other's scores and to the scores of the algorithms in these experiments. Scores from reader 1 were used in clinical correlations, as this reader is a breast cancer subspecialist.

Computational TIL score calculation

Analysis of a whole-slide image to generate scores begins with a tiling procedure that includes: 1. tissue detection; 2. exclusion of non-tissue and marker/inking regions; 3. tiling the slide and generating an informativeness score for each tile at low resolution (2 MPP);
and 4. analyzing the regions corresponding to the top 300 most informative tiles at high resolution. Fixing the number of regions ensures a near-constant run time of fifteen minutes per slide. The large_image 25 Python library was used to read the whole-slide image files, and the histolab 26 library was used for the exclusion of marker/inking and non-tissue areas. The informativeness score was calculated as follows. Low-resolution tiles were deconvolved using a masked Macenko method to identify the hematoxylin and eosin components, excluding white space. This was performed with the color_deconvolution_routine method from the HistomicsTK package 27,28. The informativeness score was calculated as the product of the mean hematoxylin and eosin values in each tile. Hence, tiles with a high composition of cellular (hematoxylin-rich) and acellular (eosin-rich) regions received a higher informativeness score, which favors tiles with more peritumoral stroma. Note that the informativeness score is different from the saliency score described below: the informativeness score provides a fast evaluation of where to perform the time-intensive high-resolution segmentation, while the saliency score is derived at high resolution to weigh the relative importance of ROIs in determining the overall TILs score. At inference, the five MuTILs models obtained from cross-validation are used to perform ensembling. ROIs are assigned to the models in a cyclical manner (every fifth ROI is analyzed by the model from the first cross-validation fold). This provides additional robustness without increasing the overall inference time. If runtime is not a constraint, each ROI can be analyzed by all five models and the model outputs averaged.

Using the nucleus and tissue segmentations obtained from MuTILs models, we assessed the variants of the TILs score shown in Fig. 1. We obtained these score variants using two aggregation strategies: 1. globally (aggregating region and nuclear counts from informative tiles) and 2. by saliency-weighted averaging of informative tiles. The saliency score for each tile was obtained using a Euclidean distance transform to identify stroma within 32 microns of the tumor boundary. The fraction of image pixels occupied by this peritumoral stroma was used as the saliency score for each tile. The 32-micron distance was determined by visually comparing 8, 16, 32, and 64 microns and finding 32 to most closely represent the commonly accepted definition of peritumoral stroma (a sketch of this computation is given below).

Computational TIL score calibration

A simple linear calibration was used to scale computational scores to a similar range of magnitudes as the visual scores. This calibration procedure first z-scores the visual and computational scores to identify outliers where disagreement is greater than 1.96 standard deviations. The remaining inliers are used to define a scaling factor between computational scores and visual scores using linear regression with no intercept. This scaling improves interpretability and enables the value of a threshold intended for pathologist TIL scores to be mapped to a corresponding threshold value for computational scores.
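The saliency computation described above can be expressed compactly with a distance transform. The sketch below is a minimal illustration, assuming boolean stroma and tumor masks and a known pixel spacing; the mask names and helper functions are hypothetical, not from the MuTILs codebase.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def roi_saliency(stroma_mask: np.ndarray, tumor_mask: np.ndarray,
                 mpp: float = 0.5, radius_um: float = 32.0) -> float:
    """Fraction of ROI pixels that are peritumoral stroma.

    Peritumoral stroma = stroma within `radius_um` microns of the tumor
    boundary, found via a Euclidean distance transform of the tumor mask.
    Both masks are boolean arrays of identical shape.
    """
    # Distance (in pixels) from every pixel to the nearest tumor pixel.
    dist_px = distance_transform_edt(~tumor_mask)
    peritumoral = stroma_mask & (dist_px * mpp <= radius_um)
    return float(peritumoral.mean())

def saliency_weighted_score(scores, saliencies) -> float:
    """Slide-level TILs score as a saliency-weighted average of ROI scores."""
    return float(np.average(np.asarray(scores, dtype=float),
                            weights=np.asarray(saliencies, dtype=float)))
```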
Clinical outcomes analysis

Clinical data analysis used progression-free interval (PFI) as the endpoint, per recommendations from Liu et al. for TCGA, with progression events including local and distant spread, recurrence, or death 29. Kaplan-Meier curves were examined for patient subgroups using a TILs-score threshold of 10% for stromal TILs scores. While different thresholds are used in the literature, 10% is often the defining threshold for a low TILs score. For the nTnA score variant, a threshold of 3% is used to adjust for the larger number of cells included in the denominator (see Supplementary Fig. 1). To avoid having all conclusions rely on a specific choice of threshold, TILs scores were also included in the Cox regression analysis as continuous variables (a minimal survival-analysis sketch is given after the figure and table captions below).

Informed consent and ethics

All data was shared with investigators in a deidentified form. All patients participated voluntarily and provided written informed consent. CPS-II data sharing was approved through the Emory University Institutional Review Board, approval number IRB00045780. We have complied with all relevant ethical regulations, including the Declaration of Helsinki.

Fig. 1 | Construction of the PanopTILs dataset to facilitate computational scoring of TILs. a Components of the various variants of the computational TILs score. b Logo of our panoptic segmentation dataset, PanopTILs, which reconciles and expands the region-level and cell-level annotations from the BCSS 12 and NuCLS 22 datasets to better suit the task of densely mapping the tumor microenvironment for TILs assessment. PanopTILs is openly accessible at: sites.google.com/view/panoptils. c The result of combining manual tissue and nucleus annotations from the BCSS and NuCLS datasets. This variant of PanopTILs was used for calculating validation accuracy metrics for our panoptic segmentation model. d Expansion of the manual nuclei annotations to facilitate panoptic (MuTILs) model training. This expansion was done by training additional models to extrapolate nuclei annotations beyond the manual annotations, as in ref. 13. These extrapolated data were used in MuTILs model training and were not used in validation.

Fig. 2 | MuTILs model architecture. a The MuTILs architecture utilizes two parallel U-Net models to segment regions at 10× objective magnification and nuclei at 20× objective magnification. Inspired by HookNet, we passed information from the low-resolution region segmentation branch to the high-resolution nuclei classification branch by concatenation. This concatenation, as indicated by the dashed arrow, enriches the high-resolution data with contextual details. Additionally, region predictions from the low-resolution branch are upsampled and used to constrain the possible nucleus classifications in the high-resolution branch. The model was trained using a multi-task loss that gives equal weight to ROI and HPF region predictions, unconstrained HPF nuclear predictions, and region-constrained nuclear predictions. b Region predictions are used to constrain nucleus predictions to enforce compatible cell-type predictions through class-specific attention maps. These maps represent the likelihood of each nucleus class occurring at different points in space based on the region prediction, user-defined hard constraints on what cell types can occupy what tissue regions, and learned prior probabilities describing cell-type and region-type associations. Hard constraints can be used to define rules that prohibit, for example, a nucleus from being classified as a fibroblast within a tumor region.
Fig. 3 | Reconciliation of manual region and nucleus ground truth to produce the PanopTILs validation dataset. Each high-power field from the pathologist-corrected single-rater NuCLS dataset was padded to 1024 × 1024 at 0.5 MPP resolution (20× objective). As a result, each ROI had region segmentation for the entire field (from the BCSS dataset) and nucleus segmentation and classification for the central portion (from the NuCLS dataset). Note that the nucleus ground truth contains a mixture of bounding boxes and segmentations. The fields shown here are from the testing sets.

Fig. 4 | Sample whole-slide predictions from trained MuTILs models. The predictions show full WSI inference for illustration; however, our analysis only admitted the 300 most informative ROIs to the MuTILs model to limit run time to fifteen minutes per slide for practical applicability. ROI saliency was measured at a very low resolution (2 MPP) during WSI tiling and favored ROIs with more peritumoral stroma. The training set annotations of cells and tissue regions are more granular than what is required for TIL scoring.

Fig. 7 | Qualitative examination of sample testing set predictions and sources of misclassification. The training set annotations of cells and tissue regions are very granular, for example distinguishing between lymphocytes and plasma cells. Some of these finer subclassifications have lower inter-rater agreement ("unreliable") or are not abundant enough for a model to learn. Furthermore, the set of classes that a model needs to distinguish for the core task of TIL quantification is much simpler. Therefore, we assessed performance by grouping several classes to form a more reliable and practical ground truth (epithelium, stroma, TILs). The low abundance of normal breast acini in the training data makes it difficult for MuTILs models to distinguish normal from cancerous epithelial tissue.

Table 1 | Generalization accuracy for region segmentation and nucleus classification using manual ground truth (columns: Fold 1, Fold 2, Fold 3, Fold 4, Fold 5, Mean, Std).

Table 2 | Cox regression survival analysis of the predictive value of visual and computational TILs scores for breast cancer progression. Each metric was combined with clinically salient covariates to create a separate multivariable model. All multivariable models were controlled for patient age and AJCC pathologic stage I and II status.
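The Kaplan-Meier procedure described under "Clinical outcomes analysis" can be sketched as follows. This is a minimal illustration with hypothetical slide-level scores and follow-up data, using the conventional 10% cutoff for stromal TILs scores; it is not the analysis code used for the TCGA cohort.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical slide-level data (scores in percent, time in days).
score = np.array([4, 12, 35, 8, 22, 2, 18, 40])
time = np.array([300, 900, 1400, 450, 1100, 250, 800, 1600])
event = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # 1 = progression observed

high = score >= 10.0  # 10% threshold for stromal TILs scores

kmf = KaplanMeierFitter()
for mask, label in [(high, "TILs >= 10%"), (~high, "TILs < 10%")]:
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank comparison of the two groups.
res = logrank_test(time[high], time[~high], event[high], event[~high])
print("log-rank p =", res.p_value)
```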
2024-06-28T13:13:26.107Z
2024-06-28T00:00:00.000
{ "year": 2024, "sha1": "3d36424a0b82228385cab3b4f67bd60d61a2a69a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ed17049832a20833095c7961ce8d0a1dfd38acbf", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
2260990
pes2o/s2orc
v3-fos-license
Structure of L-Serine Dehydratase from Legionella pneumophila: Novel Use of the C-Terminal Cysteine as an Intrinsic Competitive Inhibitor

Here we report the first complete structure of a bacterial Fe–S L-serine dehydratase determined to 2.25 Å resolution. The structure is of the type 2 L-serine dehydratase from Legionella pneumophila, which consists of a single polypeptide chain containing a catalytic α domain and a β domain that is structurally homologous to the "allosteric substrate binding" or ASB domain of D-3-phosphoglycerate dehydrogenase from Mycobacterium tuberculosis. The enzyme exists as a dimer of identical subunits, with each subunit exhibiting a bilobal architecture. The [4Fe-4S]2+ cluster is bound by residues from the C-terminal α domain and is situated between this domain and the N-terminal β domain. Remarkably, the model reveals that the C-terminal cysteine residue (Cys 458), which is conserved among the type 2 L-serine dehydratases, functions as a fourth ligand to the iron–sulfur cluster, producing a "tail in mouth" configuration. The interaction of the sulfhydryl group of Cys 458 with the fourth iron of the cluster appears to mimic the position that the substrate would adopt prior to catalysis. A number of highly conserved or invariant residues found in the β domain are clustered around the iron–sulfur center. Ser 16, Ser 17, Ser 18, and Thr 290 form hydrogen bonds with the carboxylate group of Cys 458 and the carbonyl oxygen of Glu 457, whereas His 19 and His 61 are poised to potentially act as the catalytic base required for proton extraction. Mutation of His 61 produces an inactive enzyme, whereas the H19A protein variant retains substantial activity, suggesting that His 61 serves as the catalytic base. His 124 and Asn 126, found in an HXN sequence, point toward the Fe–S cluster. Mutational studies are consistent with these residues either binding a serine molecule that serves as an activator or functioning as a potential trap for Cys 458 as it moves out of the active site prior to catalysis.

Given that L-serine serves as a major source of one-carbon units for methylation reactions, it is not surprising that its metabolism is a tightly regulated process. 1 Indeed, high rates of serine biosynthesis have been reported in human colon carcinoma, rat sarcoma, and rat hepatoma cell lines, attesting to a possible role in tumor cell growth. 2,3 In many bacteria, defects in the phosphorylated pathway of L-serine production result in auxotrophy, 4,5 whereas high levels of L-serine can be toxic. 6 All organisms contain enzymes that specifically deaminate L-serine to produce pyruvate and ammonia. In eukaryotes, these enzymes utilize pyridoxal 5′-phosphate (PLP) for activity, and they are believed to function in metabolism by providing pyruvate for gluconeogenesis and the citric acid cycle. Strikingly, in bacteria, the serine dehydratases contain Fe−S clusters rather than PLP. 7−9 Most, if not all, bacteria produce at least one Fe−S L-serine dehydratase, and some, such as Escherichia coli, produce several that are differentially expressed. 8,10 The ubiquitous nature of these enzymes attests to their fundamental importance, but it is not known why bacterial L-serine dehydratases utilize an Fe−S cluster rather than PLP. Recent reports suggest that these enzymes play a protective role in guarding against high intracellular L-serine levels that are detrimental to the organism.
In E. coli, high serine levels interfere with incorporation of alanine into the peptidoglycans at a critical point in cell wall synthesis. 10,11 In some mycobacteria that lack L-serine dehydratase, high serine levels inhibit growth by blockage of glutamine synthetase, 12 and in Campylobacter jejuni, L-serine dehydratases are essential for host colonization. 13 Importantly, the L-serine dehydratases must function in a manner that does not deplete the levels of L-serine required for viability. They appear to accomplish this task by maintaining a high Km for L-serine along with a high kcat for efficient turnover. 8,14

The bacterial Fe−S L-serine dehydratases from Peptostreptococcus asaccharolyticus 15 and E. coli 8 have been shown to contain a diamagnetic [4Fe-4S]2+ cluster in which three of the irons are thought to be ligated by cysteine residues, whereas the fourth presumably interacts with the substrate, similar to what is observed in aconitase. 16 A potential catalytic mechanism for L-serine dehydratase is shown in Scheme 1. The model is based on mechanistic studies with aconitase and other 4Fe-4S dehydratases. 8,9,14 Most nonredox 4Fe-4S proteins contain a cubane structure in which three of the four iron atoms are ligated to protein cysteine residues. Using aconitase as a model, the fourth iron is generally described as the catalytic metal that is ligated to water and interacts directly with the substrate according to the general mechanism depicted in Scheme 1. A catalytic base extracts a proton from the α carbon, and the Fe−S cluster acts as a Lewis acid, which coordinates the leaving hydroxyl group of the substrate. The aminoacrylate dehydration product is protonated at the β carbon to form iminopyruvate, which is subsequently rehydrated, with loss of ammonia, to form pyruvate. Site-directed mutagenesis studies with the Legionella pneumophila L-serine dehydratase have revealed that the critical cysteines are found in a C-X41-C-X10-C sequence pattern. 14

At least four types of Fe−S L-serine dehydratases that differ in their domain content and arrangement have been identified. 17 They all contain a catalytic or α domain that harbors the binding site for the Fe−S cluster, as well as a β domain whose function has not yet been completely defined. In the type 1 enzymes, the α and β domains are found on separate polypeptide chains, whereas in types 2−4 they are located on a single polypeptide chain. 17,18 In the type 2 and 4 enzymes, the β domain is found at the N-terminus, whereas in the type 3 enzymes the β domain is located at the C-terminus. A partial structure of the L. pneumophila L-serine dehydratase representing the β domain alone (residues 11−161), determined in 2006 by the Midwest Center for Structural Genomics, revealed that its fold had a molecular architecture similar to that of the "allosteric substrate binding" or "ASB" domain observed in some D-3-phosphoglycerate dehydrogenases. 14 The D-3-phosphoglycerate dehydrogenases function in metabolism by catalyzing the first and rate-limiting step in serine biosynthesis. 1 The β domains of type 1, 3, and 4 L-serine dehydratases contain an additional segment of polypeptide that appears to be similar to the "ACT" domain also found in the D-3-phosphoglycerate dehydrogenases. In addition, type 1 and 3 L-serine dehydratases require potassium for activity, whereas the type 2 enzymes do not. 18
The ASB domain in D-3-phosphoglycerate dehydrogenase functions in enzyme regulation by binding substrate, and possibly phosphate, which act as allosteric effectors. 19,20 It is thought that the ASB domain plays a similar role in the L-serine dehydratases by binding serine as an allosteric ligand. Activation by L-serine binding at a second, noncatalytic site has been demonstrated by kinetic analysis of the type 2 enzyme from L. pneumophila, although the location of the effector binding site has not yet been directly demonstrated to reside in the β domain. 18,21 The bacterial Fe–S L-serine dehydratases represent only the second group of proteins in which ASB domains have been found. 17 Interestingly, both the L-serine dehydratases and the D-3-phosphoglycerate dehydrogenases are involved in some aspect of L-serine metabolism.

Here we report the first complete structure of the L. pneumophila L-serine dehydratase determined to 2.25 Å resolution. Each subunit of the dimeric enzyme adopts a distinctly bilobal architecture with the [4Fe-4S]2+ cluster situated between the N- and C-terminal domains. Remarkably, the model reveals that the C-terminal cysteine residue, which is conserved among the type 2 L-serine dehydratases, functions as a ligand to the iron–sulfur cluster through a "tail in mouth" configuration. The molecular architecture described herein serves as a paradigm for the bacterial L-serine dehydratases in general.

■ MATERIALS AND METHODS

Cloning of the Gene Encoding Serine Dehydratase. The gene encoding L. pneumophila L-serine dehydratase was cloned from genomic DNA obtained from the American Type Culture Collection as previously described. 14 It was placed between the BamHI and HindIII sites of pSV281, which provided a hexahistidine tag at the N-terminus of the protein after expression.

Protein Expression and Purification. BL21 DE3 cells containing the plasmid were grown in lysogeny broth medium supplemented with kanamycin at 37 °C with shaking until an optical density of 0.5−0.7 was reached at 600 nm. Protein expression was induced via the addition of 3 mM isopropyl β-D-1-thiogalactopyranoside, and the cells were grown until an optical density of 1−3 was reached at 600 nm. The cells were recovered by centrifugation, suspended in buffer [50 mM MOPS (pH 7.0) or 100 mM potassium phosphate (pH 7.0)], and lysed by sonication in an anaerobic chamber in the presence of 0.16 mg/mL lysozyme and then treated with 5 mg of DNase after being stirred for 10 min. Protein was purified in the anaerobic chamber on a Talon cobalt metal affinity column using standard procedures. 14,21 Mutant proteins were prepared by PCR as previously described 22 and purified in the same manner that was used for the wild-type enzyme.

Kinetic Analysis. Activity was measured by following the absorbance of product (pyruvate) formation at 250 nm. 14 The concentration of the active enzyme was determined from the charge transfer absorbance of the Fe–S center at 400 nm using an extinction coefficient of 13750 M⁻¹ cm⁻¹. The kinetic parameters, kcat and Km, were determined by fitting the L-serine concentration-dependent plot to the cooperative Michaelis–Menten (Hill) equation, v = V[S]ⁿ/(Kmⁿ + [S]ⁿ), where v is the velocity as a function of substrate concentration, V is the maximal velocity, [S] is the substrate concentration, Km is the Michaelis constant, n is the Hill coefficient, and kcat is V/[Et]. Double-reciprocal plots were fit by linear regression analysis, which yielded values for the slopes.
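To make the fitting step concrete, the sketch below reproduces the cooperative Michaelis–Menten fit with scipy. This is an illustrative reconstruction, not the authors' analysis script; the substrate concentrations, rates, and A400 reading are invented placeholders.

```python
# Hedged sketch: fit the cooperative (Hill-type) Michaelis-Menten equation
#   v = V * [S]^n / (Km^n + [S]^n)
# to invented rate data, then estimate kcat = V/[Et], with [Et] taken from
# the Fe-S charge-transfer absorbance at 400 nm via the Beer-Lambert law.
import numpy as np
from scipy.optimize import curve_fit

def hill(S, V, Km, n):
    """Cooperative Michaelis-Menten velocity as a function of [S]."""
    return V * S**n / (Km**n + S**n)

S = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])       # [L-serine], mM (invented)
v = np.array([0.10, 0.22, 0.45, 0.68, 0.88, 0.97, 1.00])  # rates (invented units)

(V, Km, n), _ = curve_fit(hill, S, v, p0=(1.0, 2.0, 1.4))

A400, eps = 0.55, 13750.0   # assumed absorbance; epsilon = 13750 M^-1 cm^-1
Et = A400 / eps             # mol/L, assuming a 1 cm path length
print(f"V = {V:.2f}, Km = {Km:.2f} mM, n = {n:.2f}, kcat = V/[Et] = {V/Et:.3g}")
```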
Slope replots and plots of velocity versus inhibitor concentration at a constant substrate concentration were produced as described by Segel. 23 Slope replots were fit either by linear regression analysis or, in the case of D-serine, to a second-degree polynomial. The values for Ki were estimated graphically, where the x-axis intercept equals −Ki. The data points in plots of velocity versus inhibitor concentration were connected from point to point using the smooth fit function of KaleidaGraph and are used only to show either linear or plateauing functionality. Linearity indicates that the velocity can be driven to zero at an infinite inhibitor concentration, consistent with simple competitive inhibition. A plateau demonstrates that activity cannot be driven to zero at an infinite inhibitor concentration, indicating partial inhibition.

Size Exclusion Chromatography. Protein (8−10 mg/mL) isolated in the anaerobic chamber was applied to a 1.6 cm × 100 cm column of Sephacryl S-200 at ambient room atmosphere and developed with 50 mM MOPS buffer (pH 7.0) that had been deoxygenated in the anaerobic chamber. Fractions were assayed for catalytic activity, and the absorbance at 280 nm was measured.

Protein Expression and Purification for X-ray Diffraction Analysis. The pSV281-lpLSD plasmid was used to transform Rosetta2(DE3) E. coli cells (Novagen). The cultures were grown in M9 medium supplemented with kanamycin and chloramphenicol at 37 °C with shaking until an optical density of 1.0 was reached at 600 nm. Methionine biosynthesis was suppressed by the addition of lysine, threonine, phenylalanine, leucine, isoleucine, valine, and selenomethionine, and 10 mg/L ferrous ammonium sulfate was also added. After an additional 30 min, the flasks were cooled to room temperature, and protein expression was initiated by addition of isopropyl β-D-1-thiogalactopyranoside to a final concentration of 1 mM. The cells were allowed to express protein for 18 h before being harvested. Protein purification was performed in a COY anaerobic chamber at ambient temperature. The cells were lysed using 0.2 mg/mL lysozyme in standard lysis buffer [50 mM sodium phosphate, 200 mM NaCl, and 20 mM imidazole (pH 8)]. After cell lysis was complete, 0.05 mg/mL DNase I was added for nucleotide degradation. The lysed cells were subsequently sealed in centrifuge bottles and spun at 45000g for 30 min. The supernatant was loaded onto a Ni-NTA column, and after being rigorously washed, the protein was eluted with 50 mM sodium phosphate, 200 mM NaCl, and 250 mM imidazole (pH 8). The sample was dialyzed against 10 mM Tris-HCl (pH 8.0) and 200 mM NaCl. After dialysis, the protein concentration was adjusted to approximately 10 mg/mL based on an extinction coefficient of 2.16 mg⁻¹ mL cm⁻¹ at 280 nm. Dithiothreitol was added to a final concentration of 8 mM. The Fe–S cluster was reconstituted by adding an 8-fold molar excess of FeCl3 dropwise (100 mM stock) over 15 min, followed by a similar addition of Na2S. The mixture was allowed to stir for 5 h, followed by dialysis against 10 mM Tris-HCl (pH 8.0) and 200 mM NaCl. The solution was diluted with 3 volumes of 50 mM CHES (pH 9) and loaded onto a DEAE-Sepharose column that had been equilibrated in the same buffer (pH 9). The protein was eluted with a linear gradient from 0 to 800 mM NaCl and dialyzed against 10 mM Tris-HCl (pH 8.0) and 200 mM NaCl. The final protein concentration was 15 mg/mL.
Crystallization. Crystallization conditions were initially surveyed in a COY anaerobic chamber at ambient temperature by the hanging drop method of vapor diffusion using a laboratory-based sparse matrix screen. Single crystals were subsequently grown via vapor diffusion against 100 mM HOMOPIPES (pH 5.0), 9−13% poly(ethylene glycol) 3400, and 200 mM tetramethylammonium chloride. The crystals grew to maximal dimensions of ∼0.4 mm × 0.4 mm × 0.05 mm in 2 weeks. They belonged to space group P3121 with the following unit cell dimensions: a = b = 81.4 Å, and c = 267.5 Å. There was one dimer in the asymmetric unit.

Structural Analysis. Prior to X-ray data collection, the crystals were transferred to a cryoprotectant solution containing 20% poly(ethylene glycol) 3400, 15% ethylene glycol, 250 mM NaCl, 250 mM tetramethylammonium chloride, and 100 mM HOMOPIPES (pH 5.0). X-ray data were collected at the Structural Biology Center beamline 19-BM at a wavelength of 0.9794 Å (Advanced Photon Source). The X-ray data were processed and scaled with HKL3000. 24 Relevant X-ray data collection statistics are listed in Table 1. The structure of the protein was determined via single-wavelength anomalous dispersion. Analysis of the X-ray data measured from the selenomethionine-labeled crystals with SHELXD revealed 32 selenium atoms. 25,26 Protein phases were calculated using these sites with SHELXE 26 followed by solvent flattening and averaging with RESOLVE. 27,28 An initial model was built and subsequently refined against the SAD X-ray data. Iterative rounds of model building with COOT 29 and refinement with REFMAC 30 reduced the Rwork and Rfree to 19.8 and 25.8%, respectively, from 30 to 2.25 Å resolution. Model refinement statistics are listed in Table 2.

Overall Molecular Architecture of the Fe–S L-Serine Dehydratase. A previous report on an L-serine dehydratase from E. coli indicated that it exists as a dimer in solution. 8 Crystals used in this investigation belonged to the space group P3121 with two subunits in the asymmetric unit. To confirm the quaternary structure of the L. pneumophila enzyme, we used size exclusion chromatography. Chromatography of the purified enzyme on a Sephacryl S-200 column showed a single main peak with a trailing shoulder (Figure 1). After elution from the column, the enzyme retained significant activity that corresponded to the main absorbance peak. The molecular weight of the main peak was determined to be 95,500 and that of the shoulder to be approximately 56,200. These molecular weights correspond well to the calculated molecular weights of 98,952 and 49,476 for dimeric and monomeric molecules, respectively. This is consistent with a monomer−dimer equilibrium under the conditions used for the chromatography, with only the dimer exhibiting catalytic activity. Furthermore, these data suggest that the subunit–subunit contacts within the dimer are critical to the catalytic integrity of the active site and may have implications concerning the relationship of in vivo activity and enzyme expression levels. Overall, the quality of the electron density for both polypeptide chains in the asymmetric unit was excellent, with the exceptions of several surface loops (breaks between Lys 161−Asn 167 and Ile 250−Phe 259 in subunit A and between Asp 160−Asn 167 and Lys 332−Ser 337 in subunit B). Shown in Figure 2a is a ribbon representation of the serine dehydratase dimer. It has overall dimensions of ∼100 Å × 80 Å × 60 Å and a total buried surface area of 4800 Å². The iron−sulfur clusters are separated by ∼25 Å.
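As a worked illustration of how such an inter-cluster distance can be measured, the sketch below averages the iron positions in each chain of deposited coordinates using Biopython. The file name is a placeholder, since the PDB accession is not given in this excerpt.

```python
# Hedged sketch: estimate the separation between the two [4Fe-4S] clusters
# by averaging the Fe positions in each chain. The coordinate file name is
# a placeholder, not the actual deposition.
import numpy as np
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("lpLSD", "lp_serine_dehydratase.pdb")

centroids = {}
for chain in structure[0]:                      # first model, both chains
    fe = [atom.coord for atom in chain.get_atoms() if atom.element == "FE"]
    if fe:
        centroids[chain.id] = np.mean(fe, axis=0)   # cluster centroid

a, b = list(centroids.values())[:2]
print(f"cluster-cluster separation: {np.linalg.norm(a - b):.1f} Angstrom")
```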
There are three cis-peptide bonds at Pro 15, Pro 289, and Pro 395. Both Pro 15 and Pro 289 reside approximately 10 Å from the active site, whereas Pro 395 abuts one side of the iron−sulfur cluster. Shown in Figure 2b is a stereoview of one subunit, which is distinctly bilobal in architecture. The N-terminal or β domain, delineated by Met 1−Lys 161, is dominated by a five-stranded mixed β sheet that is flanked on one side by four α helices. The C-terminal domain, which harbors the active site cluster, is composed of 11 α helices. Given that the α carbons for the two subunits of the dimer superimpose with a root-mean-square deviation of 0.25 Å, the following discussion will refer only to subunit B. The electron density corresponding to the iron−sulfur cluster is presented in Figure 3a. Unexpectedly, the C-terminal cysteine residue, Cys 458, serves as a ligand to the iron−sulfur cluster. As indicated in Figure 3b, the carboxylate group of Cys 458 is anchored in the active site by multiple hydrogen bonds.

Catalytic Activity with Other Amino Acids. Since the substrate of the enzyme is an amino acid, which is also thought to act as an allosteric effector, all of the other naturally encoded L-amino acids as well as D-serine and L-cystine (Cys-S-S-Cys) were tested for their ability to act as substrates or inhibitors. Previously, L-cysteine and D-serine were reported to be competitive inhibitors of the type 2 dehydratase from E. coli, 8 and L-cysteine was reported to be a competitive inhibitor of the type 1 dehydratase from P. asaccharolyticus. 31 The only other amino acid besides L-serine that shows catalytic activity with the L. pneumophila enzyme studied in this investigation is L-threonine, but only at a very low level. Specifically, it exhibits approximately 3% of the level of activity seen with L-serine and shows a Km of 288 mM compared to a Km of 2 mM for L-serine for the wild-type enzyme. The Hill coefficient for L-threonine is 2.3 ± 0.5 compared to a value of 1.40 ± 0.04 for L-serine. The apparent cooperativity is likely due to the different affinities of the substrate for the effector and catalytic sites. The amino acids that display significant inhibition of enzymatic activity are L-cysteine, D-serine, L-histidine, and glycine. Strikingly, L-alanine was not an effective inhibitor. Double-reciprocal plots of activity varying L-serine at fixed inhibitor concentrations are most consistent with simple competitive inhibition for L-cysteine and L-histidine, with Ki values of 60 μM and 11.4 mM, respectively (Figure 4). The slope replots are linear, as are plots of velocity versus inhibitor concentration, indicating that the velocity goes to zero at an infinite inhibitor concentration, which is consistent with simple competitive inhibition (Scheme 2, dashed box). Interestingly, L-cystine does not inhibit the enzyme at 1 mM and only shows approximately 12% inhibition at 20 mM. The reciprocal plots for D-serine and glycine also appear to show competitive inhibition with apparent Ki values of 7 and 11 mM, respectively. However, the slope replot for D-serine displays a slight nonlinear character, and the plot of velocity versus inhibitor concentration shows distinct plateauing with retention of activity at higher inhibitor concentrations, indicating only partial inhibition. The plots of velocity versus glycine concentration (not shown) are not linear but do not show a distinct plateau. This, along with the linear slope replot, suggests that glycine inhibition is predominantly competitive.
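To illustrate the slope-replot analysis behind these Ki values, the sketch below fits invented double-reciprocal slopes against inhibitor concentration; for simple competitive inhibition the x-axis intercept of this line equals −Ki. The numbers are placeholders, not the measured data.

```python
# Hedged sketch of a slope replot: for simple competitive inhibition the
# double-reciprocal slope grows linearly with [I],
#   slope([I]) = (Km/V) * (1 + [I]/Ki),
# so a linear fit of slope vs [I] crosses the x-axis at -Ki.
import numpy as np

I = np.array([0.0, 0.05, 0.10, 0.20])       # inhibitor conc, mM (invented)
slopes = np.array([2.0, 3.7, 5.3, 8.7])     # double-reciprocal slopes (invented)

m, b = np.polyfit(I, slopes, 1)             # linear slope replot
Ki = b / m                                  # x-intercept is -b/m = -Ki
print(f"estimated Ki = {Ki:.3f} mM")
```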
Classically, partial inhibition occurs when the substrate and inhibitor bind to separate sites on the enzyme and the IES complex is capable of turning over to yield product (Scheme 2). Previous investigations have provided additional evidence of two sites, a catalytic site and an effector site. The enzymatic product, pyruvate, is capable of activating the enzyme at high concentrations after initial product inhibition at lower concentrations, 14 and previous transient kinetic analyses indicate that there is a second noncatalytic site for L-serine that is responsible for activation of the enzyme. 21 Therefore, the partial inhibition observed here may be more complex than simple partial competitive inhibition, with the inhibitor and L-serine competing at two distinct sites, the catalytic and effector sites, rather than mutually exclusive sites. Scheme 2 depicts three scenarios. The central portion, in the dashed box, depicts simple competitive inhibition. The bottom portion depicts binding of the inhibitor to an allosteric site, competing with substrate activation and producing partial inhibition, and the top portion depicts substrate activation through binding to the allosteric site. If the concentration of D-serine were to increase, it would decrease the activation effect of L-serine if it were not capable of activation itself (in Scheme 2, b > 1 > a). The residual activity remaining at a high inhibitor concentration might reflect, at least partially, the catalytic turnover of the unactivated enzyme.

Mutational Analysis. Possible Catalytic Residues. The area surrounding the Fe–S cluster is occupied by a large number of polar side chains that are absolutely conserved across all four types of L-serine dehydratases. These include Ser 16, Ser 17, Ser 18, His 19, Ser 53, Thr 57, His 61, His 124, Asn 126, Ser 147, and Thr 290. In addition, Thr 63 is invariant in the type 1−3 enzymes and is a serine residue in members of the type 4 family. These conserved residues cluster together as shown in Figure 5. Note that His 61 is invariant in the type 1−3 enzymes. In the type 4 family members that we have examined, this residue is missing and the proteins show no catalytic activity. It is possible that the type 4 family members are pseudoenzymes because bacteria containing them always appear to express an active L-serine dehydratase as well. Previous mutation of Cys 458 to alanine demonstrated that there was little change in the activity of the enzyme (see Table 3). 14 Both the Km and kcat increased slightly, and the kcat/Km decreased slightly. This would indicate that the coordination of this cysteine residue to the Fe–S cluster is not essential for activity. In contrast, mutations to Cys 343, Cys 385, and Cys 396 resulted in a complete loss of activity. 14 The crystal structure confirms that these latter three cysteines form the critical structural coordination with the Fe–S cluster. As noted above, Cys 458 is anchored into the active site by multiple hydrogen bonds to its carboxylate group (Figure 3b). This hydrogen bonding pattern would still form with the alanine replacement, so the C-terminal alanine in the mutant enzyme may be performing the same function. The iron coordination by Cys 458 probably plays a key role in protecting the cluster from oxygen, which would be consistent with the relative insensitivity of the L. pneumophila enzyme to oxygen exposure. 17 Clearly, Cys 458 serves as an excellent mimic for the location of the substrate, L-serine.
As indicated in Scheme 1, dehydration of L-serine is assisted by extraction of the proton from its α carbon by a catalytic base. Given that there are no potential catalytic bases close enough to the α carbon of Cys 458 to perform this function in the structure described here, our current model represents a "precatalytic" conformation. There are two histidine residues (His 19 and His 61) within approximately 6 Å of the Cys 458 α carbon, however, that could possibly move within catalytic distance upon a conformational rearrangement initiated by serine binding (Figure 6). Mutation of His 61 to alanine results in a complete loss of catalytic activity. Importantly, the Fe–S cluster is still intact because the charge transfer absorbance (∼400 nm) is not diminished (not shown), thus suggesting that His 61 may serve as the catalytic base. Mutation of His 19 to alanine has an effect on both Km and kcat. In addition, the mutation has a significant effect on the Ki for L-cysteine and essentially no effect on the Ki for D-serine. Most likely, His 19 is involved in substrate binding because the mutation mainly affects the interaction with the competitive inhibitor, L-cysteine.

Possible Effector Site Residues. Cys 458 is coordinated to the fourth Fe atom in the cluster, which, by analogy to other 4Fe-4S proteins such as aconitase, is expected to be the external or catalytic iron that interacts with the substrate. In addition, it is extensively hydrogen bonded, largely by residues from the β domain. It is thus likely that Cys 458 occupies the substrate binding site. The β domain of the L. pneumophila dehydratase has the same structural fold as the ASB domain of D-phosphoglycerate dehydrogenase from Mycobacterium tuberculosis, which binds its substrate allosterically. The two domains superimpose with a root-mean-square deviation of 2.8 Å for 111 structurally equivalent α carbons. They differ significantly in two regions. As shown in Figure 7, the loops connecting the second and third α helices of the domains adopt different orientations beginning at Glu 72 in the L. pneumophila dehydratase. The second difference is the presence of a β hairpin motif in the β domain of the L. pneumophila dehydratase, which is lacking in the M. tuberculosis ASB domain. By analogy to the ASB domain in D-phosphoglycerate dehydrogenase, L-serine is thought to bind to the β domain of L-serine dehydratase as an allosteric ligand. In this regard, His 124 and Asn 126 may be particularly significant. This HXN (His 124-X-Asn 126) sequence is the same motif that binds L-serine in the ACT domain of E. coli D-phosphoglycerate dehydrogenase (His 344-X-Asn 346). Inspection of the crystal structure of the L. pneumophila dehydratase shows that His 124 and Asn 126 are, indeed, in the vicinity of the active site with their side chains both pointing toward the Fe–S cluster. Mutation of these residues alters the kcat/Km (Table 3). Since the kcat/Km is essentially a second-order rate constant for ligand binding at low concentrations, this result is consistent with these residues potentially being involved with activation of the enzyme. Moreover, because they do not participate in extensive hydrogen bonding in this structure, they are poised to bind a ligand with a minimal energy cost. L-Serine may act as an effector by binding to these residues.
Alternatively, L-serine may allosterically bind in a different location, causing the C-terminal Cys 458 to be released from the Fe–S cluster and subsequently to bind to His 124 and Asn 126, which could serve as a latch to prevent it from interfering with subsequent substrate binding at the active site. The inhibition constants for L-cysteine and D-serine for the L. pneumophila dehydratase H124A/N126A double mutant are more consistent with the latter scenario (Table 3). Whereas the Ki for D-serine actually decreases slightly, that for L-cysteine increases slightly more than 3-fold. If His 124 and Asn 126 act as a latch to tether the C-terminal cysteine away from the substrate binding pocket during catalysis, their absence would likely result in more competition at the active site for the substrate-competitive inhibitor, L-cysteine. Additional data from cocrystallization and site-directed mutagenesis studies will be required to further probe the catalytic mechanism of this fascinating enzyme. These experiments are in progress.

The novel "tail in mouth" configuration revealed in this structure, where the C-terminal cysteine residue provides a unique fourth ligand to the Fe–S cluster, is typically found in the type 2 L-serine dehydratases. Other types do not usually contain a terminal cysteine residue, although there are some that do. This suggests that the type 2 mechanism may differ in several key ways from that of the other types, including how their activity is regulated. Indeed, it has already been demonstrated that type 1 and 3 L-serine dehydratases require a monovalent cation for activity, whereas the type 2 enzymes do not. 18 As far as we can tell, it appears that each bacterial species expresses only a single type of active L-serine dehydratase. 17 Furthermore, the type 1 enzymes appear to be found mainly in Gram-positive bacteria, whereas type 2 enzymes are typically observed in Gram-negative bacteria. Apparently, nature has produced several different types of Fe–S L-serine dehydratases to catalyze the same reaction with a species-specific distribution. This raises the following question: if the four types are regulated differently, is it because they are matched to some aspect of the bacterium's metabolism? This study represents a significant step toward understanding the functions of these families of enzymes and the roles that they play in the diverse lifestyles of bacteria.
Targets of Vitamin C With Therapeutic Potential for Cardiovascular Disease and Underlying Mechanisms: A Study of Network Pharmacology

Vitamin C (ascorbic acid) is a nutrient used to treat cardiovascular disease (CVD). However, the pharmacological targets of vitamin C and the mechanisms underlying the therapeutic effects of vitamin C on CVD remain to be elucidated. In this study, we used a network pharmacology approach to investigate the pharmacological mechanisms of vitamin C for the treatment of CVD. The core targets, major hubs, enriched biological processes, and key signaling pathways were identified. A protein-protein interaction network and an interaction diagram of core target-related pathways were constructed. Three core targets were identified, including phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit alpha isoform, signal transducer and activator of transcription-3 (STAT3), and prothrombin. The GO and KEGG analyses identified the top 20 enriched biological processes and signaling pathways involved in the therapeutic effects of vitamin C on CVD. The JAK-STAT, STAT, PD-1, EGFR, FoxO, and chemokine signaling pathways may be highly involved in the protective effects of vitamin C against CVD. In conclusion, our bioinformatics analyses provided evidence on the possible therapeutic mechanisms of vitamin C in CVD treatment, which may contribute to the development of novel drugs for CVD.

INTRODUCTION

Cardiovascular disease (CVD) is a leading cause of death worldwide, accounting for 205 deaths per 100,000 persons (Yu et al., 2019). It accounts for a substantial proportion of healthcare spending, thereby placing an enormous financial burden on patients and their families (Evans et al., 2020). CVD is characterized by a cluster of disorders in the arteries and heart that result in atherosclerosis, hypertension, myocardiopathy, myocardial infarction, and heart failure (North and Sinclair, 2012). The mechanism underlying the pathogenesis of CVD is complex and has not been fully elucidated (Buckley et al., 2019). To develop better treatment for patients with CVD, studies over the past two decades have sought to identify novel therapeutic targets (Touzé and Rothwell, 2007; O'Donnell and Nabel, 2011; Khera and Kathiresan, 2017). Drugs targeting beta-adrenergic receptors, the RAAS, the P2Y12 receptor, PCSK9, and HMG-CoA reductase have been shown to decrease cardiovascular morbidity and mortality in CVD patients (Ference et al., 2017; Squizzato et al., 2017; Blessberger et al., 2018). Despite great advances in treatment, CVD remains the dominant cause of mortality worldwide (Feng et al., 2019). Hence, there is an urgent need to develop novel therapeutic strategies and drugs for the treatment of CVD. Vitamin C (ascorbic acid) is a nutrient with radical scavenger activity and antioxidant effects (May and Harrison, 2013). Observational studies have demonstrated that high vitamin C supplementation is inversely correlated with the risk of CVD (Salonen et al., 2003; Wang et al., 2013). A systematic meta-analysis of 44 randomized controlled trials (RCTs) suggested that vitamin C intake improved left ventricular ejection fraction in patients with heart failure (Ashor et al., 2014). It has also been shown that vitamin C intake is associated with low blood pressure in hypertensive participants (Juraschek et al., 2012; Ran et al., 2020). In addition, vitamin C supplementation decreased cardiovascular mortality in a cohort of Spanish graduates (Martín-Calvo and Martínez-González, 2017).
The antioxidant property of vitamin C contributes to the prevention and treatment of cardiovascular disorders (Ingles et al., 2020). However, the molecular mechanisms remain elusive. Few studies have investigated the signaling pathways involved in the effects of vitamin C on CVD. A network pharmacology-based approach has been used to identify novel therapeutic targets and illustrate the molecular mechanisms of vitamin C against sepsis (Li et al., 2020a). We also elucidated the pharmacological mechanisms of the active ingredients of traditional Chinese medicines on the treatment of CVD (Huang et al., 2020). In the present study, we investigated the molecular mechanisms underlying the protective effects of vitamin C against CVD by using a network pharmacology approach. Our bioinformatics data may contribute to the development of new treatments for CVD for both clinical and basic investigators.

Prediction of Putative Targets of Vitamin C With Therapeutic Potential for Cardiovascular Disease. Swiss Target Prediction (http://www.swisstargetprediction.ch), the DrugBank database (http://www.drugbank.ca/), and Traditional Chinese Medicine Systems Pharmacology (TCMSP, http://lsp.nwu.edu.cn/tcmsp.php) were used to obtain all known targets of vitamin C. The therapeutic targets for CVD were collected using the DrugBank database, the Online Mendelian Inheritance in Man (OMIM) database (http://www.omim.org/), and the GeneCards database (www.genecards.org/). The targets of vitamin C with therapeutic potential for CVD were then identified after eliminating duplicates.

Construction of Protein-Protein Interaction Network and Topological Analysis of Vitamin C Against Cardiovascular Disease. The STRING database was used to construct a target-to-target, function-related, protein-protein interaction (PPI) network and to obtain general data (.tsv). Protein interactions with a confidence score of >0.9 were selected. The general data were imported into Cytoscape (v3.7.1) to rebuild a PPI network of vitamin C against CVD. The network analyzer in Cytoscape was used to analyze the topological parameters, including the mean and maximum degrees of freedom in the PPI network. The upper limit of the screening range was the maximum degree value in the topological data, while the lower limit was twice the average degree of freedom. The core targets were identified based on these settings.

Functional Processes and Pathway Analysis. The R packages clusterProfiler, enrichplot, and ggplot2 were used for Gene Ontology (GO) Biological Process (BP) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of the core targets. The cut-off value was set as p < 0.05.

Construction of Network Relationships. Cytoscape (v3.7.1) software was used for network visualization of the targets of vitamin C against CVD. An interaction diagram of core target-related GO/KEGG enrichment was generated. The study design is shown in Figure 1.

Known Targets of Vitamin C With Therapeutic Potential for Cardiovascular Disease. A total of 339 known target genes of vitamin C and 802 known therapeutic targets for CVD were obtained (Supplementary Tables S1, S2). A Venn diagram was plotted using online accessible tools to identify the targets of vitamin C against CVD (Figure 2). Finally, 66 targets of vitamin C with therapeutic potential for CVD were identified and then analyzed using the STRING database. The function-related PPI network is shown in Figure 3.
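The target-intersection step reduces to simple set arithmetic; the sketch below illustrates it with short invented gene lists standing in for the 339 vitamin C targets and 802 CVD targets.

```python
# Hedged sketch of the target-intersection step. The real analysis
# intersected 339 vitamin C targets with 802 CVD targets; the short
# lists here are invented placeholders.
vitc_targets = {"PIK3CA", "STAT3", "F2", "TP53", "ALB"}
cvd_targets = {"PIK3CA", "STAT3", "F2", "ACE", "NOS3"}

overlap = vitc_targets & cvd_targets   # targets of vitamin C against CVD
print(len(overlap), sorted(overlap))   # -> 3 ['F2', 'PIK3CA', 'STAT3']
```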
Topology Parameter Analysis and Identification of Core Targets. Cytoscape was used to calculate the topological parameters of the interaction network of the 66 identified targets (Figure 4). The minimum degree of freedom of the targets was one, and the maximum degree of freedom was 21. The screening criteria were set to 12 and 21 for the core targets. A total of three core targets were determined, including phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit alpha isoform (PIK3CA), signal transducer and activator of transcription-3 (STAT3), and prothrombin (F2) (Figure 5).

Gene Ontology Biological Process and Kyoto Encyclopedia of Genes and Genomes Pathway Enrichment Analysis. The GO BP and KEGG pathway enrichment analyses of the three core targets were performed using the R language. The enriched biological processes included regulation of energy homeostasis, acute-phase response, regulation of multicellular organism growth, astrocyte differentiation, negative regulation of autophagy, positive regulation of phosphatidylinositol 3-kinase signaling, positive regulation of receptor signaling pathway via JAK-STAT, positive regulation of receptor signaling pathway via STAT, acute inflammatory response, regulation of phosphatidylinositol 3-kinase signaling, regulation of receptor signaling pathway via JAK-STAT, multicellular organism growth, regulation of receptor signaling pathway via STAT, phosphatidylinositol 3-kinase signaling, platelet activation, regulation of generation of precursor metabolites and energy, receptor signaling pathway via JAK-STAT, receptor signaling pathway via STAT, phosphatidylinositol-mediated signaling, and inositol lipid-mediated signaling (Figure 6). The enriched molecular signaling pathways of the core targets were involved in acute myeloid leukemia, non-small cell lung cancer, the prolactin signaling pathway, pancreatic cancer, EGFR tyrosine kinase inhibitor resistance, PD-L1 expression and the PD-1 checkpoint pathway in cancer, the AGE-RAGE signaling pathway in diabetic complications, insulin resistance, the HIF-1 signaling pathway, growth hormone synthesis, secretion and action, platelet activation, the FoxO signaling pathway, measles, signaling pathways regulating pluripotency of stem cells, the phospholipase D signaling pathway, hepatitis C, the JAK-STAT signaling pathway, hepatitis B, the chemokine signaling pathway, and Kaposi's sarcoma-associated herpesvirus infection (Figure 7).

Network Construction. The network of the core targets of vitamin C with therapeutic potential for CVD and the interaction diagram of the core target-related pathways were constructed (Figure 8).
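The degree-based hub screen described above can be sketched as follows; the edge list is invented for illustration (the real network was exported from STRING and analyzed in Cytoscape), and a node is kept when its degree lies between twice the mean degree and the maximum degree.

```python
# Hedged sketch of the degree-based hub screen on a toy PPI graph.
import networkx as nx

edges = [("STAT3", "PIK3CA"), ("STAT3", "F2"), ("STAT3", "ALB"),
         ("STAT3", "ACE"), ("STAT3", "NOS3"), ("STAT3", "TP53"),
         ("PIK3CA", "F2")]                     # invented interactions
G = nx.Graph(edges)

degrees = dict(G.degree())
lower = 2 * sum(degrees.values()) / len(degrees)   # twice the mean degree
upper = max(degrees.values())                      # maximum degree

hubs = [node for node, deg in degrees.items() if lower <= deg <= upper]
print(hubs)   # -> ['STAT3'] for this toy network
```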
FIGURE 1 | A schematic diagram of the network pharmacology approach for the identification of core targets, major hubs, the PPI network, biological processes, and key pathways of vitamin C acting on CVD. All known targets of vitamin C and CVD were predicted using online databases. Then the targets of vitamin C with therapeutic potential for CVD were identified. After the construction of a PPI network and the identification of the hub targets of vitamin C acting on CVD, GO BP and KEGG pathway enrichment analyses were performed. Finally, a network of vitamin C-CVD-GO-KEGG was generated.

FIGURE 2 | The targets of vitamin C with therapeutic potential for CVD were identified using a Venn diagram. Swiss Target Prediction, the DrugBank database, and TCMSP were used to obtain all known targets of vitamin C. The DrugBank, OMIM, and GeneCards databases were used to collect all therapeutic targets for CVD. A total of 66 targets of vitamin C with therapeutic potential for CVD were identified.

FIGURE 3 | A PPI network of the targets of vitamin C with therapeutic potential for CVD was generated. A PPI network of the 66 targets was imported into the STRING database and visualized with the interaction score set to the highest confidence (0.900).

FIGURE 4 | A protein network of known targets of vitamin C with therapeutic potential for CVD was generated by Cytoscape. The targets were imported into Cytoscape for the analysis of topological parameters, and the diagram was generated based on the topological analysis.

FIGURE 5 | The hub targets of vitamin C with therapeutic potential for CVD were identified using the Cytoscape software. The minimum and maximum degrees of freedom of the targets were 1 and 21, respectively. To screen the hub targets, the degree of freedom was set to >12.

DISCUSSION

Oxidative stress and inflammation play a crucial role in the onset and progression of atherosclerosis and the development of cardiovascular events (Pignatelli et al., 2018; Ilatovskaya et al., 2019). Inflammation promotes oxidative stress, which in turn leads to further inflammation, therefore establishing a self-perpetuating cycle. Methotrexate and doxycycline, two medications with antioxidant properties, exhibit therapeutic benefits in patients with CVD, possibly by inhibiting inflammation and breaking the self-perpetuating cycle (Clemens et al., 2020). Numerous studies have shown that reduced inflammation attenuates CVD. Statins reduce both LDL cholesterol and high-sensitivity C-reactive protein (hs-CRP), a biomarker of inflammation (Ridker et al., 1999; Albert et al., 2001). Moreover, a clinical trial showed that treatment with canakinumab, a monoclonal antibody targeting interleukin (IL)-1β, significantly downregulated hs-CRP and reduced the occurrence of cardiovascular events (Harrington, 2017; Ridker et al., 2017). However, the treatment efficacy of other inflammation inhibitors for CVD needs to be evaluated in future clinical studies. The role of oxidative stress and inflammation in CVD remains to be deeply and systematically investigated. The identification of new therapeutic targets and the elucidation of novel regulatory mechanisms of existing drugs enable the development of treatments for CVD. The network pharmacology approach is an innovative method to systematically explore the mechanisms of the effects of drugs on disease. It combines bioinformatics and network analysis and integrates multiple sources of information. Hence, network approaches can accurately determine potential interactions between drug and target and promote drug discovery. In this study, we used network pharmacology to identify the core targets of vitamin C with therapeutic potential for CVD and performed GO BP and KEGG enrichment analyses. These bioinformatics findings can reveal the proposed pharmacological mechanisms and support new ideas for CVD treatment. Previous studies have extensively reported the beneficial effects of vitamin C on CVD.
However, large RCTs did not identify any benefit of vitamin C (and E) intake on CVD (Cook et al., 2007; Sesso et al., 2008). Furthermore, a meta-analysis of RCTs showed that vitamin C supplementation has no effects on major cardiovascular outcomes such as CVD mortality and myocardial infarction (Al-Khudairy et al., 2017). There is still a need to identify the role of vitamin C in mitigating cancer treatment-induced heart failure (Malik et al., 2020) and other risks of CVD, including cardiomyopathies, arrhythmias, and myocarditis/pericarditis in long-term cancer survivors (Berretta et al., 2020). However, the evidence is limited because many trials of vitamin C (and E) supplementation did not report protective effects on cardiovascular risk markers such as blood pressure, arterial stiffness, endothelial function, or left ventricular ejection fraction. These risk markers may be more sensitive to vitamin C (and E) supplementation because they reflect the damaging effects of inflammation and oxidative stress (Pashkow, 2011). Moreover, they occur in the early stage of CVD pathogenesis. The potential protective effect of vitamin C intake on CVD remains controversial in both preclinical and clinical studies (Morelli et al., 2020). Therefore, to promote novel drug discovery, the mechanisms underlying the therapeutic effects of vitamin C on CVD need to be clearly determined. Here, three core targets of vitamin C against CVD were identified: PIK3CA, STAT3, and F2. The GO analysis revealed that the most enriched BP was positive regulation of the STAT signaling pathways, which may play a critical role in CVD. The JAK-STAT pathway, which can be activated by a range of cytokines (e.g., IL-6, IL-2, and interferons), regulates the survival, proliferation, and differentiation of various types of cells (Speirs et al., 2018). In addition, excessive activation of JAK-STAT signaling is a key driver of many chronic inflammatory diseases, including CVD (Jones et al., 2011; Sansone and Bromberg, 2012). The four JAKs (JAK1-3 and Tyk2) comprise a family of cytoplasmic tyrosine kinases. Inflammatory cytokines, such as IL-6, promote the activation of JAKs, leading to subsequent phosphorylation of gp130 on Tyr residues, which generates docking sites for SH2-domain-containing STAT proteins (Schaper and Rose-John, 2015). JAKs induce Tyr phosphorylation of STATs within their SH2 domains (Tyr701 on STAT1, Tyr705 on STAT3). The activated STATs translocate to the nucleus to drive the transcription of inflammatory target genes (Mao et al., 2005; O'Shea et al., 2013). Vitamin C is a potent anti-inflammatory drug (Williams et al., 2019) that has been used to treat inflammatory diseases, such as obesity, sepsis, and pneumonia (Gorton and Jarvis, 1999; Ellulu, 2017; Marik, 2018). In the present study, we showed that the therapeutic effects of vitamin C on CVD were associated with its anti-inflammatory property, more specifically, the inhibition of JAK-STAT signaling. Therefore, drugs targeting JAK-STAT signaling may be developed for the treatment of CVD. In agreement with the GO analysis, the KEGG pathway enrichment analysis also showed that the therapeutic effects of vitamin C on CVD were mainly associated with the JAK-STAT signaling pathway, as well as the PD-1 pathway, EGFR tyrosine kinase inhibitor resistance, the FoxO signaling pathway, and the chemokine signaling pathway (Li et al., 2020a; Li et al., 2020b). There is crosstalk among the STAT, PD-1, EGFR, and FoxO signals.
It was reported that the upregulation of EGFR affected COPD airway epithelial cells by regulating the FOXO signaling pathway (Ganesan et al., 2013). JAK-STAT and EGFR together specify a population of cells called the posterior follicle cells to establish the embryonic axes (Wittes and Schüpbach, 2019). The expression of PD-L1 was induced by EGFR and JAK2/STAT1, while the inhibition of JAK2 repressed the upregulation of PD-L1 in tumor cells and enhanced their immunogenicity (Concha-Benavente et al., 2016). A clinical study showed that patients with EGFR mutation had increased PD-L1 expression and T cell infiltration (Chen et al., 2020). The activation of STAT resulted in upregulated PD-L1 expression and the progression of lymphoma (Estrada et al., 2018). The activation of EGFR was implicated in CVD via the regulation of blood pressure, endothelial dysfunction, neointimal hyperplasia, atherogenesis, and cardiac remodeling (Makki et al., 2013). In addition, FOXOs were identified as therapeutic targets in several major cardiac diseases, such as ischemic cardiac diseases, diabetic cardiomyopathy, and myocardial hypertrophy (Xin et al., 2017). The expression level of PD-1 can affect the degree of inflammation and the state of coronary plaques in atherosclerosis (Sun et al., 2020). The crosstalk among these signaling pathways results in excessive inflammation of the heart and vessels. In this study, we demonstrated that these signaling pathways play crucial roles in the progression of CVD. Network pharmacology provided a better understanding of the role of inflammation in CVD at a systems level. Further studies are needed to determine the role of these pathways in CVD and the mechanisms involved in the regulatory processes. The KEGG pathway enrichment also showed that the chemokine signaling pathway was implicated in the therapeutic effects of vitamin C against CVD. Secreted chemokines cause epidermal damage by attracting proinflammatory immunocytes (Bellón, 2019). The role of immunocytes in CVD has been studied. Neutrophils promote atherosclerosis at different stages, including atherogenesis, plaque destabilization, and plaque erosion (Silvestre-Roig et al., 2020). They are also involved in the pathogenic and repair processes in heart failure, myocardial infarction, and neointima formation. A number of experimental and clinical studies have indicated that T cells protect against cardiovascular disease, particularly atherosclerosis and abdominal aortic aneurysm (Meng et al., 2016). Other clinical researchers have suggested that therapeutic strategies targeting B cells might exhibit beneficial effects on CVD (Porsch and Binder, 2019). Our data indicate that blockage of the chemokine signaling pathway and the immune response could reduce the progression of CVD. Because of the crosstalk among inflammation, oxidative stress, the immune response, and chemokines, combination drug treatment may be a more useful approach for the treatment of CVD.

CONCLUSION

In summary, we identified three core targets (PIK3CA, STAT3, and F2) of vitamin C with therapeutic potential for CVD and showed that the protective effects of vitamin C on CVD can be attributed to its anti-inflammatory property. The most enriched pathways were the JAK-STAT, PD-1, EGFR, FoxO, and chemokine signaling pathways. Our findings may guide further pharmacological investigations on the therapeutic effects of vitamin C on CVD and the discovery of new drugs for CVD treatment.

Limitations. There are some limitations to the current study.
First, the targets of vitamin C with therapeutic potential for CVD were collected using public databases, which may lead to inaccurate results. Second, the core targets identified in the current study need to be further validated. Third, as CVD is a complex process involving different types of diseases, the roles of these targets and pathways in specific pathological conditions need to be evaluated in both basic scientific and clinical studies.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS

NZ designed, drafted, and revised the manuscript. BWH performed the research and wrote the manuscript. WBJ collected and analyzed the data. All the authors read and approved the final manuscript.